That Neuroscience Guy - The Neuroscience of Analytical and Intuitive Thinking
Episode Date: March 26, 2025
In everyday thinking, we are making decisions that can range from intuitive, based on 'gut hunches', to analytical, based on prior experience and critical thinking. In today's episode, we discuss the neuroscience behind balancing analytical and intuitive decision making.
Transcript
Hi, my name is Olave Krigolson and I'm a neuroscientist at the University of Victoria.
And in my spare time, I'm that neuroscience guy.
Welcome to the podcast.
So today I want to talk to you about thinking and specifically two types of thinking, intuitive
thinking and statistical thinking.
Now I think all of you probably get what I mean when I say intuitive thinking, but I'll
talk about that in a bit.
But what do I mean when I talk about statistical thinking?
So I'm going to start with an example.
In a lot of medical schools, medical students are trained to make what's called
a differential diagnosis.
So when they're presented with a case, they list the different possible outcomes:
given the data we have, what might this person have?
And then they're supposed to assign a probability to each one.
So it's, you know, a 30% chance it's this,
it's a 20% chance it's this,
it's only a 5% chance it's this.
And then they're supposed to go
with the most probable cause and see if that's it.
Now of course sometimes they order additional tests.
Anyone who's been through the medical system
knows that process.
And the reason they're doing those tests
is to change those probabilities
or update those probabilities.
So in other words,
they're actually being trained to use Bayesian logic. All right, so what is
Bayesian logic? Well, Bayes' theorem basically says that you can
update the probability of an event by incorporating new information. So that's
what the medical students are doing in the example.
They have a probability for a specific outcome,
and then what they do is they update that probability
by incorporating new information.
And that's why they do the additional tests,
is to get that new information.
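For reference, the updating rule being described here is Bayes' theorem in its standard form:

```latex
% Bayes' theorem: the posterior probability of a hypothesis H
% (e.g., "this patient has a stomach ulcer") given new evidence E
% (e.g., a test result).
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

Here P(H) is the prior probability, P(E|H) is how likely the evidence is if the hypothesis is true, and P(H|E) is the updated, posterior probability.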
Now Bayes' theorem is named after an 18th century
mathematician, Thomas Bayes,
and it's actually used quite commonly.
It's used in finance to calculate or update
risk evaluations, for instance.
So let's go through a few other examples
of Bayesian decision-making theory.
So let's make it more specific.
Imagine that a doctor has a patient with a certain set of symptoms, and the most likely cause of those symptoms is a
stomach ulcer. The doctor assigns a probability that it is a stomach ulcer and then runs
more tests to assess whether or not that's the case. And when they run those tests, they're basically updating probabilities,
so they might rule some things out because the new information makes the probability zero.
So if the doctor tells you, well, we've ruled this out based on a test, that
means the new information puts the probability at zero, and it might
increase the likelihood it's something else. And in certain situations, they might have two things with almost equal
probabilities, and they'll keep running tests over and over again until one
becomes more likely than the other.
So they're using a Bayesian decision process.
At least that is the idea.
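To make that concrete, here's a minimal Python sketch of that kind of diagnostic updating. The conditions, priors, and test likelihoods are made-up illustrative numbers, not real medical data:

```python
# Minimal sketch of Bayesian updating over a differential diagnosis.
# All numbers are made-up illustrative values, not medical data.

# Prior probabilities for the candidate diagnoses.
priors = {"stomach ulcer": 0.5, "gastritis": 0.3, "gallstones": 0.2}

# Hypothetical likelihood of a positive test result under each diagnosis.
likelihood_positive = {"stomach ulcer": 0.9, "gastritis": 0.4, "gallstones": 0.05}

def update(priors, likelihoods):
    """Bayes' theorem: posterior is proportional to likelihood times prior."""
    unnormalized = {h: likelihoods[h] * p for h, p in priors.items()}
    total = sum(unnormalized.values())  # P(evidence), the normalizer
    return {h: v / total for h, v in unnormalized.items()}

posteriors = update(priors, likelihood_positive)
print(posteriors)
# The ulcer hypothesis rises to about 0.78; gallstones drop to about 0.02.
```

Each new test result just becomes another round of the same update, with the posteriors from one round serving as the priors for the next.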
Now, that might be a complicated example.
So let's just talk about my favorite thing, pizza.
Let's say you're trying to establish
which is the best pizza restaurant in your hometown.
So you go out and you try pizza
at five different restaurants.
And you assign a probability to each: if I go
back to this restaurant, there's a 40% chance I'm going to be happy;
at this other restaurant, there's an 80% chance I'm going to be happy. And
every time you go to a restaurant, you update that probability using
Bayesian logic, based on whether the visit was better or worse than expected.
We talked about prediction errors a long time ago,
but this isn't quite the same thing, because in this case you're updating probabilities.
And the idea is, you'll have this sort of Bayesian statistical map of the world,
and you will use that to decide what is the best possible way to do things. Another example would
be driving home. You move to a new city and you've got three routes
to choose from.
You assign an initial probability to each route.
Maybe they're each equally probable.
And then as you try the routes out,
you update the probability that the route
will get you home quickly.
And again, that's an example of using Bayesian logic.
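If you wanted to simulate that kind of visit-by-visit updating, one standard textbook way to formalize it is a Beta-Bernoulli model, where each good or bad experience nudges your estimate. The episode doesn't specify a particular model, so this is just an illustrative sketch:

```python
# Sketch of experience-based updating with a Beta-Bernoulli model,
# one standard way to formalize "update the probability every visit".
# The episode doesn't specify a model; this is illustrative only.

class BetaEstimate:
    """Tracks P(good experience) as a Beta(alpha, beta) distribution."""

    def __init__(self, alpha=1.0, beta=1.0):
        # Beta(1, 1) is a uniform prior: no opinion before the first visit.
        self.alpha = alpha
        self.beta = beta

    def update(self, good_visit):
        # Each visit counts as one Bernoulli observation.
        if good_visit:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def mean(self):
        # Posterior mean estimate of the chance you'll be happy.
        return self.alpha / (self.alpha + self.beta)

pizza_place = BetaEstimate()
for visit in (True, True, False, True):  # three good visits, one bad
    pizza_place.update(visit)
print(f"{pizza_place.mean:.2f}")  # 0.67, i.e. Beta(4, 2)
```

The same structure works for the commute example: one estimate per route, updated after every drive home.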
And there's lots of research that supports
that people do this.
You know, in one study, they basically asked people
to make predictions about the duration
or extent of everyday events, right?
Things such as human lifespans
and how much money a movie will make at the box office.
And the results of the study basically confirmed this:
people were assigning some initial probability,
but as they got new information
about how long humans live,
or about how well a movie was doing,
they would update their probabilities.
So that is Bayesian thinking.
Now what's the alternative?
I said at the outset, another way of thinking
is intuition or intuitive thinking,
or what we refer to more commonly as a heuristic.
So what's a heuristic?
Well, in psychological terms, a heuristic
is basically an automatic response to a given situation.
You could call it a rule of thumb.
And I'm going to give you a couple of examples of common heuristics that people use.
One that I know a lot about is called the scarcity heuristic.
And the way the scarcity heuristic works is quite simply,
if you believe something is rare, you assign more value to it.
And this is kind of the case with diamonds, right?
Diamonds actually aren't as rare as people think they are, but the perception that the
diamond industry wants to put forth is that diamonds are rare and thus they're valuable.
You know, the classic one dates back quite a while because it goes back to marbles. And
most kids these days don't play marbles anymore.
In fact, I wouldn't even know where to go to buy marbles, but most of us know what a marble is.
And in some original studies, they basically found that there were these rare marbles,
like a pure white marble, and people would actually assign a lot of value to that marble,
even though it had the same practical value as the other marbles; it was just rare.
And we did a research study with this, with EEG, so brain waves.
We basically created a situation where people had to choose between outcomes,
some of which were quite common and some of which were quite rare.
What they didn't realize is that these were basically gambles.
What they actually saw was a bunch of green squares and a couple of blue squares.
They had to select a square, and that selection was a gamble that they either won or lost.
What was interesting is that when they gambled on the less common blue squares,
the brainwave response to wins was significantly larger than when they gambled
on the more common green squares,
even though the reward was the same amount.
So just because it was rare,
it seems like the brain was biasing the outcome
and saying, well, this has to be valuable
because it's rare.
Now, there's a lot of other heuristics
that people use when they're thinking.
There's the availability heuristic.
And it basically means that if you have recent, easily recalled information
about something, you overestimate the probability
of that kind of event.
So a common example of this is if you recently heard
about a shark attack on the news,
you probably are gonna overestimate the likelihood
of encountering a shark when you go swimming,
even though statistically shark attacks are quite rare.
You know, this is the one about flying as well, right?
People think flying is dangerous.
Well, the actual statistics are quite clear.
You're more likely to die walking across the street
or driving a car than you are flying.
Another heuristic that's out there
is called the representativeness heuristic.
And basically, this involves making a judgment
based on how similar something is
to a stereotype or prototype.
And it basically means you ignore other information out there.
So one example of this: if someone is described as quiet and a book lover,
and you're given a choice of whether they're a librarian or a firefighter,
you're probably going to go with librarian, even though statistically
there are far more firefighters than there are librarians, and there are lots of firefighters
who are quiet, book-loving individuals.
But because you've been prompted with that description, you fall into a stereotypical
judgment, and that's another heuristic, the representativeness heuristic. The worked example below shows what ignoring those base rates costs you.
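Here's a worked Bayes calculation with made-up illustrative numbers (real occupational counts vary by country): suppose firefighters outnumber librarians ten to one, and the quiet-book-lover description fits 60% of librarians but only 10% of firefighters.

```latex
% Posterior probability the person is a librarian, given the description.
% Priors: librarian 1/11, firefighter 10/11; likelihoods 0.6 and 0.1.
P(\text{librarian} \mid \text{description})
  = \frac{0.6 \times \frac{1}{11}}{0.6 \times \frac{1}{11} + 0.1 \times \frac{10}{11}}
  = 0.375
```

Even with a description that strongly fits the stereotype, the base rates still tip the answer toward firefighter; the heuristic fails precisely because it throws that prior away.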
Another one that's out there is confirmation bias.
Confirmation bias is basically when you seek out information that
confirms your existing beliefs and you ignore information that contradicts them.
So a good example of this is politics. Here in Canada, we have more than two political parties, but there are
two big ones running in the election: the Liberals and the Conservatives.
And if you're a strong liberal, you basically only watch news channels that support the
liberal position because news channels have biases too.
And some news channels are more liberal and some are more conservative.
And if you're more conservative, then you watch news channels or listen to news that
supports the conservative position.
So that's confirmation bias.
Another one that's out there, which I always love because you see it all the time, is called
hindsight bias.
And this is when people basically claim that they knew an event was going to happen even if
they didn't.
So after a stock market crash, for instance, you might say that you knew it was going to
happen even if you didn't predict that it was going to happen.
That's hindsight bias.
People do this all the time.
You know, a significant other breaks up with you and you didn't see it coming, but you're
wandering around saying, yeah, I knew that was going to happen,
I totally saw it coming, when you actually didn't.
The last one I'll give you is the overconfidence bias.
I love this one as well, because you see it all the time.
This is when you overestimate your own abilities and knowledge, right?
One example that I see a lot at universities:
students believe they're going to easily pass a challenging exam even though they haven't studied for it.
So they assume that they're straight A students when they're actually not.
All right, because most of the students I teach aren't straight A students.
They might have been in high school, but they no longer are at college.
So they have the overconfidence bias.
So what do people actually
do? Do they rely on intuitive thinking and heuristics, or do they
do this statistical, Bayesian thinking? Well, most people in my line of work tend
to believe we spend most of our time on automatic pilot. So you just go
through your daily life relying on intuitive thinking or heuristics,
making simple choices without much thought.
However, sometimes you're forced into situations where you have to
assess probabilities, and there you probably use more of a Bayesian model.
I'm not 100% convinced that people actually do
Bayesian calculations in their brain, although, to be fair, there are a lot of
people out there who say that's what you do. But I do think there are
these situations where you slow down, the prefrontal cortex kicks in, and you go through a
more analytical thought process. And I'll end by going back to the medical example.
If you put medical students in a room
and you tell them that they have to develop
a differential diagnosis,
they will go through this Bayesian
or statistical thought process.
But we know from anecdotal reports
and from research studies
that doctors on a ward typically use heuristics. They see
a certain pattern, they make a decision on what it is, and they go
with this intuitive thinking as opposed to analytical thinking. And this is
actually a recognized problem in the medical profession. What they want
doctors to do, especially when they're tired or rushed, which is when
doctors tend to make intuitive decisions, is to stop, slow down,
and make that more statistical or Bayesian-based decision. All right, that's intuitive versus
statistical thinking. I hope you found that interesting and enjoyed it.
Don't forget to check out the website thatneuroscienceguy.com.
There's links to Patreon where you can support us. Remember a dollar a week, five
dollars a month, anything helps and it only goes to graduate students. We have
another student now, Jen, who's helping with the podcast. She's responsible for
all the cool posts on Instagram at @thatneurosciguy; check it out. We've got an
Instagram feed and there's information there.
And Matt, of course, doing all the sound editing.
So the money goes to them.
I don't take a penny.
You're helping students get through university
if you donate.
There's also links to our Etsy store.
We had a talk just today about how we finally
need to double down
and make some more shirt designs.
So we're going to try.
I've been saying that for years now, but we're going to try.
That's all I can say.
And of course, send us ideas.
We really do want to know what you want to know about the neuroscience of daily life.
Today we had a meeting, the three of us, and we planned out the next 15 episodes and they
all came from you, the listeners.
So reach us on Threads or X at @thatneurosciguy,
or of course at our email address,
thatneuroscienceguy@gmail.com.
And of course the podcast.
Thank you so, so much for listening.
We're gonna keep it going.
I know we can be inconsistent,
but we're doing our best to deliver content
and I hope you
enjoy it.
So thank you for listening to the podcast and please subscribe.
My name is Olave Krigolson and I'm that neuroscience guy.
I'll see you soon for another full episode of the podcast.