The Joe Walker Podcast - Rational Minds Part 5: Heuristics Make Us Smart - Gerd Gigerenzer
Episode Date: December 21, 2020. Gerd Gigerenzer is a German psychologist and director emeritus of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development. See omnystudio.com/listener for privacy information.
Transcript
Men, wrote the Scottish journalist Charles Mackay, think in herds.
It will be seen that they go mad in herds, while they only recover their senses slowly and one by one. My name is Joe Walker and three years ago, trying to understand
Australia's obsession with residential real estate, I began researching housing bubbles.
I read almost everything I could find. Increasingly, I came to view the question of bubbles
as not just a topic of interest to the fortunes of my country, but a vehicle to explore deep questions of human nature.
There's a long literature that gleefully lays bare the madness of crowds, from Mackay to Galbraith, Minsky and Kindleberger to Shiller and Chancellor.
But it left me with a nagging question.
No one in a bubble ever thought she was crazy.
So what is going on here?
In this series, I'm using the prism of financial bubbles to tackle an eternal question. What does
it mean to be a rational person? I'll be guided by five world experts who will show me that we're
not quite so befuddled as popular narratives would have us believe.
I'm inviting you to come with me on this journey
to reconsider what you might have been told
and to give rational minds a second chance.
I used to be kind of sane
I used to act kind of normal
I couldn't complain
Now it's never the same
I'd like to express a very rational thank you to Blinkist for sponsoring this series on Rational
Minds. You probably know about the product by now, but I did want to add that I recently discovered
that Blinkist also has audiobooks on their app, which you can purchase at member
prices. I've been listening to Henry Hazlitt's Economics in One Lesson, and I noticed that Blinkist also has John Quiggin's Economics in Two Lessons, which I'll be sure to listen to
after I finish Hazlitt's book. If you want to sign up to Blinkist, go to Blinkist.com slash swagman, where you can get 25% off an annual subscription as well as trying Blinkist Premium free for seven days.
That's Blinkist.com slash swagman.
You're listening to the Jolly Swagman Podcast.
Here's your host, Joe Walker.
Ladies and gentlemen, boys and girls, swagmen and swagettes,
welcome to the final episode in this series on rational minds.
The point of this series has not been
to deny the existence of irrationality altogether.
Rather, it has attempted to swing the pendulum back,
to push it away from the claim that bubbles are phenomena
that are quintessentially about individual madness,
to question what is rational.
The exact mixture of rationality and irrationality that
generates bubbles, their precise recipe, remains beyond me. But two themes have emerged in this
series so far. Number one, most people, including those who participate in bubbles, are just trying
to do their best in an uncertain world. And two, we are social creatures,
and bubbles are best thought of as social phenomena. I hope this series has made you think.
Perhaps it has raised more questions than answers. If so, that is okay. I feel you,
we're in the same boat. And so far, we haven't really directly tackled the question, what does it mean
to be a rational person? Well, this episode will hopefully deliver some of the goods.
Now, behavioral economics claims to have taken a great leap forward in uncorking the mysteries
of human rationality, or rather, irrationality. Wikipedia's list of cognitive biases contains no fewer than 185
misfirings of the mind, from the availability bias to the zero-sum bias. But how does an animal so
dumb that its brain is home to a zoo of cognitive biases become so ecologically dominant that it
can stick every other animal in actual zoos? This episode seeks to answer that
question, and with one of the world's most distinguished thinkers on the topic, no less.
Gerd Gigerenzer is a German psychologist famous for his debates about rationality
with Daniel Kahneman and Amos Tversky. Gerd is Director Emeritus of the Center for Adaptive
Behavior and Cognition at the Max
Planck Institute for Human Development in Berlin, and he is probably the world's leading
authority on ecological rationality and how heuristics make us smart.
Without much further ado, please enjoy this conversation with the great Gerd Gigerenzer.
Gerd Gigerenzer, welcome to the show.
I'm glad to be with you.
Gerd, I've been looking forward to this conversation for quite some time,
both for selfish reasons and for altruistic reasons. Selfishly, I'd like to pick your brain on some questions I've been wrestling with for a while. Altruistically, I'm excited for my audience to hear your ideas, especially when they might
have heard only one side of the story about heuristics. But it strikes me that despite
being reasonably familiar with your work, I know next to nothing about you, except that you're
German and you play the banjo so I thought we could
begin with your background. Gerd, where were you born?
I was born in Lower Bavaria, that's at the end of the world, and grew up in Munich. That is a beautiful city where many people would like to live.
What are some of your early memories of growing up in Munich?
It's the Oktoberfest, of course.
It was very different from today: being a free little boy who could run around all day, at least after doing homework, and there was no phone that parents used to check where you are and what you're doing.
So it was freedom.
Why were you originally attracted to studying psychology?
Oh, I think that happened to many people.
I had a wonderful teacher at high school; he was a biologist, but he also had a diploma in psychology.
And he always told us something that we found very exciting, so I decided, why not study that field?
What did you find most exciting about it?
At first I knew next to nothing. I entered psychology having read Freud, Jung and Adler, only to find out that was no longer up to date. And what psychology was at the beginning was statistics, which I hadn't expected. So I learned statistics and only later realized how important that is.
You received your PhD from the University of Munich in 1977
and became a professor of psychology there the very same year.
Then in the early 1980s, you spent a life-changing year
at the Center for Interdisciplinary Research in Bielefeld, Germany,
studying with a group of philosophers and historians the probabilistic
revolution that occurred in the 17th through 19th centuries.
What did you learn during that year that changed your life?
No, I learned the importance of interdisciplinary research.
That was a group consisting of people from many, many fields. They were all brought
together to study how ideas of probability and chance have transformed the sciences,
but also everyday life. And this year in Bielefeld, at the Center for Interdisciplinary Research, also transformed my personal life, because I met my wife there. She was then an assistant professor at Harvard.
And so we got together by chance in a year on chance.
But interdisciplinary research, that is a lesson that I learned. There are so many topics, like rationality and even probability, which do not respect the borders we have erected around our hometowns of disciplines. And the disciplines themselves are an invention of the 20th century. They didn't exist in this way before.
And I later used this insight when I became a director
at the Max Planck Institute for Human Development
to set up a group that consisted of people from psychology, economics, business, mathematics, computer science, AI, engineering, philosophy, evolutionary biology, and others who were really interested in learning what other disciplines know and how they approach the idea of rationality.
Are most academics territorial or do they welcome interdisciplinarity?
There are two ways to do science.
One is territorial.
You identify with a discipline, or better, a subdiscipline, as tiny as possible, and try to become the king.
The other way of doing science is that you fall in love with a topic. If you only have a specialized view about something, then often you don't even know the history of your own field. And that means one doesn't understand: why am I asking this question?
And why am I running experiments to resolve this
question? And the question has been handed down
by others. And in order to be
innovative, you have to change the questions. It's not
about the answers in the first place,
it's about finding the right kind of questions.
What was the probabilistic revolution?
Now, the probabilistic revolution is the transition from a world of certainty into one of uncertainty.
So physics has been seen at the time of Newton as a discipline that's about certainty.
And although Newton knew about probability, he did not apply it to physics.
So Newton was also the head of the Mint in his other job, where he used techniques very similar to modern statistical quality control to find out whether a coin still had the right gold content. And here he used statistics, but not in science.
And the probabilistic revolution is that probability conquered sciences itself,
and not just the methods of science.
So in economics, it took a long time. In psychology it also took some time. For instance, statistical methods were used in psychology before, say, 1950 to test hypotheses, but not to understand the processes in the mind.
So the idea that the mind could be a kind of intuitive statistician did not occur to
anyone in psychology before about 1950. So Piaget and Inhelder were among the first, with their book on the development of probabilistic thinking in children, published in 1951 in French.
It took more than 20 years until this research was translated into English.
Very different from Piaget's other books. So what we think today is common sense, that the mind would be an intuitive statistician or a kind of computer, and that cognition is computing, was absent before 1950.
I'm going to ask a very innocent-sounding question,
but why is the world that we live in uncertain?
Why is the world better characterized as a large world
rather than a small world, to use Savage's terminology?
Yeah.
So the probabilistic revolution is about taming uncertainty, taming uncertainty by probability theory. That meant you could only tame part of uncertainty by probability theory, and you couldn't tame the rest. That was always clear until, in the last century, some people started thinking that probability theory applies to every kind of uncertainty, for instance some kinds of subjective Bayesianism.
So the distinction is between situations where we can calculate the probabilities, or at least estimate them reliably, and those where this is not the case. The tools you can use to tame risk are not the same ones that you use in the case of uncertainty.
So probability theory is the tool to deal with risks, but it gives you only limited ideas if you apply it to situations of uncertainty.
So let's define that clearly, because most of the time, risk and uncertainty are used interchangeably.
That's the original sin.
So, in Savage's words, a small world, that is a world of risk, is defined like this: you have full knowledge of all future states, the exhaustive and mutually exclusive set of future states of the world, and also full knowledge of all of their consequences and, hopefully, the probabilities.
So if you play roulette, you are in a situation of risk. The future states that can happen are the numbers 1 to 36 and the green zero, and you know the consequences and the probabilities. Playing roulette or lotteries, you do not need anything beyond probability theory. You do not need heuristics. You don't need intuition. You don't need to know anything more. But in situations of uncertainty, that's different.
So here, the set of future states may not be known, their consequences may not be known, or the probabilities may not be
known. That's often called radical uncertainty if the future states or consequences are not known,
as opposed to ambiguity if only the probabilities are not known. And here, and I'm very interested
in what tools do people have and use in order to deal with uncertainty, as
opposed to just probability theory.
Does uncertainty sit on a continuum or is it binary?
For example, could we say something like stock markets are more uncertain than housing markets?
Would that make sense?
Yeah, that's true.
The distinction between risk and uncertainty is a continuum.
So you can calculate some things, but others not.
So in a Las Vegas casino,
you can calculate and also hedge the risks by having many tables, but the great losses occurred because of unforeseen things happening.
Say when a tiger attacked, or some person dynamited the casino. These are all unforeseen things. So yes, it's usually a continuum. And basically, you need both: you need to look at facts, at numbers, in a typical situation, but also realize that the numbers you have are numbers from yesterday, and the future may be different.
That's the limit of big data.
If you're in a stable world where the future is like the past, use big data, rely on it,
fine-tune your algorithms about the past and try to optimize.
You always optimize relative to data on the past and assumptions.
But the more you are in a situation of uncertainty,
so investment is typically a situation of uncertainty,
much of healthcare and diagnosis,
or a corona crisis.
We are living in it at the moment and experiencing it. Nobody knows how it will go on.
Or just to
find the best romantic partner.
These are all situations of uncertainty
where unforeseen
things can happen.
And here
you just can't
calculate the future.
And if you do, if you confuse a situation of uncertainty with one of risk, then that has a name.
That's the turkey illusion.
It goes back to a story by the philosopher Russell,
and Nassim Taleb popularized it,
and it's an important concept.
So the turkey illusion: assume you are a turkey. It's the first day of your life. A man comes in and you fear he may kill you, but he feeds you. The next day, the man comes again; you fear he might kill you, but he feeds you. The third day, the same thing. If you use Bayesian updating as a turkey, then every day the probability that the man will feed you and not kill you goes up a little bit, and on day 100 it is higher than ever before. But it's the day before Thanksgiving, and you are dead meat.
So the turkey missed an important piece of information: it was not in a world of risk. Turkeys can perhaps be forgiven, but it's people who commit the turkey illusion. If you look at what happened before the last financial crisis, the optimism increased and increased until shortly before the crisis happened, and the reason is that similar mathematical models were being used, as in the case of the turkey's Bayesian updating, which create an illusion of certainty until everything breaks down. So we need to be aware of the turkey illusion and take uncertainty seriously.
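To make the turkey's updating concrete, here is a minimal sketch in Python. It assumes, purely as an illustration, that the turkey updates with Laplace's rule of succession; the conversation doesn't name a specific updating rule, so this is just one simple way it could be modelled.

```python
# A minimal sketch of the turkey's updating, assuming it uses Laplace's rule of
# succession: after being fed on every one of n days, the estimated probability
# of being fed again tomorrow is (n + 1) / (n + 2).

def probability_fed_tomorrow(days_fed_so_far: int) -> float:
    """Rule of succession: P(fed tomorrow | fed on every day so far)."""
    return (days_fed_so_far + 1) / (days_fed_so_far + 2)

for day in (1, 10, 50, 99):
    print(f"After day {day}: P(fed tomorrow) = {probability_fed_tomorrow(day):.3f}")

# After day 99 the estimate is about 0.990 -- the turkey's confidence peaks on
# the eve of Thanksgiving, which is exactly the turkey illusion: the model
# assumes a world of risk, so it cannot represent the unforeseen event.
```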
Give us a quick definition of Bayesian inference.
Now, Bayesian inference is named after Thomas Bayes, who wrote the paper; actually, it was his friend Price who published it. It had little impact at first, and Laplace later independently discovered it. So it is a way, in modern terminology, to calculate the probability of hypotheses given data. That's also called inverse probability. And there's a formula, which you can find in Bayes' original paper, which even in the easiest case, where you have a binary event like cancer or not cancer and a binary test, like a positive or negative mammogram, is very hard for most people to understand. I can give you an example. The point of the example will be that people have a hard time thinking with probabilities and conditional probabilities.
The second point is, that doesn't mean that people are somehow irrational.
And the reason is because thinking is ecological,
so it depends on the representation of the information.
So I give you first the same problem that a doctor faces in conditional probabilities
and hope that your mind will be clouded.
And then I'll give you the same information in a representation we developed called natural
frequencies and you will see through the problem.
Ready?
That's an exercise in Bayesian thinking.
So you are a doctor, you do mammography screening.
What you know about the population is that one out of 100 women has breast cancer.
And you also know that if a woman has breast cancer, chances that she tests positive are 90%.
And if she doesn't have breast cancer, chances that she nevertheless tests positive are 9%.
So here's a woman who just tested positive,
and she asks you, doctor, tell me, what is
the probability that I really have breast cancer?
And so what do you say?
So to repeat: a base rate of 1%, a sensitivity of 90%, and a false positive rate of 9%.
Most doctors I have worked with, and I've trained more than 1,000 doctors in their continuing medical education,
they have fog in their minds.
They don't know.
And you get answers that range from, yeah, 90% chance that you have breast cancer to 1%. So an easy way to help people to think Bayesian,
and that's part of the study of ecological rationality,
is to change the notation, the
representation of the information, rather than blaming the mind.
And there is a very simple representation where you think about not one person and probabilities,
but 100 people.
100 women go to screening, and you translate the probabilities.
We expect that one of them has cancer.
She likely tests positive.
That's the 90%.
Out of the 99 who do not have cancer, we expect another nine who test positive.
So we have about 10 who test positive.
How many of them do actually have cancer?
One out of ten.
It's not 90%, it's not 1%, it's 10%.
So that's called natural frequencies.
The general point is that to judge the rationality of people, it is not enough to give them some problems; you have to think about how the human mind evolved and how it's adapted to certain kinds of information in its environment. And probability theory is a latecomer. For most of human history, there was no information like 90% sensitivity, but there were counts. And these counts were
natural frequencies. And that can help. And we use this to teach doctors to understand evidence. And I'm very proud that the concept of natural frequencies has entered the technical terminology of evidence-based medicine. We have also convinced the Bavarian Ministry of Education, and since last year, every 11th grader in Bavaria will be taught natural frequencies. In the past they were taught Bayes, yeah? But with conditional probabilities, and 80 or 90% didn't understand and thought it's their problem: I'm not good at math. But you need to teach people representations where they can succeed. And that's one way to overcome the rhetoric of irrationality, by making people strong, boosting them. And then there's no need for nudging.
We shall come to nudging.
So that's Bayes and natural frequencies.
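For anyone who wants to check the arithmetic, here is a minimal sketch in Python of the two representations contrasted above, using the numbers quoted in the conversation (base rate 1%, sensitivity 90%, false positive rate 9%). The exact Bayesian answer is about 9%, which the natural-frequency version rounds to roughly one in ten.

```python
# A minimal sketch of the mammography example, using the numbers quoted above.

base_rate = 0.01        # P(cancer)
sensitivity = 0.90      # P(positive test | cancer)
false_positive = 0.09   # P(positive test | no cancer)

# Bayes' rule with conditional probabilities:
p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive
print(f"P(cancer | positive) = {base_rate * sensitivity / p_positive:.1%}")  # ~9.2%

# The same reasoning in natural frequencies, thinking of 100 women:
women = 100
with_cancer = women * base_rate                         # 1 woman expected to have cancer
true_pos = with_cancer * sensitivity                    # ~0.9 of her tests positive
false_pos = (women - with_cancer) * false_positive      # ~8.9 healthy women test positive
print(f"Roughly {true_pos:.0f} of {true_pos + false_pos:.0f} positives has cancer")  # 1 of 10
```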
Before we move on to ecological rationality, I'd like to take a step back
and consider four competing visions of rationality.
Firstly, we have unbounded rationality.
Secondly, we have optimization under constraints.
Thirdly, we have satisficing.
And then fourthly, we have fast and frugal heuristics.
Gerd, could you please take us through each of those visions of rationality?
Yeah.
I mean, the dominant vision is still optimization under constraints. So let's start with full rationality. Many models are being made in many fields, psychology, economics, and others, where you assume that people are basically in a world of risk, where the only question is to estimate the probabilities. So that's the one world. That's a world which is true in lotteries and in casinos and a few other things. Optimization under constraints realizes that there are constraints both in knowledge, so inside, but also outside, in information costs and so on. But then the question is also one of a kind of omniscience: if I would know the future costs of information search compared to the future benefits, when would I stop searching for information?
But actually Herbert Simon had, in his famous article that's cited for satisficing, the heuristic type of satisficing, classical models of optimization under constraints in the appendix. That has become one of the main definitions of rationality in economic models.
So, we are still in the world of risk.
Now, Simon's idea of satisficing was basically the question,
how do people behave when they are under uncertainty?
So he phrased it when the assumptions of economic theory, of neoclassical theory are not met.
And that's basically when we are not in something like a small world.
And here he pointed to a tool, a class of tools: heuristics. Simon came from computer science, or basically he was a man of so many different disciplines. He was a founder of artificial intelligence, not only of behavioral economics. So his idea of heuristics was totally different from the idea of Kahneman and Tversky. It was the idea of computer science heuristics: heuristics are tools to make computers smart, not tools that make people dumb, to put it simply. And so satisficing was used by him as a general term for everything that's not optimizing, but also for a more specific model.
Namely, for instance, if you want to buy a house, you set an aspiration level; or if you sell your house, you set an aspiration level, and if a customer comes who is willing to pay that, fine, that's it. If nobody comes, then you lower it a little bit and wait, and so it goes on. That's a heuristic that needs experience to set the aspiration level. And it realizes that we cannot foresee the entire distribution of customers and the world.
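A minimal sketch of this aspiration-level heuristic might look like the following; the prices, the size of the price cut, and how long the seller waits before lowering the aspiration level are all made-up illustrative assumptions.

```python
# A minimal sketch of Simon's aspiration-level satisficing for selling a house.

def sell_by_satisficing(offers, aspiration, step=0.05, rounds_before_lowering=3):
    """Accept the first offer meeting the current aspiration level; after waiting
    through `rounds_before_lowering` offers, lower the aspiration by `step`."""
    waited = 0
    for offer in offers:                 # offers arrive one at a time
        if offer >= aspiration:
            return offer                 # stop searching: good enough
        waited += 1
        if waited == rounds_before_lowering:
            aspiration *= (1 - step)     # be a little less demanding
            waited = 0
    return None                          # no acceptable offer arrived

# Hypothetical offers against an initial aspiration of 500,000:
print(sell_by_satisficing([400_000, 430_000, 460_000, 455_000, 480_000],
                          aspiration=500_000))   # accepts 480,000
```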
The word satisficing is a portmanteau of satisfy and suffice.
Yeah.
Did you ever meet Simon?
Yes.
What was he like?
Oh, so how to describe a man who has so many talents?
So he was a rather humble person.
He loved to argue about everything.
He also loved to be a little bit contrarian.
He was a man who I was amazed how much he knew from computer science to economics to psychology to biology to many things.
His daughter told me that she remembers him reading books, books, books, and more books. He allegedly never watched TV.
I think he wanted to use his time for something better.
And he also applied his own theory to himself.
For instance, at some point, he decided he doesn't want to waste time deciding every
day what to eat for breakfast or for lunch.
So he just fixed it, made it a habit, and ate the same kind of cheese sandwiches and other things.
So he was one of these last people who really had an immensely broad scientific education. And I think he spoke something like eight or ten languages. And he was eager to learn. A very interesting...
A polymath.
Yeah.
But sorry, I interrupted your explanation of satisficing.
Yeah.
Shall I start again?
Sure.
Okay.
Or you can continue where you left off.
Yeah.
Mm-hmm. The term fast and frugal heuristic is a term that we introduced in order to make clear that we are talking about heuristics that are strategies people use, which are typically fast and frugal, because some of the heuristics in computer science are also highly complex.
And I consider my own program on fast and frugal heuristics and ecological rationality as a continuation of Simon, from where Simon left off, because he did so many different things. It is a continuation in several respects. First, what I call the study of the adaptive toolbox extends his notion of satisficing, which is not the only thing people do, to other heuristics that help people to get through the day, that help experts to make decisions under uncertainty, and also that help organizations to organize the environment in order to improve decision making and enable innovation.
That's the descriptive part which Simon started. My research group and I, we added
a prescriptive part, namely the question,
can we identify the situations under which a given
heuristic is not only faster
and more economically, but also more accurate
in making decisions or predictions.
That was new, because even today, the standard idea is that heuristics are always second
best, some probability theory is always better, and then the question never arises, because arises because it's unthinkable that a single simple heuristic could actually outperform say
a logistic regression but we have shown that this thing happened and that happened systematically
and this is why a world of uncertainty is different from a world of risk in a world of uncertainty is different from a world of risk. In a world of uncertainty,
it often happens that less is more. That means if you use less data or less computation,
you get better predictions or better decisions. That should never happen according to standard decision theory, where there is an assumed trade-off between effort and accuracy.
So the typical argument that you can still read in the textbooks is that people use heuristics, yes, but they lose accuracy; effort is saved at the price of accuracy. It's easier, but you have to pay a price for it. Of course it's easier, but we have shown that you don't always have to pay a price for it. On the contrary, if you use complex models under uncertainty, you often have to pay a price for the complex models, not for the simpler ones. Or to put it very differently, less is more doesn't mean that the less you know, the better
you are, although those situations also exist.
But it means that usually you should acquire a certain amount of information and computation,
but if you do more, performance goes down. This is an insight that is well known in computer science, where you know that algorithms overfit if they are too complex, if they have too many parameters. But it's not always well known, or at least not implemented, in many areas of finance or economics, where models are built with lots of free parameters.
And if they don't do well, more parameters are added.
But that all adds to overfitting.
So we need models to predict the future that are sufficiently simple.
That means have few parameters in order to be robust in a situation of uncertainty.
If you are under risk, then just make it complex and use all the information. This is also an
important insight to evaluate when AI will predict well and when it will not. The big successes of AI are in chess, Go, face recognition, in stable situations. That's similar to the world of risk, Savage's small world. But the moment we have to do with human behavior, say predicting the best romantic partner for you, or even predicting the flu, like Google Flu Trends once tried, we are in situations where viruses mutate or people enter search terms for all kinds of reasons, not just because they are sick but because they are curious. So then the idea of fine-tuning on the past and hoping that the future is like the past is a fallible one.
One overfits. For instance, we have shown that a very simple heuristic that uses one data point can outperform Google Flu Trends, which uses a secret algorithm with 160 or so terms. So that's a case of less is more.
What is the one data point?
So what Google Flu Trends predicted is flu-related doctor visits in the future. And the one data point is the most recent number of flu-related doctor visits, the one you have from a week ago. That's the only data point you need. Google Flu Trends, for instance, was calibrated on something like four years of data. And it learned, among other things, that the flu is high in the winter, low in the summer, high in the winter, low in the summer. And then it was tested. And Google actually made predictions, which is rare. Most of the big data claims I see are just claims, where you haven't seen any prediction. But it did predict. And then the swine flu came out of season and it flopped. A recency heuristic is what people use in situations of uncertainty where the past cannot be trusted: you just go by the last event. A recency heuristic can follow the development of a new virus in the summer. And so that's the example.
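A minimal sketch of that recency heuristic, with made-up weekly counts rather than real flu data, could look like this:

```python
# A minimal sketch of the recency heuristic described above: forecast next
# week's flu-related doctor visits using only the most recent observation.
# The numbers are hypothetical, not the actual Google Flu Trends or CDC series.

weekly_visits = [120, 135, 150, 170, 210, 400]   # hypothetical weekly counts

def recency_forecast(series):
    """Predict the next value as simply the last observed value."""
    return series[-1]

print("Forecast for next week:", recency_forecast(weekly_visits))  # 400

# Unlike a model fitted to several seasons of history, this forecast adapts
# immediately when the pattern breaks (e.g., an out-of-season outbreak).
```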
How many different fast and frugal heuristics have you identified in the adaptive toolbox?
I would say I talk about classes of heuristics, like one-reason heuristics. So you have just one reason and ignore all the rest.
Then there's another class of heuristics which go sequentially.
If the one reason doesn't help you, then you go to a second one.
If that helps you, it's the end.
Otherwise, you go on.
These are known as lexicographic heuristics.
People go through the reasons one by one sequentially, but they still only use one reason to decide. So fast and frugal trees are a good example of that, or take-the-best.
A third class of heuristics,
they use all the information,
but they don't try to weigh.
So an example is, so if you mean variance portfolio to calculate the weights.
Or you could use what Harry Markowitz himself used when he made his investment, his own
money for the time of his retirement.
You could use, he used a simple heuristic that's called one over n that means divide your money equally
so if you have only two assets it's 50-50 three it's a third third third so
that's a heuristic that is for allocation that and it is yeah there are a number of studies who have shown that it can outperform
Markowitz optimization and but the real question is not whether it's better or worse that's a wrong
question but it's the ecological rationality question. Can we identify situations or the type of situations where 1 over n is likely to outperform Markowitz or more modern Bayesian models of investing and where it's not the case?
And to understand this, some principles on statistics are obvious here.
So the more free parameter the model has, the more error it will incur, estimation
error. And the number of free parameters is an exponential function of the number n. So
if you have only few assets, so n is small, this will be to the advantage of the complex models who estimate
covariances.
And if you're a large number, it will be to the advantage of the heuristics.
So this is the type of ecological rationality thinking.
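As a rough illustration of this point, here is a minimal sketch of 1/N allocation and of how many parameters a mean-variance model has to estimate; the asset names and wealth figure are assumptions for the example.

```python
# A minimal sketch of the 1/N heuristic, contrasted with the estimation burden
# of a mean-variance optimizer. Assets and wealth are illustrative assumptions.

assets = ["stocks", "bonds", "real estate", "gold"]
wealth = 100_000

# 1/N: split the money equally, with no estimation of means or covariances.
one_over_n = {asset: wealth / len(assets) for asset in assets}
print(one_over_n)   # {'stocks': 25000.0, 'bonds': 25000.0, ...}

# A mean-variance optimizer, by contrast, must estimate N expected returns plus
# N * (N + 1) / 2 variance/covariance terms from past data before it can weight
# anything; each estimate adds error when the future differs from the past.
n = len(assets)
print("Parameters a mean-variance model must estimate:", n + n * (n + 1) // 2)  # 14
```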
And there are more interesting things to that.
So we have now three classes.
Then there's an entire class of social heuristics.
Like, what do you do with your life?
Do the same thing as everyone else, as your peers do.
How to study decision making?
Well, you do what the others do.
And so imitation and other things like that. None of these heuristics is good or bad. That's very important. Don't look down on imitation. Imitation is one of the driving forces in culture. Without it, humankind couldn't have had the cultural evolution that it had. There's no species where infants imitate so precisely
and so generally as in humans.
And that's a big advantage.
Do you think that we need group selection
in order to explain culture and imitation?
There's a debate about that, and that would lead too far.
I would say, what you can say is that the social heuristics are extremely important, and imitation is one.
Advice-taking is another one.
The principles like tit for tat
are other ones. And these heuristics
in the social context define us. They're also glued to emotions.
So a heuristic like trust your doctor
is highly emotional.
You basically risk your own life.
And if you're betrayed, if you find out that your doctor recommended you a certain kind of drug which he knew makes you dependent, on opioids for instance, something like that, then the reactions are highly emotional. The interesting thing is that these emotions are functional, because they make sure that other people are punished and so on.
And it also shows you that our emotional fabric, the way we evolved, together with the heuristics, is all made to deal with uncertainty. We did not evolve to deal with risk. That's an unusual situation. And standard mainstream decision theory can be defined as something that eliminates everything psychological. There should be no trust, that's a weird thing. There should be no emotions, there should be no heuristics. There should be no storytelling, no causal stories, though they are important for understanding. And also, the assumption is that you're by yourself, you're alone. That's the Western, so-called WEIRD bias that we have, as opposed to accepting that there are other societies where people don't maximize their own profit if they can, but look to their family or bigger units.
So that's four classes of heuristics. Are there any other classes?
If I knew what all the heuristics are... that would be one of the goals.
I think about it rather like a periodic table of elements.
So maybe the better question is, what are the building blocks of the heuristics? Typically, it's a search rule, a stopping rule, and a decision rule. So you have a fast and frugal tree, which is lexicographic: the search rule gives you an idea in what order to search, then there's a stopping rule that tells you when to stop searching, and then there's a decision rule.
So those might be more like the elements, like the parts of the atom, like an electron. And so your question is well posed. We just don't know that yet. We have some ideas about classes, and I often struggle with questions like: on the extreme, what single heuristic would bring you through the day, if you just followed one? Could it be, imitate what your peers do? That brings you quite far.
But in any case, a systematic study of the adaptive toolbox is something that's desperately needed. And the main obstacle is that many of my fellow researchers still don't take heuristics seriously. They think it's something to be eliminated, that the heuristic is the problem. No: under uncertainty, it's a solution to the problem.
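To make those building blocks concrete, here is a minimal sketch of take-the-best, the lexicographic heuristic mentioned above, showing a search rule, a stopping rule, and a decision rule; the cues, their ordering, and the two cities are made-up illustrative assumptions.

```python
# A minimal sketch of the take-the-best heuristic: which of two objects scores
# higher on a criterion (say, city population)? Cues are binary and assumed to
# be ordered by validity; all values below are hypothetical.

cues_in_order = ["has_team_in_top_league", "is_state_capital", "has_university"]

city_a = {"has_team_in_top_league": 1, "is_state_capital": 0, "has_university": 1}
city_b = {"has_team_in_top_league": 1, "is_state_capital": 1, "has_university": 1}

def take_the_best(a, b, cues):
    for cue in cues:                  # search rule: try cues in order of validity
        if a[cue] != b[cue]:          # stopping rule: stop at the first cue that discriminates
            return "a" if a[cue] > b[cue] else "b"   # decision rule: pick the favored object
    return None                       # no cue discriminates: guess

print(take_the_best(city_a, city_b, cues_in_order))   # "b" (the state-capital cue decides)
```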
One of the criticisms of behavioral economics and the irrationality program is that it spawned this zoo of different biases but lacks an organizing theory behind them. I like your periodic table metaphor, but what's the organizing theory behind the adaptive toolbox? Is it evolution, in the simplest terms?
So the organizing factors are certainly human evolution and human experience, and that evolution also includes culture. And some of the heuristics create culture. So if we didn't imitate, culture would be limited. And in formal terms, I think about the building blocks, as I mentioned: search rules, there are different ones; stopping rules, there are different ones; and how to organize these. There are a number of principles that have to do with ecological rationality, like the key insight that under uncertainty, you need to avoid overfitting.
You need to scale things down.
And that's why stopping rules become very important.
There are mathematical principles like the bias-variance decomposition from machine learning.
I can't go into this, but it's basically the insight that if you make errors, it's not only due to bias. So ignore for a moment irreducible measurement error. There's also another factor that's called variance. And variance is the reason for overfitting. You have too complex a model. You incur estimation errors. You're not in a world of risk. In a world of risk, there is no error from variance. You can't overfit, because you know everything already. And that insight tells you that a mind has to make a different trade-off. It's not between effort and accuracy. It's between being biased and being overly flexible. That's variance, and overfitting.
So that insight tells you that to become rational in an uncertain world doesn't mean that you reduce your bias to zero.
No.
Then you will probably be worse off.
You need to find a trade-off
between bias and variance.
So in simple words,
a trade-off between thinking in a way
or having a model
that's not too far off the truth.
But at the same time, don't make it too complex, because that can reduce the bias but increases the variance.
Take the example of Markowitz's mean-variance model and the 1/N heuristic he actually used. The mean-variance model has a bias, that's the difference between its mean prediction and the real state of the world, but also variance. 1/N has a bias, probably a higher one, but no variance. It doesn't estimate anything. It's an extreme version of a heuristic. And in that way one can understand that there are situations where it's better to ignore the data entirely. 1/N is zero data, not big data: zero-data decision making. And also you can start to understand that that can actually be, under certain situations, the wise thing to do.
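A minimal sketch of the overfitting point: a model with many free parameters can fit the past better and still predict the future worse than a simpler, more "biased" one. The data-generating process below is an illustrative assumption, not anything from the conversation.

```python
# Compare a simple and a highly flexible model fitted on a small "past" sample
# and evaluated on new data: the flexible one typically overfits.
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    x = rng.uniform(0, 1, n)
    y = 2 * x + rng.normal(0, 0.5, n)   # simple linear truth plus noise
    return x, y

x_train, y_train = sample(12)            # small sample from the past
x_test, y_test = sample(1000)            # the "future" we want to predict

for degree in (1, 9):                    # low-bias-high-variance vs. simple model
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: out-of-sample MSE = {mse:.2f}")

# Typically the degree-9 polynomial hugs the training points (low bias, high
# variance) but predicts the new data worse than the straight line.
```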
How strong is the evidence that people actually use fast and frugal heuristics in the real world?
It's pretty strong.
Sorry, there was a phone.
Can I break your phone?
Yeah, yeah.
It's this one.
Felix, can we talk on the phone later? I'm still on a podcast.
Yeah, I'll call you.
Sorry.
No worries. Do you want me to re-ask the question?
Yeah.
How strong is the evidence that people actually use fast and frugal heuristics when making decisions in the real world?
Yeah.
We've run a number of experiments, and others have too.
And the evidence for a number of heuristics is quite strong. I also should say that in the 1970s, when Kahneman and Tversky rightly put heuristics back to the attention of psychologists, the common belief was that people use heuristics, but they shouldn't. What I think is the improvement over the Kahneman and Tversky research is that we formalized these heuristics, so they can actually be tested. They make predictions, while availability doesn't make any predictions but explains everything post hoc, and system one doesn't make any prediction but explains everything post hoc.
So when we showed that these simple heuristics, once they are formalized, and the other heuristics that we discovered and developed, can actually do well, first the doubt was whether they actually do well. And we published papers, we published the entire data sets, so that everyone could calculate and find that less is more. Period.
Then the argument came, so if heuristics can actually do better in certain situations than, say, regression models, then it can't be that people use heuristics.
Because the assumption was this belief in the irrationality of people.
And that's the type of argument.
I just invite everyone to look in the papers, look in the research,
and a heuristic like 1/N is always a model of what people do, and they may not follow it exactly. They may deviate from it, but it gives you a prediction, and you can test it. And if someone does Markowitz optimization, then the person doesn't use 1/N. That's very clear. But that's not a refutation of the heuristic. If nobody used it, yes. But people do systematically use them; for instance, what I find is that professionals in finance use 1/N, but they don't admit it. In short, I think it is very clear that people don't optimize in an uncertain world, because it's impossible by definition. So,
then the question is what they're going to do. And we have developed a number
of heuristics. Some of them predict quite well what people do. Others didn't predict
that well. And others are used in very specific situations. And you need just to work that
out.
That example you just mentioned, Gerd, of how finance professionals internally use the 1 over N heuristic but then talk about something different to their clients seems to be an illustration of the element of truth in Kahneman's infamous system one versus system two distinction?
That is to say, there's a difference between the process by which we make decisions on
the one hand, and then the ways in which we describe those decisions to people on the
other hand?
Or am I being too charitable to Kahneman?
Yeah. So my honest opinion is that the distinction between system one and system two, which is, by the way, not Kahneman's, he took it from other psychologists like Jonathan Evans, is, in my opinion, and I don't want to blame people, going backwards in theorizing. We have precise models for heuristics, which are now called System 1, like lexicographic heuristics, and they're all lumped together into one System 1. We have precise so-called rational models, and they're not one: there is Bayes, there is Neyman-Pearson, there is Fisher, there are other statistical theories, which are all lumped into System 2.
We are not gaining anything by reducing everything to two words which are based on dichotomies. By the way, not even the dichotomies match. So heuristics are lumped together with the unconscious and with making errors. Every heuristic I have studied can be used consciously or unconsciously. And a heuristic can be better or worse than other models. I don't think that leads us very far. Of course, the advantage is that you can explain everything with system one and system two. You could just as well use one system; in the good old times it was one system: why did you behave the way you did? Because God made you do so. So my own research program is about getting more precise, making precise models where you can show that in a certain situation, take-the-best explains what exactly 40% of the people do, and the rest do something different.
That's what you can say.
So the…
How do we…
Go ahead. Go ahead.
Go ahead.
So the reason why executives use a heuristic or go by their hunch and don't admit it is not to be understood internally, or not just internally. What I think is one of the key biases in mainstream behavioral economics, as opposed to what Herbert Simon envisioned and what many other behavioral economists do, is that the main bias is an internal one. It's the same bias that some psychologists have, explaining everything that's happening by inner desires or inner limitations without looking at the environment, the world. So a key reason why someone doesn't admit what he or she is doing is that the person fears consequences. Let me give an example.
I have studied a number of large corporations on the German DAX, that's like the Dow Jones, and worked with them and asked the executives: how often is an important professional decision that you make, or that you make within a group, at the end a gut decision? Emphasis on at the end.
Because, of course, they are going through the entire data, analyzing everything.
But most of the time, that doesn't give you a unique answer.
And if an executive then uses a gut feeling based on years of experience that tells him, don't do that, and follows it, that's a gut decision. It is not what is said in standard behavioral economics, where one experiment after the other is run to show that intuition goes wrong.
No.
Intuition is something extremely important, and it can be defined. I define it as implicit knowledge based on years of experience, where you quickly know what you should do, but you can't explain it. And it's the driver of innovation. For instance, Einstein said the intuitive mind is a sacred gift and the rational mind is a faithful servant; we have created a society that honors the servant and has forgotten the gift. That could be a description of mainstream behavioral economics. So the point here is, in my view, you need to look outside the mind. You need to look, for instance, at the error culture this executive is in. So the same executives, when I work with them,
they would say that about 50% of all decisions are, at the end, gut decisions, but they would never admit that in public. They fear, and rightly so, because they might be punished if something goes wrong. What happens then? They find reasons after the fact. So they may ask a manager to find the reasons and then present the decision as a fact-based decision. That's a waste of time, money and intelligence. Or, a more costly version, they hire a consulting firm, which on 200 pages and a PowerPoint justifies the gut decision already made, without that ever being mentioned.
I've worked with consulting firms and asked their principals in private conversation: are you willing to tell me how many of your projects are there to justify an already made decision? And the answer was: more than 50%, but don't mention my name. So there are entire social games being played, because of the anxiety about admitting intuitive decisions, which are typically based on heuristics, and then pretending to have a rational solution in a situation of uncertainty where you can't have one. And it's a big market for consulting firms, which could do something better with the great people they have than justifying decisions already made, and so on. So one needs to analyze the system that's there. And I don't think that concepts like system one or system two get you anywhere.
How does economics define rationality?
Okay. Interestingly, economic theory has several definitions of rationality. The clearest one is consistency. That may be the Savage axioms, which are the basis for utility maximization models. So there are a number of axioms, like transitivity and independence, that are necessary and sufficient for the representation of a decision in the form of a utility function. So that's the standard idea. It is very clear that this definition is not the same as what we think in everyday life about rationality.
Because, simple example, if you are totally consistent in your life, you can be totally wrong.
So, if you believe that the probability that Elvis is still alive is 99%,
and the probability that he's not alive is 1%, you're totally
consistent, but wrong.
So, that's the problem.
It's a definition of consistency that is abstract, where you don't need any knowledge, that abstracts from any psychology; we have the same story again. And violations of consistency have been used in mainstream behavioral economics to claim that people are irrational. And the problem is that, again, consistency requires a world of risk, and people's psychology is tuned to other things. They take information into account.
They know something and they go with that.
And Hal Arkes, Ralph Hertwig and I have
published a paper in the journal Decision
which has looked at the entire literature to see whether
violations of consistency,
often called coherence violations,
whether there's any evidence that costs are incurred
so that people who violate consistency more often are less healthy,
less wealthy, or less happy, or anything like that you can measure.
We have found close to zero
evidence for that.
So that's also something that might make you think.
So that's one definition.
My personal opinion is that consistency does not give us a criterion for rationality as we understand it, but it may be a rational
criterion in certain situations. You want to be consistent with respect to a friend,
but not necessarily with respect to your enemies. It is ecologically rational. You need to define the situations where that's a good idea and where it's not a good idea. The second major interpretation of rationality is one that looks at achieving a certain goal. That's less clearly defined: it's your ability to use what you have in your mind and the resources available to achieve a certain goal. So the concept of ecological rationality is much more similar to the second one,
but it's more specific.
It looks at both sides.
So how do the cognitive capabilities we have, including the heuristics,
how do they match with the environment?
So what are the situations where a lexicographic heuristic, or just looking at one reason, will succeed and where not? The recency heuristic that can outperform Google Flu Trends will not work well if the flu were well behaved and people were well behaved, where big data actually can predict the future. Ecological rationality is much more a functional type of rationality. It's always a rationality for someone, because you need to define a criterion. You want to earn as much as possible, or you want to predict correctly. And then you need to define the strategy. So you need to ask: is Bayes' rule here a good strategy, or is it a simple heuristic, or something else? And then analyze; that's part of this mathematical study of the match between the two.
Herb Simon famously characterized bounded rationality, which I think for present purposes people can just think of as being the same as ecological rationality, as the blades of a pair of scissors: one of the blades is the mind, the other blade is the environment.
If we run with that analogy for a moment, even though my mom always told me it was dangerous
to run with scissors, does the potential for the environment to change over time admit
of the possibility of irrationality?
So the scissors analogy is rightly the basis of ecological rationality. It analyzes the match between, say, heuristic and environment, but it also has to analyze the change, the dynamics, of the environment. And there are good examples where heuristics have evolved in animals, and then environments changed, and the heuristics couldn't change at the same pace. And that's certainly something one needs to analyze very carefully.
So if you have a business…
For example, how moths can spiral into a light or a candle flame. Would that be an example?
Yeah. And since there were no candle flames around when that behavior evolved, probably.
Or a very simple example.
If you run on imitation.
So you live in a world where you inherit businesses from your fathers.
And it goes on and your father has accumulated experience from a grandfather and so on so if this world
is stable and nothing changing your world must to imitate what your father
did and it just a little bit if the environment is changing,
you're not well advised to go by imitation.
So these are,
rightly, one needs to analyze ecological rationality
not only relative to the environment as it is at the moment,
but also if it's changing.
And there is uncertainty.
One has to realize that by taking uncertainty seriously,
also the study of ecological rationality will not give you a recipe that is certain.
Because there is uncertainty in it all.
It will always be statements like: if this and that is the case, we're well advised to ignore all information except one piece; if that's not the case, then not. And there are situations where we don't know.
But it also means to accept the uncertainty, the fundamental uncertainty in our world,
and deal with it, as opposed to trying to make optimization models
suggesting certainty.
But you don't say irrationality does not exist.
I would want to define irrationality.
Yeah, good point.
I guess I just feel like you have been painted into this corner where you think that heuristics make us 100% rational and Daniel Kahneman's been painted into this corner where he thinks that heuristics make us 100% irrational.
But I don't think that your positions are as extreme as some people perceive them to be. I have four quotes
from Daniel Kahneman that I wanted to put to you and see what you think. The first two are from
Thinking Fast and Slow, his best-selling book. The second two are from a journal article.
So this is the first quote from Thinking Fast and Slow.
The definition of rationality as coherence is impossibly restrictive. It demands adherence to rules of logic that a finite mind is not able to implement. Reasonable people cannot be rational by that definition, but they should not be branded as irrational for that reason. Irrational is a strong word, which connotes impulsivity, emotionality, and a stubborn resistance to reasonable argument.
I often cringe when my work with Amos is credited with demonstrating that human choices are irrational
when in fact our research only showed that humans are not well described by the rational agent model.
And the second quote from Thinking Fast and Slow is,
the focus on error does not denigrate human intelligence any more than the attention to
diseases in medical texts denies good health. Most of us are healthy most of the time,
and most of our judgments and actions are appropriate most of the time. And then going back even earlier, this was the year 2000,
a commentary written by Kahneman in the journal Behavioral and Brain Sciences.
The first quote is,
Contrary to a common perception, researchers working in the heuristics and biases mode
are less interested in demonstrating human irrationality than in
understanding the psychology of intuitive judgment and choice. And the second quote from that article is, all heuristics make us smart more often than not. So what's the big disagreement?
Yeah, it's a good question. So let's start with the first quote. Let me start by saying the following. Danny Kahneman had a reply after my talk, and I had a reply to
his reply.
We had many private conversations and I've always tried to keep an intellectual disagreement
apart from a personal disagreement. So, the first quote was about coherence, and that quote is perfectly correct from my point of view. It's just that one should also add that many of Kahneman and Tversky's famous demonstrations are about coherence. The Bayesian problems are about coherence. The content is irrelevant. The Linda problem is about coherence, nothing else. And second, what he writes in 2011 is also rethinking.
It's an afterthought about earlier work and also a consequence of the critique he got. So, for instance, the Linda problem is about coherence. In the original paper, it's called the conjunction fallacy, and others have linked it to every disaster in the world. Not Danny and Amos, but it's called the conjunction fallacy. When we showed, Ralph Hertwig and I, that it disappears the moment you're clear about what probability means, frequency, Danny made an effort and changed the term into conjunction error, as opposed to fallacy. But still, I think he now has a good grasp that coherence is not the only thing.
But note that almost all of the biases listed are biases of coherence.
And they are typically interpreted as irrational.
Kahneman and Tversky, to the best of my knowledge, have never used the term irrational.
But their close followers, like Thaler and Sunstein, talk about lack of rationality.
And Kahneman and Tversky have used other terms that clearly signal there's something wrong about that.
The key difference that you ask here is if there is a discrepancy between a coherence axiom and people's judgment.
In the heuristics and biases program of Kahneman-Tversky, that was labeled as an error.
In my view, you can't put the blame immediately on people.
You need to look at your criterion, your theory about rationality.
That's the big difference.
That's what ecological rationality is about.
So just to summarize that: there's a discrepancy between the coherence axiom and what people do. Kahneman and Tversky say, well, that's a failure of human reasoning or human rationality. You say, no, that's a failure of the model; you need to check whether that's a failure of the model.
Yeah, you need to check whether that's a failure of the model.
In other words, is the disagreement then a normative disagreement?
Yeah.
Descriptively, there's a lot of agreement,
but normatively, you're saying, hey, look, they're still judging people against the same benchmark of Bayesian inference, but departures from Bayesian inference are not irrational in a large world.
You can understand the difference between Kahneman's approach and Simon's approach, and Simon's approach is also my approach, this way.
Both of them criticized neoclassical economics, and you can understand people better if you know whom they're criticizing. Simon criticized it on a normative and a descriptive point. He argued that neoclassical economics doesn't care very much about how people make decisions; they build as-if models. And second, that these models are not correct in situations of uncertainty. Kahneman and Tversky adopted the first part, saying people do something different from the model, but they hold that the models are correct and blame the people. That's the difference.
And by that, one adopts something that is not Simon's idea, because Simon took psychology seriously, while in problems like the Linda problem or the Bayesian problems there is no psychology. You don't need to know anything about taxi drivers or about people; it's just about consistency, about doing a calculation. And that's the big difference.
Do you think Danny still believes in that model?
I haven't talked to him recently, but I have not seen anything he has said that suggests he would actually doubt the neoclassical economic models.
So I think we need a revolution in behavioral economics and also in neoclassical economic theory that allows theorists to deal with uncertainty and to test the tools that humans use there, rather than denigrating them, whether that's now called irrationality or just an error or something like that; it doesn't matter.
And the less-is-more effects are one of the key counterexamples against this philosophy that more is always better and that optimization is always better. We need a realistic understanding of heuristics, taking heuristics seriously, testing them, and taking uncertainty seriously; not simply siding with a theory that's mostly one of consistency, where, if people deviate, the blame is on the people. That is the key difference.
So the difference between my and Herbert Simon's point of view and Kahneman and Tversky's is not that Kahneman and Tversky see the glass of rationality half empty and we see it half full. No, it's about the glass itself. Both of us do not accept that these consistency axioms can explain rationality in every situation.
In their classic 1974 article, Judgment Under Uncertainty: Heuristics and Biases, they outlined three heuristics: representativeness, availability, and anchoring. Which of those three do you think is the most descriptively accurate?
None of them, sorry.
It was a leading question, wasn't it?
Yeah. Kahneman and Tversky made a big contribution by putting heuristics back on the agenda. But the heuristics favored by Kahneman and Tversky together were different from the models of heuristics Tversky had. Tversky had very precise models of heuristics; lexicographic ones and elimination by aspects are examples. The moment he joined Danny, these models were gone. And availability is just a word; it has never been defined what it's about.
Availability has at least half a dozen different meanings, and one gets picked. It is ideal for explaining after the fact what happened, but it is very hard to make any precise prediction out of it. The same holds for representativeness. The same holds for anchoring: it's never defined what the anchor is. If you anchor on the base rate, it's anchoring; if you anchor on the new description, it's also anchoring, and that gives the opposite result.
So what I think is our contribution is to take off where Kahneman and Tversky started descriptively and to make the Kahneman and Tversky research more of a Tversky research: to make precise models of heuristics that are testable, where you can show how many people precisely follow them, and what the others do. And there's a good reason why people are not homogeneous: in many situations there is a so-called flat maximum, where you can use different strategies and they all lead to about the same success. That's one of the reasons.
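As a rough illustration of what a precise, testable model of a heuristic can look like, here is a toy sketch of take-the-best, one of the lexicographic heuristics from the fast-and-frugal tradition. The cue names, cue order, and city profiles are invented for the example; the point is only that the rule is fully specified, so its predictions can be checked choice by choice.

```python
# Toy sketch of a lexicographic heuristic (take-the-best): cues are checked in order of
# assumed validity, and the first cue that discriminates decides. All cues and values are invented.
from typing import Dict, Optional

CUE_ORDER = ["is_state_capital", "has_intercity_train", "has_university"]  # assumed validity order

def take_the_best(a: Dict[str, int], b: Dict[str, int]) -> Optional[str]:
    """Predict which option ('a' or 'b') scores higher on the criterion; None means guess."""
    for cue in CUE_ORDER:
        if a[cue] != b[cue]:              # first discriminating cue decides, remaining cues ignored
            return "a" if a[cue] > b[cue] else "b"
    return None                           # no cue discriminates

# Example: which of two fictional cities is larger?
city_a = {"is_state_capital": 0, "has_intercity_train": 1, "has_university": 1}
city_b = {"is_state_capital": 1, "has_intercity_train": 1, "has_university": 0}
print(take_the_best(city_a, city_b))      # -> 'b', decided by the first cue alone
```

Because the rule is this explicit, one can count exactly how many people's choices match it, which is the kind of testability being described above.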
Gerd, as you know, I'm interested in speculative bubbles.
And I tend to think of the reasoning that gives rise to bubbles in asset prices as falling into two different categories.
One is like, and I apologize, this is all very amateur, but bear with me.
One I would describe as horizontal heuristics.
So we could probably roughly compare these to the class of social heuristics that you
mentioned. That is, people look around at what their friends and other people in their
community and their networks are doing. If all of those people are, for example, buying houses or
buying stocks in a certain company, then maybe they'll use that as a mental shortcut. The second category are vertical
heuristics, where people extrapolate recent price gains forward into the future. One example of this might be the work by Andrei Shleifer and Nicola Gennaioli, where they adopted Kahneman and Tversky's representativeness heuristic to create a model of beliefs and of how beliefs can generate bubbles. Which of those two categories, vertical heuristics or horizontal heuristics, do you think is more important in generating speculative bubbles?
Now, if I could explain the inner workings of bubbles, that would be great.
I can give you some ideas, but let me think about it first. It is certain that imitation will play a role, but in a minute I will explain why I think one should not look at this, again, as just an inner process. That's the fundamental attribution error, well known in social psychology, which many of us commit again and again and again: we always look for an inner reason; people must be wrong. Maybe it's a bias due to the individualism of Western society, whatever, or just because of decision theory, where there are no people in it, except in game theory, and then it's again a world of risk. So, one of the reasons why bubbles occur, besides the fact that you just buy what everyone else buys, is defensive decision making, to bring in another thought. And together with that, there is the reliance on standard mathematical models; that's your vertical heuristic, which predicts that the future is like the past.
One needs to understand that if you have a prediction model like Bayesian updating or any linear regression model, you feed it with data from the past, you get a result, and you can hope that the future is like the past.
If you look at the world of finance, that is usually the case for some years, but then something happens. So the models are good for several years, and then they flop. And that could be a bubble. That is, I would say, one of the potential causes: a reliance on mathematical models that are fine-tuned for situations of risk, where the future is like the past, but that are misused in situations of uncertainty. That's the turkey problem, and the turkey illusion.
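A minimal sketch of the turkey problem just described, with an invented price series and a deliberately naive forecaster; it is not a model of any real market. The forecaster extrapolates the recent past and looks accurate for years, until the one step where accuracy matters.

```python
# Toy "turkey problem": a naive forecaster extrapolates the recent trend of an invented series.
prices = [100 + 2 * t for t in range(41)]   # forty periods of smooth growth
prices.append(prices[-1] * 0.6)             # period 41: the regime changes, a 40% drop

def naive_forecast(history, window=10):
    """Project the average change over the last `window` steps one step ahead."""
    recent = history[-(window + 1):]
    changes = [recent[i] - recent[i - 1] for i in range(1, len(recent))]
    return history[-1] + sum(changes) / len(changes)

# While the future is like the past, the forecast is spot on...
print("forecast for t=40:", naive_forecast(prices[:40]), "actual:", prices[40])
# ...and it fails exactly when it matters.
print("forecast for t=41:", naive_forecast(prices[:41]), "actual:", prices[41])
```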
But there's also something else, I think, going on. So imagine you are a manager at a big international financial firm.
And you know, or at least you have an intuition, that you are sitting on toxic papers.
And you should sell them now, before something happens. Assume it's 2004 and you realize that. But you also know that if you sell them now, and your competitors don't sell theirs, and the crisis is not next year, your company will lose money and you'll be blamed for it. And if the same thing happens in 2006, the company loses money again, you will be blamed again, and you may be fired; at the latest in 2007, you are fired shortly before the crisis happens.
So defensive decision-making means that you do not follow what's best for your company,
but you want to protect yourself in the first place. And that can lead to outrageous risks for your company,
but not for yourself.
You just were like everyone else,
and you can say nobody could see it coming.
And there have been a few real examples of this: whistleblowers who warned, years before the financial crisis, that it would explode, and who were fired.
Defensive decision-making is, again, not something inside our minds. It is a kind of attitude, a kind of behavior that's often entirely conscious, knowing that the environment is set up in a way that you will incur costs if you do what is best for your own company.
It's very similar to the situation in medicine. Study after study shows that more than 90% of American doctors say they practice defensive medicine. They don't advise the patient of the best thing to do. Typically,
they advise too much imaging, unnecessary treatments, drugs, and all kinds of things,
because they know they won't be sued if they do too much and hurt patients,
but they will be sued if they don't do anything.
So these are the factors I would put at the center, rather than inner workings: defensive decision-making and a surrounding culture that is a negative error culture.
Let's take the example of the housing bubble that happened in the United States in the early to mid-2000s. Defensive decision-making would apply at the level of the
derivatives, like the residential mortgage-backed securities and everything else spawning from those
in that industry. But in terms of the underlying assets, the homes themselves,
I'm not sure defensive decision-making applies.
So what would explain the decision by ordinary Americans
to start speculating on real estate?
Oh, so first, I'm not an expert in that area, but there were a number of reasons, including Clinton's decision, or hope, to make it possible that every American owns a home, and the fact that banks were no longer lending money and keeping the contract but selling it immediately, packaging it and selling it on. So a lot of changes in the world enabled banks to make big, quick profits. If a NINJA borrower (no income, no job, no assets) bought something, it was no longer the bank's problem.
Those things are all true, but the key ingredient was very optimistic beliefs about house prices. And one counterargument is: well, you can only say that in retrospect, and it's impossible to identify bubbles ex ante. But the interesting thing about the U.S. housing market during that period was that there wasn't much uncertainty around the fundamentals. You could make an argument that radical uncertainty applies to the Chinese housing market in recent years and that speculation is totally rational in that context. But there was nothing really uncertain about the fundamentals in the U.S. housing market in the early to mid-2000s. So I just find it really puzzling that ordinary people, A, became speculators, and B, in some of the survey data, for example the surveys Chip Case and Bob Shiller would run, had beliefs about house price rises which, if you extrapolated them out, would lead to absurd situations.
But I mean, I suppose we can explain that with models of bounded rationality, right?
No, I would first look at what banks told those people. We know that most of the American public, and this also holds for other countries, has no education in financial literacy. And the banks at that time had no incentive to make this very clear, because they were profiting from the premiums they were getting. So, as you can see, I hesitate to immediately blame the American people in this case. Of course they have no education; why don't we teach financial literacy in school, and also teach the system in school? And it's not just ordinary people who were so optimistic; it was the experts themselves, at least many of them. Here is a quote from Henry Paulson, the U.S. Secretary of the Treasury, in March 2008. He said: our financial institutions, banks, and investment banks are strong.
Our capital markets are resilient.
They're efficient.
They are flexible.
Or take David Viniar, Chief Financial Officer of Goldman Sachs, who reported that they were hit by 25-sigma events several days in a row, unexpectedly. So the problem was not that something really unexpected happened, because a 25-sigma event is something that shouldn't happen even once since the Big Bang. So it's about the risk models.
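A rough back-of-the-envelope check on that remark, assuming a standard normal model and roughly 252 trading days per year; the numbers are illustrative, not from the conversation.

```python
# Back-of-the-envelope: how rare is a 25-sigma daily move if returns were normally distributed?
import math

def upper_tail_prob(sigma: float) -> float:
    """P(Z > sigma) for a standard normal variable."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

p = upper_tail_prob(25.0)                    # roughly 3e-138
trading_days_per_year = 252                  # assumed
age_of_universe_years = 1.4e10               # ~13.8 billion years, rounded

expected_days = p * trading_days_per_year * age_of_universe_years
print(f"P(25-sigma move on a given day): {p:.2e}")
print(f"Expected 25-sigma days since the Big Bang: {expected_days:.2e}")
# The expected count is around 1e-125, i.e. effectively zero; several such days in a row
# says far more about the model's assumptions than about bad luck.
```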
Let alone several days in a row; the models were totally wrong and provided an illusion of certainty. So it's not just the people who are largely ignorant about finance; it's the entire system, which also has incentives to blow things up and then start again.
Gerd, final question.
You're, in my view, the intellectual successor to Herb Simon, and you're a great defender of heuristics and human rationality. But do you have a favorite example of irrationality?
Oh, I mean, the sources of what's
commonly called irrationality have little to do with consistency. On the contrary, people who we would say are irrational or dangerous are consistent in their own beliefs, in their own, often strange, beliefs.
So one needs to look for something else. One of the key sources is a lack of education in risk literacy. We invest in the mathematics of certainty; we teach that in school: algebra, geometry, and beautiful things. Very few are taught statistical thinking. And if they are taught, they're taught probability theory, which is boring; they need to be taught statistical thinking, and in a way that they can understand. That's one factor, I think.
The other key factor in things going wrong is that we are social beings. The strength of humans is that we act in groups; that has been our strength. Humans are not the fastest runners compared to other animals, and they're not the strongest; they wouldn't win a weightlifting contest against other animals. So it's our groups. And that demands a certain price: group cohesion means that people defend beliefs, as we see in corona times, which have no factual basis, but they cannot afford not to defend them, because they would lose their friends.
And one means against that is education. Make people strong; that is, get them to start thinking. It will not exclude all of that, but these are some of the factors that are dangerous. And all of these factors, not the lack of education but the social side, also have their benefits, and we need to deal with that. I think we are on the wrong road if we just look inside the individual mind and think that here lies the origin of all things going wrong.
And heuristics are, as you said, a tool to deal with uncertainty. But I'm not of the opinion that heuristics are always good. No, this is why we study ecological rationality, which answers the question: when does a heuristic work and when does it not? I have to repeat this again and again and again, because many in the audience are indoctrinated into thinking that something must be either optimal or not.
And that's Bayesian theory. There are so many Bayesian theories; it's a framework with so many free possibilities. A specific Bayesian theory is not optimal; it's only optimal relative to the assumptions being made. And one has to realize the same about heuristics. I avoid the term optimal, because it implies that one could actually prove what's best, which you can do in a world of risk, but not in the ill-defined situations we face most of the time. What's best needs to undergo an analysis of ecological rationality.
So the difference between what's now mainstream behavioral economics and my and Simon's position is not that mainstream behavioral economics thinks heuristics are mostly good and sometimes bad; that's Kahneman's version. The difference is to analyze when they are good and when they are bad, and also to add the other sentence: that so-called optimization models are sometimes good and sometimes bad. That sentence is rarely ever mentioned, and that's the illusion behind it.
It's the same thing there. The question is: when is a certain Bayesian model promising? And it's the same thing: if you use Bayes' rule, then the future needs to be like the past. If it isn't, forget it; then you do, like the turkey, Bayesian updating, and you end up in the turkey illusion. And that's very similar to what happened in the last financial crisis.
So that requires a kind of rethinking of the questions being asked. What is rationality? If you say it is a method to reach a certain goal, then study its ecological rationality and don't start with prejudices about heuristics. Take them seriously, take uncertainty seriously, and forget the illusion of certainty.
Gerd Gigerenzer, you are a gentleman and a scholar. Thank you so much for your time.
It was a pleasure to talk with you.
Thank you so much for listening.
I hope you enjoyed that conversation as much as I did.
Show notes, including links to everything we discussed, can be found on my modestly titled website, josephnoelwalker.com.
That's my full name, J-O-S-E-P-H-N-O-E-L-W-A-L-K-E-R.com.
Please do subscribe to or follow the podcast, depending on which app you use, to ensure that you never miss updates when we release new episodes. The audio engineer for the Jolly Swagman Podcast is Lawrence Moorefield. Our very thirsty video editor is Alf Eddie. I'm Joe Walker. Until next time, thank you for listening. Ciao.