The Joe Walker Podcast - Rational Minds Part 4: The Blind Leading The Blind - David Hirshleifer
Episode Date: December 20, 2020. David Hirshleifer is a professor of finance and currently holds the Merage Chair in Business Growth at the University of California, Irvine.
Transcript
Men, wrote the Scottish journalist Charles Mackay, think in herds.
It will be seen that they go mad in herds, while they only recover their senses slowly and one by one. My name is Joe Walker and three years ago, trying to understand
Australia's obsession with residential real estate, I began researching housing bubbles.
I read almost everything I could find. Increasingly, I came to view the question of bubbles
as not just a topic of interest to the fortunes of my country, but a vehicle to explore deep questions of human nature.
There's a long literature that gleefully lays bare the madness of crowds, from Mackay to Galbraith, Minsky and Kindleberger to Shiller and Chancellor.
But it left me with a nagging question.
No one in a bubble ever thought she was crazy.
So what is going on here?
In this series, I'm using the prism of financial bubbles to tackle an eternal question. What does
it mean to be a rational person? I'll be guided by five world experts who will show me that we're
not quite so befuddled as popular narratives would have us believe.
I'm inviting you to come with me on this journey
to reconsider what you might have been told
and to give rational minds a second chance.
I used to be kind of sane
I used to act kind of normal
I couldn't complain
Now it's never the same
I just listened to a new shortcast on Blinkist by Malcolm Gladwell.
It's called The Obscure Virus Club, and the short version of it is that throughout the 1970s,
a biologist named Howard Temin became convinced that something wasn't right in science's understanding of viruses.
His colleagues dismissed him outright as a heretic, but he turned out to be right,
and we all have a lot to thank him for. I love this story because I love stories of people who
persevere through ridicule and doubt to achieve great things for their communities or for the
species. I consider these people to be heroes. Revisionist History has lots of great
shortcasts, which you can find on Blinkist, and all of them are crafted by Malcolm Gladwell
himself. Each of his revisionist history shortcasts re-examines something from the past,
an event, a person, an idea, and asks whether we got it right the first time. I recommend checking them out.
They are really enjoyable.
And if you do, make sure you sign up via blinkist.com slash swagman.
You're listening to the Jolly Swagman Podcast.
Here's your host, Joe Walker.
Ladies and gentlemen, boys and girls, swagmen and swagettes,
welcome to the penultimate episode in this series on Rational Minds.
I have some questions.
Why do authors buy copies of their own books to hack bestseller lists?
Why do many of us judge the quality of a restaurant by the proportion of its tables that are occupied?
Why do waves of optimism and pessimism,
often triggered by the courage or cowardice of leaders,
prove decisive in warfare?
Why did ancient Roman families hire professional mourners at funerals?
Why are drugs criminally sanctioned in Indonesia
but sold at cafes in the Netherlands? Why does management advice recommend that the boss gives
their opinion last in a meeting? Why do Australians act Australian, the French act French,
and Americans act weird? Just kidding, American listeners. What do all of these things have in
common? Well, they are all examples of localized conformity. Conformity has long been understood
as one of the most powerful forces animating social life. Aristotle wrote that man differs
from other animals, particularly in this, that he is imitative and acquires his
rudiments of knowledge in this way. In Shakespeare's time, the word ape already meant both primate
and imitate. One way of modeling conformity is called an informational cascade, which occurs
when individuals with private information make decisions sequentially.
Now, not all instances of copying behaviour are rational,
but an informational cascade is one way of modelling bubbles.
Sometimes it is rational to copy the behaviour of those around you
and thus not pay the cost of gathering your own information.
But if only a fraction of the population end up
doing their homework, the entire population can become vulnerable to bubbles. To quote from
Matthew, if the blind lead the blind, both shall fall into the ditch. My guest, David Hirshleifer,
created the informational cascades model in a seminal paper co-authored with Ivo Welch and Sushil Bikhchandani in the early 1990s.
David is a professor of finance and currently holds the Merage Chair in Business Growth at the University of California, Irvine.
Without much further ado, here is David Hirshleifer.
David Hirshleifer, welcome to the show.
Thank you for having me.
It's so great to finally meet you.
It has been a long time coming, and all roads have led to David Hirshleifer in my quest to understand bubbles and rationality.
So this is a real pleasure for me.
Dave, before we get into those topics, I thought I might learn a bit more about you and your
background. And I don't think many people know this, but the Hirshleifers are actually a high
fashion family. Tell me about that.
Well, thanks, Joe. So the East Coast Hirshleifers are the ones with the fashion sense,
and I'm in the West Coast Hirshleifers, so the fashion-impaired branch of the family.
But one of my grandmothers and her side of the family developed the clothing, the high-end clothing store.
It started as a fur store, actually, but diversified into clothing of all varieties.
And actually, in my research, eventually I became interested in fads and fashion.
So there may be some hidden genetic component to what we do.
Were you involved in the business in any way when you were growing up?
I was not.
My grandmother had two sons, my Uncle Paul and my father Jack. Jack went off into academia and was not
really involved with the business, and Paul was an excellent businessman. So I was exposed to the
academic side of studying business rather than the practical side.
And it was not until many years later when I started to consult that I started to get some flavor of the practical side of business.
What first interested you in economics?
Well, for me, it was in the blood to a large extent since my dad was an economist and he was always super enthusiastic about his research and about ideas in general.
When I was growing up, we used to take walks in the evening through the neighborhood and just talk about any ideas, but those were often economics ideas that he was thinking about. So his excitement was really contagious. And
really, from quite a young age, I knew that I wanted to become an economist.
So it's a kind of a weird deviant childhood, I guess. But it was hardly even a choice for me
in deciding whether to go into academia. How did you find yourself becoming interested in behavioral finance?
So this actually ties back to my dad also a little bit.
So my dad had an interesting combination of ideas
because he was very interested in what came to be known as evolutionary psychology.
So the idea that the human mind was designed by thousands of generations of natural selection in the human ancestral environment.
And so that naturally led him to ideas about imperfect rationality.
So, for example, that there could be a value to an irrational feeling of rage when someone steps on your toes.
Well, that might seem just crazy, but if everyone knows that you're prone to rage, then they don't step on your toes very often.
And so that can be beneficial.
So on the one hand, he developed these kind of ideas of imperfect rationality but on the other
hand when it came to economics he was a big fan of the fully rational approach and so he favored
models in which prices get set and everything works nicely as if everyone were perfectly rational.
So I never felt like I had a completely good read from him on that,
about how he reconciled those two things.
But from my own point of view, when I went to graduate school and was studying finance at the University of Chicago,
under Gene Fama and other distinguished proponents of the efficient markets hypothesis,
it seemed kind of strange to me because I knew how irrational I often am in my own personal life.
And it seemed like a lot of people make severe mistakes.
And so how can it be that markets work so perfectly?
Now, there are standard arguments, of course, that maybe it only takes a few sophisticated traders to keep prices efficient. But when someone tells me markets work perfectly, I tend to react, oh, no, it's not, right?
I just start trying to poke holes in it.
And so my reaction as a graduate student was, well, no, it's not.
I mean, of course, there's a lot of efficiency.
There's a high degree of efficiency.
But people seem to make systematic errors in
groups. And there are some rather obvious illustrations of this. So in the left-right
political divide, you have people who are passionately on both sides, and each one thinks
that the other side, people on the other side are idiots, that they're analyzing the information
imperfectly. The two sides can't both be right, right? So either one side has a really systematic
error or the other side or both. And so those are important decisions about how society should be
run. And there are also disagreements over factual issues
as well. So it seemed to me rather contrived or unlikely to think that people have these profound
disagreements over politics, over religion, over many issues. And yet, when it came to valuing firms, suddenly they were free of bias and consistently able to process information almost perfectly.
If people don't act according to the axioms of rationality, does that mean that people are
irrational? Or is it an indictment on the model as being unrealistic?
So when we evaluate is somebody behaving irrationally, we always have a model in our head, right?
So we can't really say that something is a mistake unless we have a model of what is the situation that the decision maker is facing.
And then relative to that model, we can identify the mistake.
So I think you have a good angle there that if we see what looks like a mistake, we could conclude that the person is irrational or maybe we just have the wrong model.
So if we see an adult losing to a child playing chess, we might say, well, that was a wrong move.
How stupid. But maybe it's just that the adult's goal was different. Maybe he was trying to encourage the child, and then maybe this was entirely rational.
So common sense always has to come into play. If we see someone trading Bitcoin,
maybe taking wild risks and taking actions that look perhaps not appropriate relative to a reasonable degree
of risk aversion, then one might conclude that the person is irrational,
or one might conclude that there's entertainment value to taking bets on Bitcoin, telling friends about the wonders of cryptocurrencies, and so on.
So ultimately, one needs to appeal to common sense or perhaps to additional implications
of the model. So if one thinks that it's entertainment and it's about talking to friends, then one might ask, well, what about people who are not talking to others? And sometimes people simply throw away money. So they let an option that's in the money expire
at the expiration date. That's probably not that entertaining, just throwing away money,
at least to me. And it's probably not something I'm going to go bragging about too much. So that
would be a case where it really looks like a mistake, probably. And of course,
one can always construct a contorted model to say, no, this is part of my game of three-dimensional
chess. But sometimes that's plausible and other times it's not.
In thinking about bubbles in asset markets, I've come to the conclusion that the most
important fact about bubbles is that market participants' decisions are not independent.
And your work has been absolutely central to that understanding.
So let's talk about how.
In a classic 1992 article titled A Theory of Fads, Fashion, Custom and Cultural Change
as Informational Cascades, you and your co-authors Sushil Bikhchandani and Ivo Welch defined what you call informational cascades. And you found that these cascades
can lead even perfectly rational people systematically into error. What is an informational
cascade?
So an informational cascade is a situation where someone needs to make a decision, and that person's decision is independent of the information that the person has received privately.
So someone observes the actions of those ahead of her, and because of the information contained in what she observes, she ignores her own private signal.
So for example, in the simplest case, we can think of a set of people lined up in a row,
each making a choice in sequence. So let's say to buy a car or not buy a car. Each one is trying to figure out whether that's
a good idea, and each one receives some private signal that says buying a car is a good idea,
for example. Now the fourth
person may say, well, my signal says don't buy a car, but those three other people all did buy a
car. So maybe that is a good idea after all. And so if that social information that comes from observing others overwhelms
the fourth person's private signal, then she is in an information cascade.
Now, that in turn has an important consequence, because now her decision doesn't depend on her
private signal. That means her decision is completely uninformative to later decision makers.
So if we now move on to the fifth person, the fifth person is looking and saying, well, now should I buy a car?
Well, the fifth person is now basically seeing exactly the same thing that the fourth person did: that the first three people adopted.
So the fifth person, from an informational point of view, is in exactly the same position. So he's also going to buy the car even if his information is against it. In fact, you
could have a thousand people who all have information against the car, but just because the first few
people happen to have bought the car, that creates a herd or a cascade,
and you get very poor decision-making. So the decisions and the information of just a few early decision-makers tends to be dominant, and you get errors occurring all too often because of that.
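The sequential mechanism described here can be sketched in a short simulation. To be clear, this is a toy version, not the model from the paper itself: the 0.6 signal accuracy is an arbitrary illustrative choice, and ties are assumed to be broken by following one's own signal, so that each pre-cascade action fully reveals the actor's signal.

```python
import random

def run_cascade(n_people=200, p=0.6, good=True, rng=random):
    """One sequence of binary adopt/reject choices in the style described.

    Each person privately sees a signal that matches the true state with
    probability p, plus the actions of everyone before them.  Returns the
    final cascade state: 'up' (everyone adopting), 'down', or None.
    """
    diff = 0  # net high-minus-low signals revealed by earlier actions
    for _ in range(n_people):
        s = 1 if rng.random() < (p if good else 1 - p) else -1
        if abs(diff) >= 2:
            # In a cascade: the action ignores s and reveals nothing new.
            continue
        # Outside a cascade, the action reveals the signal (ties broken
        # by following one's own signal -- a simplifying assumption).
        diff += s
    if diff >= 2:
        return "up"
    if diff <= -2:
        return "down"
    return None

trials, rng = 10_000, random.Random(0)
wrong = sum(run_cascade(good=True, rng=rng) == "down" for _ in range(trials))
print(f"share of runs ending in a mistaken cascade: {wrong / trials:.1%}")
```

Even with signals that are right 60% of the time, roughly three runs in ten lock into the wrong cascade, because only the first few revealed signals ever matter.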
So what is the difference between herding and informational cascades? Should we think of informational cascades as a special case of herding?
I would view herding as a situation where people or animals are behaving
in a similar way, and it's not just happening by chance. So of course, if you just randomly scatter
beads on the ground, some will tend to be clumped by chance, and I wouldn't want to call that
herding. But if there's any attraction where they're moving towards each other intentionally
or causally, then I would call that herding. And then information cascades are a special case of
that where the reason for herding is informational. So you could have this clumping because maybe there are predators out there.
And so there are wildebeests and there are lions.
And the wildebeests are all trying to get away from the periphery and into the center.
And so they end up tightly clumped.
So they're not learning from each other.
It's just that there's a payoff benefit to being inside the clump. Or
there could be that if we all are on the same social media app, then we can all talk to each
other. And so there's a payoff benefit to our all doing the same thing.
So those are not information cascades. But on the other hand, it may be that I see you buying
an Apple phone, and I think, well, I wasn't sure whether to get an Android or an Apple, but since
you got an Apple, maybe I should too. And that kind of situation will often lead to information cascades.
Gotcha.
So the preconditions for informational cascades are we have a group of people making a decision in sequence, and then people have private information, but they also have information revealed by the actions of those acting ahead of them in the queue.
That's right, Joe. How likely is it that the wrong cascade occurs?
So people end up making the wrong decision en masse?
Yeah, it turns out to be shockingly high under rather mild conditions.
So to answer that question,
it's important to have a benchmark. And so a nice benchmark would be, what if we could just
talk to each other? And so when it's my turn making a decision, I just talk to everyone
who went ahead of me, and I ask each of them, what was your signal?
Now, in that situation, if you accumulate enough signals, then you can make an almost perfect decision.
So if I'm the one millionth person in line, I've collected up a million signals about whether to buy an iPhone. And if different people are receiving signals
that have at least some degree of independence
so that the different signals are incrementally informative,
then a basic theorem of statistics and about probability updating
says that you're going to be able to draw conclusions that are correct almost for sure.
You can just collect enough signals.
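The benchmark described here can be checked exactly. A sketch using the 0.51-accuracy signals from the example; the majority-vote rule below is a stand-in for optimally pooling the shared signals, which for independent symmetric binary signals and a flat prior it is:

```python
from math import comb

def majority_correct(n, p=0.51):
    """Exact probability that a majority of n independent private signals
    (each matching the truth with probability p, n odd) points the right
    way -- the 'everyone shares their signal' benchmark."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 101, 1001):
    print(n, round(majority_correct(n), 3))
```

With one signal you are right 51% of the time; with a thousand shared signals the pooled decision is already right roughly three times out of four, and the accuracy keeps climbing toward certainty as more signals are collected.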
In stark contrast, in a cascade setting, after, say, three people have adopted the iPhone, I might fall into a cascade.
And so that means that really the only information are three signals in a row.
So what if these signals happen to be rather imprecise?
So that if buying an iPhone is good, then with probability 0.51, I get a favorable signal saying that it's a good idea.
But with probability 0.49, I get an unfavorable signal.
And similarly, if the iPhone is bad, these are reversed.
So the signal is just a little bit informative.
So in that case, the probability that there are three adoptions in a row, regardless of whether the iPhone is good or not, is one half times one half times one half.
And so you get a one-eighth chance of getting into a mistaken cascade right off the bat.
And then you have a million people all making the wrong decision.
That's not even the only way to get into a bad cascade.
But that just gives an instant illustration that you can really easily fall into the wrong decision. I'll just add, it's worse than that because what if Joe Walker really likes smartphones?
And so his signal is a little bit more accurate than mine, just a tiny bit. So you go first, and everyone in the world knows that Joe Walker's
signal accuracy is 0.52 instead of 0.51. In other words, the signals are almost pure noise.
They're not too good; you really need a lot of them to make a good decision. But everyone knows
that Joe has studied things a little more carefully. Well, as soon as
you buy the iPhone, well, your signal dominates mine, at least if it's this simple binary signal
up, down. So I'm immediately in a cascade. And I'm just going to imitate you. And then whoever
looks at me is going to say, well, I know David's decision is uninformative, so I'm going to follow Joe, too.
And that goes on to a million people.
So that means it's not just one eighth probability.
It's 100 percent chance that a cascade forms after one person.
There's almost a 50 percent chance that that cascade is wrong.
So depending on how you twist the assumptions, you can get different numbers.
But what's generally true is that the chance of a mistaken cascade is surprisingly high.
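The arithmetic behind the asymmetric-accuracy example above is a one-line Bayes calculation with a flat prior, which can be verified directly (the 0.52 and 0.51 accuracies are the ones from the example):

```python
# Joe's signal matches the truth with probability 0.52, mine with 0.51.
p_joe, p_me = 0.52, 0.51

# Likelihood ratio for 'good' in the worst case for adopting: Joe
# adopted (revealing a high signal) while my own signal came up low.
odds_good = (p_joe * (1 - p_me)) / ((1 - p_joe) * p_me)
print(odds_good)   # greater than 1, so I adopt against my own signal

# Since Joe's single signal decides everything, the cascade is wrong
# exactly when Joe's signal was wrong:
print(1 - p_joe)   # 0.48 -- almost a coin flip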
One of the features of your theory is that fads are as fragile as they are spectacular.
Why are cascades so brittle?
Well, cascades are brittle in this model because people are smart. Right now we're talking about a model where people are rational and they understand
everything that's going on. So when a cascade forms, it usually forms quite early, and it always forms before very conclusive information has arrived.
So the cascades are not too well informed.
But everyone understands that.
They're all sophisticated about this.
So because of that, if I'm the millionth person, yeah, I know there's a cascade and I should follow it.
But I also know that the chance that it's the right decision is not 99%.
It's maybe 60% or 70%.
There's a very significant chance that it's wrong.
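That bounded confidence can be computed directly. In the binary-signal setup, a cascade freezes the public evidence at whatever the first few revealed signals were, no matter how many people later join. A minimal Bayes computation (the two-net-favorable-signals start and the 0.6 accuracy are illustrative assumptions):

```python
def posterior_good(k_high, k_low, p):
    """P(good | k_high high and k_low low revealed signals), flat prior."""
    num = p**k_high * (1 - p)**k_low
    den = num + (1 - p)**k_high * p**k_low
    return num / den

# A cascade that started after two net favorable signals of accuracy 0.6:
print(posterior_good(2, 0, 0.6))   # ~0.69 -- far from 99%

# A million imitators later, the informative evidence is unchanged, so
# the probability that the herd is right is still the same ~0.69.
```

That is why even a tiny piece of genuinely new public information can outweigh the herd and dislodge the cascade.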
So that means if you have any shock to the system, then people will suddenly start changing their minds.
So maybe a million people have bought iPhones, and now a new review comes out that contains new information.
I took apart the iPhone, and I discovered this.
Or a new competing phone comes out.
There's some new information.
It doesn't have to be highly informative information.
It can be just a little bit.
But as long as it's a little bit informative,
then I had only this moderate preponderance of evidence
saying that the iPhone's good.
Now I'm no longer sure.
Now I start using my own signal.
Now, if I buy an Android, the next person looks and says, aha, a million people bought the iPhone, but the last person bought an Android. There was some shock to the system, so now I'm going to use my own signal too. And the process occurs again. So a few people are going to act based on their own signals, and then quickly they'll fall into another cascade. So that's the fragility.
Fads are something different, and I can get to that too. In the original model, the way it was set up,
it doesn't capture bubbles or crashes because the cost of adoption in a bubble,
for example, buying a stock or buying a home, increases as the bubble forms.
So how do you apply cascades to thinking about bubbles?
Okay, well, that's a really central question because we think about market bubbles.
These are happening in markets where prices are being set.
So if it's okay with you, I'll take a step back and also tell you about fads,
and then we can go on to market bubbles.
Let's do that.
So the fads are much like the situation where we had fragility, right?
You have some shock to the world, and now people use their own signals.
The cascade is dislodged.
But let's think about a situation
where there's only a chance
that the shock to the system occurs.
So 100 people have acted.
Now there's a chance that the world has changed,
5% chance.
We're not sure about whether it's really changed or not,
but there is that possibility.
Now, the 101st individual, so let's say that the cascade was to buy an iPhone. Now, the 101st
person gets a signal that says, no, Android is better. And furthermore, he knows that maybe something's different now about the technology of
the Androids versus Apple. So that might be enough to get him to use his own signal, even though he's
not sure whether the world is really different or not. And so you can have a situation where in
reality, the world didn't change. The iPhone is exactly as
good as it was before, so is the Android. But the sheer possibility that the world might have
changed can be enough to dislodge a cascade. And in fact, in our theoretical model, we give a
little example where there's a 5% chance that the world has actually changed. But then the
probability that people switch their action and switch over to
the opposite cascade is more like 8%. So the chance that the
cascade, the mass behavior, switches is substantially higher, something like 60% higher, than the chance that the world has changed.
And so we call these things fads, where millions of people change their behavior,
the world itself may not have changed, the fundamentals may not be different,
and the probability of the behavior changing is way higher than the chance of the world changing.
Okay, so with that kind of fad in mind, let's now think about cascades in markets where there's price setting. And so you raised a key insight here, which is that if I'm buying a stock,
and let's say three people in a row are buying a stock and now
you see that a bunch of people have bought the stock, that doesn't necessarily mean you
should buy the stock too because as we buy the stock, we're driving up the price.
So on the one hand, you're gaining information that the stock is good but on the other hand,
you also learn that the price is correspondingly higher.
And so in the simplest model of this, in fact, prices get set exactly to prevent information cascades.
So if you think about that, if prices were set so that there was a cascade, then that would mean that there were only buyers, no sellers, or that there are only sellers, no buyers.
Everyone is just imitating the same action.
That doesn't sound like a market equilibrium. So that's a key insight. It turns out, however,
that if you have subtler models of how capital markets work that are more realistic in certain ways, it turns out that mistaken cascades can rear their ugly heads again.
So there are, I'd say, actually three different things involved here.
First, I'll step away from price setting again. Think about excitement about Bitcoin,
where I see my friend Joe is talking about Bitcoin all day.
And so he reads about Bitcoin.
Then I might say, well, Joe thinks it's interesting to read about Bitcoin.
I'll read about Bitcoin, too.
Now, that has nothing to do with the price of Bitcoin, right?
This is just an action. It's not trading on a market.
And so it's easily possible that a cascade of studying Bitcoin or of studying cryptocurrencies or of talking about them can form. So that's one way in which cascades can affect markets,
despite this argument about how prices get set to block cascades.
So the cascade could be in the initial process of getting interested in some investment
or of losing interest in the investment.
Okay, now let's get back to prices.
So a subtler situation, which is very real in securities markets, is that you have some traders who have private information and others who are just trading for liquidity reasons.
So a liquidity trader might just be selling her shares to send her kids to college.
And it's not because she has any special information.
And then the informed trader, well, she may have done careful analysis of the firm
and have some positive or negative signal about its prospects. So there are models of the situation
where there are the informed traders, but the informed traders may be kind of unsure
about the quality of their information
so that it can be helpful to an informed trader
to know are other informed traders
getting the same signal as me
or are they getting the opposite signal?
So for example, if the state of the world is good,
so the stock is very valuable,
then maybe three quarters of the signals will be high and one quarter low.
And if the state of the world is bad, then it's reversed.
So three quarters of the signals are low and one quarter are high.
Now I've got this high signal, but I don't know, is it really the good state or is it just that I was unlucky and I got the wrong signal in the bad state? So now I look
at the market and I see someone buys or if I can't see that person's trade, I see the price blip up
and I infer someone bought. Well, now I may feel confirmed about my information signal.
So now I think there really was private information here.
This person got the same signal as me, so I'm going to buy too.
So notice the cascade-like flavor to this: I see a buy, and then I buy too, even though the price got pushed up.
Or even, I saw the price go up, and now I'm going to buy too. So it turns out that you can get a kind of herd behavior or information cascade.
It's not that everyone trades in the same direction, but the informed traders may choose
to trade in the same direction. So this can have the effect that investors are reinforced in imitating the behaviors that they observe.
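The reinforcement described here can be made concrete with Bayes' rule, using the three-quarters/one-quarter signal structure from the example. Treating an observed buy (or an inferred buy from a price blip) as revealing exactly one more high signal is a simplifying assumption:

```python
p = 0.75  # P(high signal | good state), as in the example; flat prior

# After my own high signal alone:
post1 = p / (p + (1 - p))
# After also inferring that another informed trader bought,
# i.e. one more high signal:
post2 = p**2 / (p**2 + (1 - p)**2)
print(post1, post2)   # 0.75, then 0.9
```

Seeing the other buy lifts my confidence from 75% to 90%, which is why I may rationally buy too even at the now-higher price.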
Why have we evolved this propensity for imitation? So in the ancestral environment in which humans evolved, there was value to learning from others.
And so there's a need to locate food, to locate prey animals for hunting and gathering. There's a need to know which foods
are poisonous and which are helpful, or what kind of cooking or treatment is needed in order to make
something edible. And no single individual can directly gather enough information to survive effectively. So there has been
cultural evolution and development of ideas, and it is the accumulation of ideas which has allowed
the human species to successfully inhabit a wide range of different environments. And so that means that it's crucial
that we acquire information from others. To some extent, we can talk to others and acquire
information that way, but it's also often efficient to just see what the other person does
and take that as useful information. Tell me about social emergence.
Well, great.
So the general concept of social emergence is that you get outcomes
that occur at the group level that are not just scaled-up versions
of what any single individual does. So if I buy an iPhone and then a million people buy an iPhone,
that's just a scaled-up version. But it doesn't always work that way. My
favorite example of social emergence is death spirals in army ants.
And so for these army ants, it turns out that sometimes they get stuck walking in a circle.
These spirals can be hundreds of feet wide, and they walk days and days on end until they starve to death and die.
Now, there's no instinct for the individual ant to walk in circles until it dies. And in fact,
natural selection would operate strongly against that individual instinct. But there is a logic
here. So ants have an instinct to forage and look for food somewhat randomly, and they also have an instinct to follow each other
via pheromone trails. And it makes sense that if an ant has found food,
other ants should follow, so that larger numbers of ants can gain access to the food source.
So this system works very well for ants usually, but occasionally the ant
that's at the front of the line might accidentally start following the ant that's at the end of the
line. And if so, then they start walking in circles, and it's a highly dysfunctional
social outcome. A bit the way that a market bubble is a dysfunctional social outcome.
So we should think of bubbles and crashes as the financial version of these ant-death spirals?
Well, I think there's a parallel for sure, that the outcomes can be disastrous,
and they can be disastrous for both individuals and society as a whole.
And yet there's still a logic to them at the individual level that we can understand why individuals are behaving the way they are.
And then we can see the bad outcome emerging as a surprise. That is, it's logical. After it's explained,
we can see why it emerges, but it's counterintuitive at first. If you just see the aggregate outcome
and you don't know the microstructure behind it, then it's a surprise.
So I can give you a different example of an emergent outcome in finance, which is that people often trade actively.
So they will invest in active mutual funds or on their own account.
They'll bet on cryptocurrencies or tech stocks.
They'll go on Robin Hood and invest in small bankrupt companies and so on.
And empirically, what's been found is that when retail investors are trading actively, that on average, they underperform.
So they basically waste any transactions costs of trading.
Nowadays, those have gotten really, really low.
And individual investors even seem to have a talent for losing money.
So they seem to have a skill at underperforming the market.
So that raises the question of why people are trading actively when they could just hold an index fund,
or just passively hold the stock market rather than
take bets on individual stocks. And so my co-authors Bing Han, Johan Walden, and I have a model of this, which has this property of social emergence.
So at the level of the individual investor, an investor will tell her trading strategy to
another investor, and tell that other investor what return she experienced on her trading strategy.
And there are two trading strategies. There's the active strategy and the passive strategy.
And one interpretation of the active strategy is that it's high variance. So it's something quite risky, maybe taking a big
position in Bitcoin. And the passive strategy might be not trading Bitcoin. And a key assumption
of the model is what we call self-enhancing transmission, which is that if I earned a high return on my
investment, I'm more likely to tell you about it than if I earned a lower return.
And it turns out there's a lot of evidence that people actually behave that way
when talking about their investments.
So the way that plays out then... oh, one more thing: another key assumption is that the receivers of messages are naive about this.
That is, when they hear about a high return on a stock, they don't stop and think, well, I wouldn't have heard about this if it had a low return.
So, there's a selection bias in what they're hearing about, but they don't discount for that. And they also think that past performance is indicative of future
performance. So they hear about a good return, then that makes it more likely that they will
switch to that strategy than if they hear about a low return. Okay, so the way this plays out,
it turns out that high variance strategies spread through the population at the expense of low variance strategies.
The reason for that is that if a strategy has high dispersion of return, then quite often there are really big returns in the upper tail that are getting reported.
And then that's persuading people to adopt the strategy.
The really low returns are there, but they don't get reported, so they don't really hurt the spread of the strategy. So even if the high
variance strategy underperforms on average, if it has a slightly lower expected return,
because of this transmission bias and what gets reported, it spreads through the population.
So you get this emergent outcome of popularity of very risky
strategies, such as actively trading individual stocks or cryptocurrencies, even if it loses
money on average or causes you to bear excessive risk on average, but it just socially emerges.
And so in particular, in this setting, there's nobody in the whole economy who's saying,
"I really like high variance; I'm going to buy this because the variance is high."
In fact, nobody may even be aware of whether a strategy has high variance or low variance.
So it's not that at the individual level people have a taste for variance, and then that scales up to the
aggregate level.
Instead, it socially emerges through bias in the social transmission process.
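The mechanism Hirshleifer describes can be sketched as a small simulation. This is a minimal caricature of the Han, Hirshleifer, and Walden idea, with made-up parameters rather than anything from the actual paper: the active strategy has a lower mean return but much higher variance, big wins get talked about far more than losses, and naive receivers copy the strategy behind any return they hear about that beats their own.

```python
import random

random.seed(0)

# Illustrative parameters (my own assumptions, not from the paper).
N = 1_000            # investors
MEETINGS = 20_000    # random pairwise conversations
PASSIVE = (0.05, 0.02)   # passive: higher mean return, low variance
ACTIVE = (0.03, 0.30)    # active: lower mean return, high variance

strategies = ["passive"] * (N - 100) + ["active"] * 100

def draw_return(strategy):
    mean, sd = ACTIVE if strategy == "active" else PASSIVE
    return random.gauss(mean, sd)

def prob_of_telling(r):
    # Self-enhancing transmission: big wins get talked about,
    # mediocre or bad returns mostly stay quiet.
    return 0.9 if r > 0.10 else 0.1

for _ in range(MEETINGS):
    sender, receiver = random.sample(range(N), 2)
    r = draw_return(strategies[sender])
    if random.random() < prob_of_telling(r):
        # Naive receiver: ignores the selection bias in what gets
        # reported, and switches if the reported return beats a draw
        # from their own current strategy.
        if r > draw_return(strategies[receiver]):
            strategies[receiver] = strategies[sender]

share = strategies.count("active") / N
print(f"share using the high-variance strategy: {share:.2f}")
```

Even though the active strategy underperforms on average, its fat upper tail dominates what gets reported, so it spreads from a 10% seed to most of the population.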
That is the perfect segue, because I'd like to move from that particular example to a
more general point.
In your 2020 presidential address to the American Finance Association, you made what I consider to be a deep and profound argument.
You argued that social emergence can create the illusion of a behavioral bias for something when no such bias exists.
And I'll quote you from the paper.
We cannot in general infer from an individual behavior that there is a direct bias
for that behavior. Social transmission bias indirectly generates outcomes that can look
very similar to direct individual level behavioral biases, end quote.
This one doesn't really end in a question mark, but I just want to throw it to you to unpack that idea, and take as long as you like.
Absolutely.
So this example I gave did indeed segue into this, that here you have a kind of mimicry: it looks like investors have a taste for high variance, or maybe a mistaken
belief, maybe that they're overconfident in their ability to choose stocks, and that that's
getting them to trade too much. But in fact, neither of those is the case in this model.
So there's a kind of mimicry that the situation creates the illusion that people have a taste for high risk
or an illusion that they're overconfident. But it's not such a simple direct individual level
bias. Instead, they are subtle biases in the process of social transmission that then cause
these outcomes to emerge. So I'll give you another example of mimicry. And this is another
paper of mine with the same co-authors. So our premise is another transmission bias,
which is that when someone chooses to consume, that's more visible to others than if I choose
to stay at home and do nothing in particular. So if I line up at Starbucks to buy
an expensive cup of coffee, people see me lined up and they think it's appropriate to buy expensive
cups of coffee. But if I'm sitting at home with my cheap cup of coffee, no one notices.
Or if I have a boat that's parked in my driveway, people notice and say, wow, it's cool to buy boats and
have expensive pastimes. But if my driveway is empty, they don't notice anything in particular.
So once again, what if people fail to adjust for that selection bias in what they're observing?
Then they'll tend to update towards thinking
that other people are consuming heavily and that therefore consuming heavily is a good idea.
For example, maybe there's a certain risk we all face of wealth disasters. And so there's
some importance to saving for the future, but I don't know exactly how severe that risk is.
Well, I update toward thinking that that risk is not too
severe because I think I'm seeing my friends all consuming really heavily. So in consequence,
I'm going to start consuming heavily. But now people actually are consuming heavily,
and they're becoming targets of observation for others. So you have this self-feeding process, you have a positive feedback that
turns out can lead to extremely high levels of overconsumption in society as a whole.
Okay, so we have social transmission bias, we have overconsumption, but this is mimicry again,
because it's not that any single individual has an excessive taste for consuming now instead of in the future.
It's not a really high rate of time preference.
Or it's not some kind of direct psychological bias like temptation that I'm just tempted to consume.
And there are behavioral models that say this, that, well, people are tempted to consume, and that's a direct bias toward consuming too much.
But there isn't any such direct bias toward consuming too much in our model.
Instead, it just emerges socially through bias in the social transmission process.
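The positive-feedback story above can also be sketched as a toy simulation, with all numbers being illustrative assumptions rather than anything from the paper: agents notice peers in proportion to how much those peers consume, naively treat what they happen to see as typical, and drift toward it.

```python
import random

random.seed(1)

N = 500
ROUNDS = 50
# Each agent's consumption as a fraction of income (invented starting range).
consumption = [random.uniform(0.1, 0.5) for _ in range(N)]

def observed_peer(pool):
    # Visibility bias: the probability of noticing someone rises with
    # their consumption (the boat in the driveway gets seen; the empty
    # driveway doesn't).
    return random.choices(pool, weights=pool, k=1)[0]

start_avg = sum(consumption) / N
for _ in range(ROUNDS):
    for i in range(N):
        seen = observed_peer(consumption)
        # Naive update: move partway toward the observed consumption
        # level, treating a selected sample as if it were typical.
        consumption[i] += 0.2 * (seen - consumption[i])

end_avg = sum(consumption) / N
print(f"average consumption: {start_avg:.2f} -> {end_avg:.2f}")
```

Because the visible sample is tilted toward heavy consumers, the perceived norm sits above the true average, and average consumption ratchets upward even though no individual has any direct taste for overconsuming.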
So there's a whole family of models where people have relatively direct reasons for overconsuming.
So I want to show off.
I want to signal that I have wealth, as in Veblen's theory.
So I overconsume to show off.
And there's this direct benefit to showing off.
But again, in our model, people don't even know that they're overconsuming.
So it's another example of this mimicry effect.
This kind of mimicry effect means that we as researchers can misinterpret the evidence,
that we think there's a certain kind of bias when there isn't.
And that makes a huge difference for policy in this setting,
because if people are just subject to temptation, then if you gave a disclosure and said, here's how much everyone's consuming, it wouldn't help at all, because everyone understands that
everyone's over-consuming. We already knew that. But in this social transmission bias setting,
some people at least may have misperceptions about how much other people are consuming.
And I might learn, whoa, those smart guys over there are not consuming very much.
Well, maybe I shouldn't either.
And so disclosures can actually sometimes help cure the overconsumption problem. Oh, so all that is another illustration of mimicry, but one of the
biggest that I'd like to tell you about, since you mentioned this presidential address of mine,
is the main model of that address, which was a model of social transmission bias and bubbles.
And so in that model, again, you have a population of people
where they get drawn in pairs to meet and share information with each other. But in these models,
people have private information signals that help tell them how much an asset is worth.
So everyone's trying to value the security. We can call it the stock market.
And they're trading, buying, and selling based on the information they have. And over time,
people are accumulating more and more information because here I met someone. Now I've got that
other person's signal as well as mine. Now there's another meeting. Now I'm sharing my two signals.
And so the signals tend to grow exponentially over time, which ideally, if transmission were perfect, people would be becoming better and better informed over time.
And they would converge toward very efficient valuations of the stock market. So we modify that model in just a little way,
which is what if when I share information with you, I bias that information? It could be a very
small bias. Maybe it's an optimistic bias. So I take this average signal that I've accumulated,
but I add 1% to that signal or some tiny amount.
I twist it a little bit optimistically. Maybe I think that that makes my conversation more
exciting. Maybe I want to be viewed as a positive person. There are any number of reasons why
I might do that. Well, the effect of that is that there's a recursion here, because this biased signal
that I've given to you, that now becomes an input to you when you add the bias and give it to someone
else. And so these biases are accumulating recursively. So the numbers of signals are
growing exponentially. So we're updating our beliefs more and more strongly based on the signals.
But sadly, the biases are accumulating as well.
And so the effect of that is that you get a bubble and the bubble starts out growing slowly.
But as the biases and signals accumulate, it starts getting faster and faster until you get explosive growth of the bubble.
Finally, ultimately, as public information arrives that gets more and more informative,
eventually reality has to set in.
We get the bad news and the bubble corrects.
So you get a kind of hump-shaped pattern of a rise of a bubble,
eventual correction of a bubble.
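The recursion Hirshleifer describes, where biased signals become inputs that get biased again, can be caricatured in a few lines. This is a stylized sketch with invented parameters, not the model from the address: the average bias compounds each round of conversations, while increasingly informative public information eventually dominates and forces the correction, producing the hump shape.

```python
TRUE_VALUE = 100.0   # fundamental value of the asset (illustrative)
BIAS = 0.02          # optimistic twist added each time a signal is passed on
ROUNDS = 30

bias_level = 0.0     # average bias embedded in circulating signals
prices = []
for t in range(ROUNDS):
    # Recursive accumulation: the biased signal I received becomes the
    # input that I bias again before passing it on.
    bias_level = (1 + BIAS) * bias_level + BIAS
    # Public information becomes more and more informative over time;
    # late in the episode it dominates the word-of-mouth signals.
    public_weight = min(1.0, (t / (ROUNDS - 1)) ** 6)
    social_price = TRUE_VALUE * (1 + bias_level)
    price = (1 - public_weight) * social_price + public_weight * TRUE_VALUE
    prices.append(price)

peak = max(prices)
print(f"start {prices[0]:.1f}, peak {peak:.1f}, end {prices[-1]:.1f}")
```

The compounding term grows slowly at first and then explosively, and the rising public weight pulls the price back to fundamentals, so the path rises, accelerates, peaks, and corrects.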
And this turns out to have this mimicry process too, because returns are high for a while, and then beliefs become more and more optimistic.
So that looks like over-extrapolation. It looks like people see high returns and they're getting more optimistic. It looks like those returns are
causing the over-optimism. But that's not happening in the model, right? I've described
the assumptions and nowhere is someone looking and saying there was a high return, therefore I
think prospects are good. But it creates the illusion of over-extrapolation.
So people could look at the data and they could think there's over-extrapolation,
even though there isn't any. So once again, we have to go beyond behavioral finance.
So behavioral finance says we look at a psychological bias. We look at the consequence of that bias fairly directly for pricing, trading, and returns.
And instead, we need to go to social finance, which says, well, wait, that stuff is all
valid, but we also have to look at the process of social transmission between investors.
There's one school of thought which attempts to explain bubbles by using a family of what I'll
call, for short, vertical heuristics. So this is like Daniel Kahneman's system one. And,
you know, I think it's worth noting that all of Kahneman and Tversky's experiments were conducted
on individuals, usually Western university students, answering
their questionnaires in isolation. And it spawned this sort of zoo of heuristics and biases,
but they are all very individualistic. And you've spoken about overconfidence as one sort of
example. Then obviously there's like extrapolative beliefs, and Gennaioli and Shleifer have a great model where they try and use the representativeness heuristic to explain extrapolative beliefs.
But we are a cultural species, and to hijack Kahneman's metaphor, I think it's more fertile to think about system three,
or what I'll call, for short, horizontal heuristics rather than vertical heuristics. And we react to one another.
We don't act in isolation.
It's not like when a bubble happens,
it's just like everyone coincidentally making the same mistake
around the same time.
People are reacting to one another.
And what is so interesting about your model
is that you can explain bubbles and crashes by using social transmission bias.
And it also highlights the fact that what happens is consistent with models of vertical heuristics, but it's an illusion to try and explain the bubble
by using those heuristics, like overconfidence and extrapolative beliefs. My little exposition there
was very amateur, but am I on the right track, just in kind of summarizing things back to you?
I'd say you're on the right track, but from my point of view,
a little too emphatic. And so I'd like to emphasize that it's not either or. And so I
have great sympathy for the concept of over-extrapolation, overconfidence, and so on.
I think that individual level psychological biases are important.
But I love your phrasing of system three, which I'm going to steal shamelessly in the future.
And I think that those system three processes are crucial, too.
And also that system one processes underlie system three in subtle and surprising ways, right?
So that it can, so they're still there.
The individual level biases are still there, but they can underpin, they can be at least part of the underpinning of how we interact socially.
I guess my remaining critique or reservation would be that a lot of the individual level biases
we uncover through survey evidence.
And I wonder whether, and this is to push back
against your observation that it's not an either or,
but I wonder whether system three is actually the driving force, but then people
post hoc rationalize their system three inspired decisions, and they tell us some story which is
consistent with extrapolative beliefs. Well, that triggers one of my big punchlines in life here,
which is in the spirit of your thinking: that behavioral finance has taken over-extrapolation
as given, or it might take contrarianism as given. And so we have the phrase, "the trend is your friend." So basically,
momentum trading, I'm going to buy on the trend. But we also have buy on the dips, which says,
well, I'm going to trade against the trend. And so in behavioral finance, that stuff is
exogenous, and it's supposed to be some individual level inherent bias.
But what social finance says is, wait, these ideas can spread from person to person.
I hear the trend is your friend, and maybe that persuades me.
And so it spreads contagiously like an infection.
And at other times, maybe it's buying the dips that's spreading.
So in that sense, I'm very much on board with you. Now, as far as your skepticism
of survey evidence for eliciting biases, well, I do think that some biases are much better
documented than others. And there are some that I find persuasive.
Which ones?
Well, for example, overconfidence in the sense of miscalibration, that I view my opinions as more accurate than they really are. I think that there's a tremendous accumulation of evidence
for that. And experts, people who are more expert than me in the
psychological evidence also believe that. And so that would be an example. Limited attention
effects as well. So Dave, tying this all together,
how should we think about bubbles in asset markets?
Are they driven by irrational behavior, whatever that means? Or are they phenomena which are collectively irrational but individually rational, which we could explain
using, for example, models like informational cascades? How should we think about bubbles
and rationality? So there are models of bubbles with full rationality, but I feel there is more promise in recognizing the reality that people are imperfectly rational.
And that almost surely plays into capital market behavior and especially bubbles. So the key distinction that I would emphasize
would be between social or non-social, that there are many theories of bubbles,
behavioral theories, as well as the rational bubble theories that are non-social. That is,
people don't talk to each other. They don't observe each other. The only social interaction is through anonymous trading on the capital market. And so I call those the old kind of approach to bubbles.
And I think the right approach to bubbles is to view them as an inherently social phenomenon,
that you have waves of excitement that become amplified through social transmission processes.
So rational or irrational, what I'd suggest is we should look to individual processes that are not
totally crazy. After all, people are trying to do as well as they can for themselves, and they do think about things, but they're not infinitely rational either.
So to assume that we can all kind of disentangle the complicated strategies of everyone else as they dynamically evolve through time, that probably places too great a burden on the cognitive processing powers and the computational abilities of the agents.
So instead, we should think of agents as following pretty smart heuristics,
but heuristics that can go astray. So in the example of this model in my presidential address
that I was telling you about, people are pretty reasonable on the whole. They're sharing signals. They're making
quasi-rational updates based on those signals, so-called Bayesian updates, which just means
using the laws of probability correctly. The only problem is this little bit of bias that's added in.
So there's a bit of deviation from rationality in the fact that people don't understand that that bias is coming to them from others.
So that would be one example.
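The "Bayesian update" mentioned above has a simple closed form for normal signals, which also makes the effect of a small transmission bias concrete. A minimal sketch, where the 1% twist and the particular signal values are illustrative assumptions:

```python
def pool_signals(mean_a, prec_a, mean_b, prec_b):
    # Bayesian pooling of two independent normal signals about the same
    # quantity (with a flat prior): precisions add, and the posterior
    # mean is the precision-weighted average of the signal means.
    prec = prec_a + prec_b
    mean = (prec_a * mean_a + prec_b * mean_b) / prec
    return mean, prec

# Honest sharing: two equally precise signals pool to their average.
mean, prec = pool_signals(100.0, 1.0, 110.0, 1.0)
print(round(mean, 2), prec)  # 105.0 2.0

# Biased sharing: the sender twists her signal 1% upward before passing
# it on, and the naive receiver pools it as if it were honest.
BIAS = 0.01
biased_mean, _ = pool_signals(100.0, 1.0, 110.0 * (1 + BIAS), 1.0)
print(round(biased_mean, 2))  # 105.55
```

Each receiver's pooled, slightly-too-high mean then becomes the input that the next sender twists again, which is exactly the recursion that compounds into a bubble in the model.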
There are many different possible ways of understanding this, but I'd say that general approach would be let's have people who, like real people, are at least somewhat sensible, but also like real people, are not Commander Data or Mr. Spock.
They don't have perfect computational abilities. That seems like a realistic view of humanity.
David Hirshleifer, thank you so much for sharing your time and your insights.
Thanks so much for having me, Joe. I really enjoyed it.
Thank you so much for listening. I hope you enjoyed that conversation as much as I did. For show notes, including links to everything we discussed, you will find those on my modestly
titled website, josephnoelwalker.com. That's my full name, J-O-S-E-P-H-N-O-E-L-W-A-L-K-E-R.com.
Please do subscribe to or follow the podcast,
depending on which app you use,
to ensure that you never miss updates when we release new episodes.
The audio engineer for the Jolly Swagman podcast is Lawrence Moorfield.
Our very thirsty video editor is Alf Eddy.
I'm Joe Walker.
Until next time, thank you for listening.
Ciao.