Freakonomics Radio - 267. How to Make a Bad Decision
Episode Date: November 17, 2016
Some of our most important decisions are shaped by something as random as the order in which we make them. The gambler's fallacy, as it's known, affects loan officers, federal judges -- and probably you too. How to avoid it? The first step is to admit just how fallible we all are.
Transcript
Let's say I flip a coin.
And it comes up heads.
Now I flip it again.
Heads again.
One more time.
And, wow, it's three heads in a row.
Okay, if I were to flip the coin one more time, what are you predicting?
Here's what a lot of people would predict.
Let's see.
Heads, heads, heads.
It's got to come up tails this time.
Even though you know a coin toss is a random event, that each flip is independent,
and therefore the odds for any one coin toss are 50-50.
But that doesn't sit well with people.
Toby Moskowitz is an economist at Yale.
We like to tell stories and find patterns
that aren't really there. And if you flip a coin, say, 10 times, most people think — and
they're correct — that on average you should get five heads, five tails.
The problem is they think that should happen in any ten coin flips.
And, of course, it's entirely possible that you might get eight heads and two tails, or even ten heads in a row.
But people have this notion that randomness is alternating, and that's not true.
This notion has come to be known
as the gambler's fallacy. This is a common misconception in Vegas. You go to the slot
machine, it hasn't paid out in a long time, and people think, well, it's due to be paid out. That's
just simply not true if it's a truly independent event, which it is the way it's programmed.
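A quick simulation (ours, not from the episode) makes Moskowitz's point concrete: in ten fair flips, exactly five heads is actually a minority outcome, and long streaks are routine.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

TRIALS = 100_000
exactly_five = 0
streak_of_four = 0

for _ in range(TRIALS):
    flips = [random.random() < 0.5 for _ in range(10)]
    if sum(flips) == 5:
        exactly_five += 1
    # look for a run of four or more identical outcomes
    run, longest = 1, 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if prev == cur else 1
        longest = max(longest, run)
    if longest >= 4:
        streak_of_four += 1

# Exactly five heads happens only about a quarter of the time
# (C(10,5) / 2^10 is roughly 0.246), while a streak of four or more
# shows up in nearly half of all sequences (about 0.465).
print(f"exactly 5 heads: {exactly_five / TRIALS:.3f}")
print(f"streak of 4+:    {streak_of_four / TRIALS:.3f}")
```

So a run of heads is not evidence that tails is "due"; it is what randomness normally looks like.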
So, Toby, you have co-authored a new working paper called Decision Making Under the Gambler's Fallacy.
And if I understand correctly, the big question you're trying to answer is how the sequencing of decision making affects the decisions we make.
Is that about right?
That's correct. In fact, the genesis of the paper was really to take this idea of gambler's fallacy,
which has been repeated many times in psychological experiments,
which is typically a bunch of undergrads playing for a free pizza,
and apply it to real-world stakes where the stakes are big,
there's a great deal of uncertainty, and these decisions matter a lot.
Some of these decisions matter so much, they can mean the difference between life and death.
So these probably aren't the kind of decisions
we should be making based on this.
From WNYC Studios, this is Freakonomics Radio, the podcast that explores the hidden side of everything.
Here's your host, Stephen Dubner.
So, Toby Moskowitz and his co-authors, Daniel Chen and Kelly Shue, have written this interesting research paper.
It's called Decision Making Under the Gambler's Fallacy.
It's the kind of paper that academics publish by the thousand.
They publish in order to get their research out there, maybe to get tenure, etc.
So, it matters for them.
Does it matter for you?
Why should you care about something like the gambler's fallacy?
Well, we often talk on this program about the growing science of decision-making.
But it's funny.
Most of the conversations focus on the outcome for the decision-maker.
What about the people the decision is affecting?
What if you are a political refugee hoping to gain asylum in the United States?
There's a judge making that decision.
What if you're trying to get your family out of poverty in India by starting a business and you need a bank loan?
There's a loan officer making that decision.
Or what if you're a baseball player waiting on a 3-2 pitch that's going to come at you 98 miles an hour
from just 60 feet, 6 inches away?
That's where the umpire comes in.
We'll start with Major League Baseball.
That was a simple one.
Moskowitz and his co-authors
analyzed decision-making
within three different professions,
baseball umpires, loan officers, and asylum judges, to see whether they fall prey to the
gambler's fallacy. Because there's all kinds of possible areas where the sequence of events
shouldn't matter, but our brains think they should, and that causes us to make poor decisions.
Decisions that are the result of?
What I would call decision heuristics.
A heuristic being essentially a cognitive shortcut.
Now, why choose baseball umpires?
Because baseball has this tremendous data set called PitchFX, which records every pitch from every ballgame.
And what it records is, if you look at the home plate umpire,
where the pitch landed, where it was located within or outside the strike zone, and also what the call was from the umpire. Moskowitz and his colleagues looked at data from over 12,000
baseball games, which included roughly 1.5 million called pitches. That is the pitches where the
batter doesn't swing, leaving the umpire to decide whether the pitch is a ball or a strike. As they write in the paper, we test whether baseball
umpires are more likely to call the current pitch a ball after calling the previous pitch a strike
and vice versa. There were 127 different umpires in the data. The researchers did not focus on
pitches that were obvious balls or strikes.
If you take a pitch dead center of the strike zone, umpires get that right 99% of the time.
Instead, they focused on the real judgment calls.
So the thought experiment was as follows. Take two pitches that land in exactly the same spot.
The umpire should be consistent and call that pitch the same way every time. Because the rules state that each pitch is independent in terms of calling it correctly.
It's either in the strike zone or it's not.
The first thing the PitchFX data shows is that umpires are generally quite fallible.
On pitches that are just outside the strike zone, they're definitely balls, but they're close.
On those pitches, umpires only get those right about 64%
of the time. So that's a 36% error rate. It's big. Slightly better than flipping a coin, but not much.
Not much. Yeah. Better than you and I could do though, I would say.
And how does the previous pitch influence the current pitch?
Just as a simple example, if the previous pitch was a strike, the umpire was already about a half a percent less likely to call the next pitch a strike.
Half a percent doesn't seem like that big an error.
But keep in mind, that's for the entire universe of next pitches, whether it's right down the middle or high and outside or in the dirt.
What happens when the next pitch is a borderline call?
So if you look at pitches on the corners, near the corners, that's where you get a much bigger effect.
So as an example, if I see two pitches on the corners, one that happened to be preceded by a strike call and one that didn't,
the one preceded by a strike call is about three and a half percent less likely to be called a strike. Now, if I increase that further, if the last two pitches were called strikes,
then that same pitch is about five and a half percent less likely to be called a strike.
So those are pretty big numbers.
And let me just ask you, other than finishing location of the pitch,
what other factors relating to pitch speed or spin or angle, et cetera,
did you look at and or could you control for? And is that important?
You always want to control for those things because some people might argue, well, maybe
they see it differently if it's a 98 mile an hour fastball versus a 80 mile an hour
slider or curve.
Maybe that changes the optics for the umpire.
So we try to control for all that.
And the beautiful thing about baseball is they have an enormous amount of data.
We threw in things like the horizontal spin and vertical distance of the pitch, the movement of it, the speed, the arc when it leaves the pitcher's arm to when it crosses the plate.
We also control for who the pitcher was, who the batter was, and even who the umpire was.
Since you're controlling for the individual umpires, I assume you have a list of the best and worst umpires, yes? On this dimension, yes. And it
turns out they're all pretty much about the same. So you can either view them as equally good or
equally bad. There wasn't a single umpire that didn't exhibit this kind of behavior. They all
fell prey to what we interpret as the gambler's fallacy in terms of calling pitches, which stands to reason because they're all human.
One of the biggest things you have to do when you're an umpire is be honest with yourself.
That's Hunter Wendelstedt.
Well, now I'm a Major League Baseball umpire.
I have been in the Major leagues full-time since 1999, so I've been able to travel this great
country doing something I love, and that's umpiring baseball games. Wendelstedt's father, Harry,
was also a major league umpire, an extremely well-regarded one. Harry also ran the Wendelstedt
Umpire School near Daytona Beach, Florida, which Hunter now runs during the offseason.
They start with the fundamentals.
You hold up a baseball.
Here's a baseball.
Here are the measurements of the baseball.
Here's the weight of the baseball.
Same thing with the bat.
And you go step by step.
There's a proper way for an umpire to put their mask on and take their mask off
so as to not block their vision.
Different ways to ensure that you get the best look you can.
And that's the first seven to 11 days.
If you're fortunate enough to make it as an umpire all the way to the majors,
you know you'll be subject to a great deal of scrutiny.
Because now, on any given day, at every major league stadium,
you have cameras, most of them high-definition, super slow motion,
that are critiquing every pitch and every play.
Wendelstedt is a fan of the PitchFX system that Toby Moskowitz used to analyze umpire decisions.
Once these pitch systems got into place, and it's been a great educational tool for us because you look at it
and we get a score sheet after every game we work behind the plate. And it tries to see if you
have any trends and really helps us become a better quality product for the game of baseball.
We sent Hunter Wendelstedt the Moskowitz research paper,
which argues that major league umpires succumb to the gambler's fallacy.
I was reading that. I got nervous. But that was really interesting. You know, that's just stuff I've never even thought about.
It's kind of blown my mind the last couple of days.
It's pretty neat.
But Wendelstedt wasn't quite ready
to accept the magnitude
of umpire error
the researchers found.
I think it's very interesting.
And I really look forward
to studying that some more
because, you know,
running the umpire school
and all that,
you got to keep up on the trends and the way that the perception is going out there also.
Wendelstedt did say that if an umpire makes a bad call,
whether behind the plate or in the field, you don't want to try to compensate later.
If you miss something, the worst thing you can do is try to make up a call.
People say, oh, that's a makeup call.
Well, no, it's not, because if you try and make up a call, now you've missed two. And that's something that we would never, ever want to
do. The Moskowitz research paper only analyzed data for the home plate umpire, the one calling
balls and strikes. For those of you not familiar with baseball, there are four umpires working
every game, one behind home plate
and one at each of the three bases. The umps rotate positions from game to game, so a given
ump will work the plate only every few games. Interestingly, baseball uses six umps during the
postseason, adding two more down the outfield lines, which has always struck me as either
a jobs program or a rare admission of umpiring fallibility. Because if
you need those two extra umps to get the calls right during the postseason, doesn't that imply
they ought to be there for every game? In a more overt admission of the fallibility of umpires,
baseball has increasingly been using video replays to look at close calls. In such cases,
the calls are overturned nearly half the time.
Nearly half the time. Calls by the best umpires in the world. Which might make you question
the fundamental decision-making ability of human beings generally, and whether we'd be better off
getting robots to make more of the relatively simple judgment calls in our life, like whether
a baseball
pitch is a ball or a strike.
But human nature being what it is, and most of us having an undeservedly high opinion
of ourselves as good decision makers, we probably won't be seeing wholesale automation of this
kind of decision making anytime soon.
Making decisions, after all, is a big part of what makes us human, so it's hardly surprising
we'd be reluctant to give that up.
But if the gambler's fallacy is as pronounced as Toby Moskowitz and his colleagues argue,
you might wish otherwise.
Especially if you are, say, applying for a bank loan in India.
And we got a little bit lucky here.
Lucky, meaning some other researchers had already run an experiment.
With a bank in India and a bunch of loan officers on actual loans.
And the data from that experiment allowed Moskowitz and his co-authors to look for evidence of the gambler's fallacy.
Because...
What they did was they took that data and they reassigned them to other loan officers,
which allowed for a randomization of the sequence of loan applications.
Suppose you and I looked at the same six loans.
I happened to look at them in descending order.
You happened to look at them in ascending order, let's say alphabetically, just some way to rank them.
And then the question is, did we come to different decisions just purely based on the sequencing of those loans? Now, keep in mind, these were real loan
applications that an earlier loan officer had already approved or denied. This let the researchers
measure an approval or denial in the experiment against the correct answer. Although the correct
answer in this case isn't nearly as definitive as a correct
ball or strike call in baseball. Why? Because if a real loan application had been denied,
the bank had no follow-up data to prove whether that loan actually would have failed.
But the loans that were approved, we can look at the performance of that loan later on. You could
see whether it was delinquent or didn't pay off as well.
So unlike baseball where we know for sure there's an error here, it's not quite clear.
How much did loan officers in India fall prey to the gambler's fallacy?
So you and I are looking at the same six set of loan applications.
And the sequence with which I received them, suppose I had three very positive ones in a row, then I'm much more likely to deny the fourth one, even if it was as good as the other three.
The analysis showed that the loan officers got it wrong roughly 8% of the time simply because of the sequence in which they saw the applications.
Talk for just a minute about why this kind of experiment, a field experiment, is inherently more valuable to people like you than a lab experiment.
Well, the first thing is that these are experienced professionals making real decisions, as opposed to maybe very smart undergrads, but making a decision on something they haven't had a lot of experience doing
and shouldn't be considered experts doing. The second thing is incentives.
Ah, incentives. One beauty of the original experiment was that it had the loan officers
working under one of three different incentive
schemes, which allows you to see if the gambler's fallacy can perhaps be overcome by offering a
strong enough reward. Some loan officers operated under a weak incentive scheme.
Which basically meant you just got paid for doing your job, whether you got it right or wrong,
what we would call flat incentive.
Then there was a moderate incentive scheme.
Which is, we'll pay you a little more if you get it right,
and then pay you a little bit less when you get it wrong.
And finally, some loan officers were given a strong incentive scheme.
Which was, we'll pay you a little bit more to get it right, but we'll punish you severely for getting it wrong.
Meaning, you approved it when it should have been denied,
or you denied it when it should have been approved, then it costs you money.
So how was the gambler's fallacy affected under stronger incentives?
Well, this was the most interesting part.
With the strongest incentive at play, where loan officers were significantly rewarded or punished, did they stop messing up applications simply because of the order they read them?
We found that that 8% error rate, or I should say what we ascribe to the gambler's fallacy
affecting decision-making goes down to 1%. Wow. Doesn't get eliminated completely
but pretty nicely. We then looked at what the loan officers did in order to get
that 8% down to 1%. It turns out they ended up spending a lot more time on the loan application.
If they make a quick decision, they rely on these simple heuristics of, well, I just approved three
loans in a row. I should probably deny this one. But if I'm forced to actually just use information
and think about it slowly because I really want to get it right, because I get punished if I don't,
then I don't rely on those simple heuristics as much. I force myself to gather the information and I make a better decision.
Or to put it in non-academic terminology, if you're paid a lot to not suck at something,
you'll tend to not suck.
If effort can help. That's right.
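The mechanism Moskowitz describes can be sketched in a toy model. Everything below is our illustration, not the paper's method: application quality is assumed uniform, the "correct" rule approves anything above 0.5, and the `shift` parameter is an invented stand-in for how strongly a streak of approvals raises the officer's mental bar.

```python
import random

random.seed(0)

def biased_decision(quality, streak, shift=0.035):
    # Each consecutive prior approval nudges the perceived bar upward,
    # mimicking the gambler's-fallacy urge to deny after a run of approvals.
    # (The 0.035 shift is an assumption for illustration, not an estimate.)
    return quality > 0.5 + shift * streak

TRIALS = 100_000
errors = 0
streak = 0
for _ in range(TRIALS):
    quality = random.random()          # assumed uniform "true" quality
    approved = biased_decision(quality, streak)
    if approved != (quality > 0.5):    # disagreement with the unbiased rule
        errors += 1
    streak = streak + 1 if approved else 0

print(f"error rate from sequencing alone: {errors / TRIALS:.1%}")
```

Even this mild threshold creep produces an error rate of a few percent, all of it wrong denials of good applications that happened to follow a streak, which is the flavor of effect the real experiment measured.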
Coming up next on Freakonomics Radio.
Let's hope that federal asylum judges aren't deciding 50% of their cases based on sequencing.
Also, how stock prices are affected by when a company reports earnings.
It makes today's earnings announcement seem kind of less good in comparison.
And if you like this show, why don't you give it a nice rating on whatever podcast app you use? Because your approval means everything to us.
Even if you've never watched a baseball game in your life,
even if you don't care at all whether someone in India gets a bank loan, you might care about how the United States runs its immigration courts and whether it decides to grant or deny asylum to a petitioner.
This is clearly a big decision, certainly for the applicants, right? I mean, in some cases it could mean the difference between life and death, right?
Or imprisonment and not imprisonment, if they have to go back to their country where they're fleeing for political reasons or something else.
These cases are heard in immigration courts by federal judges.
Each case is randomly assigned, which, if you're an applicant, is a hugely influential step. As Toby Moskowitz and his
co-authors write, New York at one time had three immigration judges who granted asylum in better
than eight of 10 cases and two other judges who approved fewer than one in 10. So as the
researchers compiled their data to look at whether the gambler's fallacy is a problem in federal
asylum cases, they focused on judges with more moderate approval rates.
The data went from 1985 to 2013.
So we looked only at judges that decided at least 100 cases in a given court
and only looked at courts or districts that had at least 1,000 cases.
Among that set across the country over those several decades, you're talking
about 150,000 decisions. And I think it was 357 judges making those decisions. So quite a large
sample size. The researchers controlled for a number of factors, the asylum seeker's country
of origin, the success rate of the lawyer defending them, even time of day, which, believe it or not,
can be really important in court. A 2011 paper looked at parole hearings in Israeli prisons to see how the judges' decisions were affected by extraneous factors, hunger perhaps. This study
found that judges were much more likely to grant parole early in the day, shortly after breakfast, presumably,
and again, shortly after the lunch break.
So Moskowitz and his colleagues tried to filter out all extraneous factors in order to zoom in on whether the sequencing of cases affected the judge's rulings.
Keep in mind, there's also no way to measure a correct ruling.
When a judge denies a certain case, we don't know for sure if that was the right or the wrong decision.
So I want to qualify that because what we can show is whether the sequencing of approval or denial decisions has any bearing on the likelihood that the next case is approved or denied.
And that we show pretty strongly.
So what does it look like for an asylum judge to be affected by the gambler's fallacy?
So if the cases are truly randomly ordered,
then what happened to the last case should have no bearing on this case, right?
Not over large samples.
And what we find is that's not true. If the previous
case was approved by the judge, then the next case is less likely to be approved by almost 1%.
Where it gets really interesting is, is if the previous two cases were approved,
then that drops even further to about 1.5%. And if these happen on the same day, that goes up even further,
closer to 3%. And then obviously, if it's two cases in the same day, it gets even bigger. It
starts to approach about 5%. So those are pretty big numbers, especially for the applicants
involved. Or to put it a little differently, just by the dumb luck of where you get sequenced that day could affect your
probability of staying in this country by 5% versus going back to the country that you're
fleeing. That's a remarkable number, in my opinion. And in a different arena, if I hear that a baseball
umpire might be wrong 5% of the time, I think, well, but the stakes aren't very high. But in the
case of an asylum seeker, this is a binary choice. This is not one ball or strike out of many. This is I'm either in
the country or I'm not in the country. And so what did that suggest to you about the level of the
severity that the gambler's fallacy can wreak, I guess, on different important decisions, whether it's for an individual or,
I guess I'm thinking at a governmental level, I've refused to declare war on a given dictator
three times in the last five years. But the fourth time gets harder, I guess. Yeah.
Right. No, I think that's right. And you can imagine the poor family that happens to follow
two positive cases, even if their case is just
as viable, their chances of getting asylum go down by 5%. That doesn't sound like much,
but compare that to what it would be if the reverse had been true. If the two cases preceding
them were poor cases and were denied, then their chances of being approved go up by 5%.
That becomes a 10% difference just based on who happened to be in front of you that day, total random occurrence.
So you wouldn't expect the magnitudes to be huge.
Let's hope that federal asylum judges aren't deciding 50% of their cases based on sequencing.
So the lesson, if I'm seeking asylum or any other ruling, what I really want to do is bribe someone to let me get to the judge right after he or she has rejected the previous few applicants.
Right. I mean, what other than that?
It would be worth it.
Well, plainly, it would be really, really, really worth it unless you get caught bribing and then obviously get rejected for asylum because of just that.
So you're telling us the data from the decision makers side.
What about the seekers side? Is there anything that can be done to offset this bias?
I'm not sure there's much you can do. You're at the mercy of the courts. I suppose if you have a particularly good lawyer, maybe there's a way to lobby. I mean, I'm told the cases are randomized. I assume that's true. But who knows? Like you said, I'm not, you know, maybe bribes is a bit extreme, but maybe there's a way.
Well, feigning illness, at least.
Yes.
Right.
Exactly.
So baseball umpires, loan officers, and asylum judges all occasionally make poor
decisions based on nothing more substantial than the order they face the decisions. But
what if these researchers are just wrong? What if there are other explanations?
No, that's a fair question. There are certainly other possible things to consider, and we try to rule them out.
The first thing, the most obvious thing would be that the quality or merits of cases has that similar pattern.
That seems hard to believe.
We believe the randomization of cases, certainly in the loan officer experiment where we know it's randomized because we did it and these other economists randomized it themselves, we know we can rule that out. So I don't think that's an issue,
but maybe just the quality of cases has this sort of alternating order to it. And these guys are
actually making the right decision. We don't think that's true. And in baseball, we can actually
prove it by showing that they're getting the wrong call. It's also interesting, to me at least, that what the Moskowitz research is pushing against
is an instinct that a lot of people are trying to develop, which is pattern spotting.
More and more, especially when we're dealing with lots of data, we look perhaps harder
than we should for streaks or anomalies that aren't real.
We may look for bias that isn't necessarily bias. Our umpire friend
Hunter Wendelstedt brought this up when we asked whether, as most baseball fans believe,
umpires treat certain pitchers with undue respect. Well, you know, here's the thing about it. You
take Clayton Kershaw. The umpire is going to call more strikes when Clayton Kershaw is out there.
Why? Is it because we like him better? No. It's because he throws more strikes.
Because he's a better pitcher than a rookie
that's getting the call-up from the New Orleans Zephyrs.
It's one of those things.
Greg Maddux and John Smoltz are in the Hall of Fame for a reason.
Toby Moskowitz points to one more barrier
to unbiased decision-making
related to the gambler's fallacy,
but slightly different.
It's another bias known as
sequential contrast effects. That sounds like a very technical term, but it's a pretty simple
idea. The idea is if I read a great book last week, then the next book I read, even if it's
very, very good, I might be a little disappointed because my reference for what a really good book
is just went up. And you could see how that phenomenon would really be important in, let's say, job
applicants or any kind of applicant, yeah?
Correct.
We see this all the time that the sequence of candidates that come through for a job
I think matters, both from the gambler's fallacy as well as from sequential contrast effects.
So I, along with a couple of other researchers, was interested in this idea of sequential decision errors.
That's Kelly Shue.
I'm an associate professor of finance at University of Chicago,
the Booth School of Business.
She's also one of Toby Moskowitz's co-authors on the Gambler's Fallacy paper,
and she's a co-author on another paper called
A Tough Act to Follow, Contrast Effects in Financial Markets.
And I was talking to some asset managers in New York, and they said that when they consider earnings announcements by firms, their perception of how good the current earnings announcement was is very much skewed by what they've recently seen.
So Shue and her colleagues collected data on firms' quarterly earnings announcements from 1984 to 2013
to see how the markets responded.
We look at how that firm's share price moves on the day of the earnings announcement
and in a short time window before and after that announcement.
And what did they find?
So what we find is that if yesterday an unrelated large firm announced very good earnings,
it makes today's earnings announcement seem kind of less good in comparison.
And on the other hand, suppose yesterday's earnings announcement was pretty disappointing,
then today's news, all else equal, looks more impressive.
Before you go thinking that stock market investors are particularly shallow,
Shue notes that contrast effects like these have been widely observed in lab experiments.
So what they've shown is that subjects will judge crimes to be less egregious if they've
recently been exposed to narratives of more egregious crimes. College students will rate pictures of their female classmates to be less attractive
if they've recently been exposed to videos of more attractive actresses.
So something fairly similar is happening in the context of earnings.
In this research, as well as the gambler's fallacy research,
the timing of the
consecutive decisions really matters. Toby Moskowitz again. Meaning if the decisions that you're making
occur very close in time, then you tend to fall prey to the sequencing effect. So take the judge's
example, for instance. We find that if cases are approved on the same day, then the likelihood that the next case that same day is approved goes way down.
If those cases were one day removed, the effect gets a lot weaker.
Or, in fact, if there's a weekend in between the decisions, then it's almost nonexistent.
So if the judge approved a bunch of cases on Friday, that really doesn't have much bearing on what happens Monday. Moskowitz has tried to apply this insight to his own decision-making when it comes to
grading students' papers.
If I see a sequence of good exams that may affect the poor students who happen to be
later in the queue in my pile, but one of the things I try to do, mostly just because
I don't want my head to explode, is I take frequent breaks between grading these papers. And I think that breaks that sequencing. My mind sort of forgets about
what I did in the past because I've done something else in between.
What do you do during your breaks?
Go for a walk, check email, get some coffee, maybe work on something else. Or, you know,
my students don't want to hear this, but occasionally I'll grade an exam in front of a baseball game and, you know, I'll stop and watch a couple of innings.
Obviously, every realm is different. A loan officer is different from a baseball umpire,
is different from an asylum judge, is different from a professor grading papers and so on. But
what they all would seem to have in common is a standard, a standard of, you know,
competence or excellence or whatnot. And so is there any way for all of us to try to avoid the bias of the gambler's fallacy,
to try to, I guess, connect more with an absolute measure rather than a relative measure?
Well, that's a very good question. I think it does depend on the field. Obviously,
if you think about asylum judges, the absolute measure, you know, sort of your overall approval or denial rate might be good from a judge's perspective.
But it's certainly not great from the applicant's perspective if you make a lot of errors on the side, right?
The errors may balance out, but to those applicants, there's huge consequences.
Now that Moskowitz has seen empirical proof of the gambler's fallacy, he sees it just about everywhere he looks.
My wife, who's a physician, claims that she thinks that happens.
I would also argue test taking.
My son, who's actually studying a little bit for the SSATs, you know, he'll say things like, well, you know what, I'm not sure what the answer to number four was, but the last two answers were A.
So it can't be A, right?
And you just, you sort of caution, that may not be right.
It sort of depends on whether the test makers have any biases either.
Well, then it becomes game theory, which becomes harder and more fun.
That's right.
That would actually be a more interesting test, wouldn't it?
If the students just figured that out, you'd let them in.
Moskowitz plays tennis, where there's plenty of opportunity for a rethink
on the sequencing of shots.
If you're serving, for instance,
one of the best strategies
is a randomized strategy, like a
pitcher should do in baseball.
And I'm not very good at being
random, just like most humans. I'll say to myself,
well, I hit the last couple down the middle.
Maybe I should go out wide
on this one. But that's not really random. What I should do is what some of the best pitchers in baseball do.
I think the rumor has it Greg Maddux used to do this, which is recognizing that he's not very
good at being random. He would use a cue in the stadium that was totally random. For instance,
are we in an even or an odd inning? And is the time on the clock even or odd, some other cue that would
just give him a sense of, well, I'll throw a fastball if there's two, you know, if the clock
ends on an even number and the inning's even, I'll throw a slider. If it's an odd, I should say to
myself, you know, if the score is even or odd, or if, whatever, if I count, you know, five blades
of grass on the court as opposed to three, something that's totally random that has nothing to do with it allows me to apply that random strategy, which my brain is not very good at doing.
Most people's brains aren't.
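The contrast between self-generated "randomness" and an external cue can be sketched like this. This is our illustration, not anything from the episode: the switch-after-two-in-a-row rule is an assumed caricature of human serving, and keying serves to the clock minute is a stand-in for the Maddux-style external cue.

```python
import random

random.seed(1)

def human_like_choice(history):
    # Assumed caricature of a "not very good at random" server: after
    # two identical serves in a row, always switch; otherwise pick randomly.
    if len(history) >= 2 and history[-1] == history[-2]:
        return "wide" if history[-1] == "middle" else "middle"
    return random.choice(["middle", "wide"])

def cue_based_choice(minute):
    # Maddux-style strategy: key the decision to an external cue the
    # opponent can't predict -- here, whether the current minute is even.
    return "middle" if minute % 2 == 0 else "wide"

TRIALS = 10_000
history = []
repeats = 0
for _ in range(TRIALS):
    c = human_like_choice(history)
    if history and c == history[-1]:
        repeats += 1
    history.append(c)

# The forced-switch rule leaves a detectable fingerprint: the repeat rate
# is about 0.33, versus the 0.50 a truly random server would show.
print(f"human-like repeat rate: {repeats / (TRIALS - 1):.2f}")
```

An opponent can exploit the first server's anti-streak pattern; the cue-based server's serves inherit whatever unpredictability the cue has, which is the whole point of outsourcing the randomness.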
It's an interesting paradox that it takes a pretty smart person to recognize how not smart we are at doing something as seemingly simple as being random, because it wouldn't seem to be so difficult, right? I would say that's fairly true in general,
that the smartest people I know are so smart because they know all the things they don't know
and aren't very good at. And that's a very tough thing to do. Interesting. The smartest people
know all the things they aren't very good at. Me? I've never been very good at learning
just when to end a podcast episode.
I'm going to start working on that right now.
Coming up next week on Freakonomics Radio,
roughly 15 million Americans
will eat their Thanksgiving meal in a restaurant.
No cooking, no cleanup,
and increasingly, no tipping?
We just knew we had to go cold turkey on this whole tipping thing.
Why tipping is a ridiculous way of doing business and what one man is doing to change it.
That's next time on Freakonomics Radio.
Freakonomics Radio is produced by WNYC Studios and Dubner Productions.
This episode was produced by Harry Huggins.
Our staff also includes Shelley Lewis, Christopher Wirth, Jay Cowett, Merritt Jacob, Greg Rosalski, Noah Kernis, Allison Hockenberry, Emma Morgenstern, and Brian Gutierrez.
You can subscribe to this podcast on iTunes or wherever you get your podcasts. And come visit Freakonomics.com,
where you'll find our entire podcast archive, as well as transcripts of all our episodes,
if reading is your thing. Thanks for listening.