Freakonomics Radio - The Folly of Prediction (Rebroadcast)
Episode Date: August 22, 2013
Human beings love to predict the future, but we're quite terrible at it. So how about punishing all those bad predictions? ...
Transcript
We have found the witch, may we burn her?
Burn her!
What does it mean to be a witch exactly in Romania?
Are these people that we know here as psychics or fortune tellers or are they different somehow?
I don't know how fortune tellers are in the United States,
but here generally they are women of different ages.
They say they can cure some diseases, they can bring back your husband
or your wife, or they can predict your future.
Who is a typical client for a witch?
There are quite a lot of politicians who are going to witches.
You know that France's president Nicolas Sarkozy, he went to witches some years ago,
and our president in Romania and very important politicians from different parties, they are going
to witches. Some of them were obliged to admit they went to witches. For some of them it's
off-the-record information, but me being a journalist, I know that information.
Vlad Mixich is a reporter in Bucharest, the capital of Romania.
He knows a good bit about the witches there.
Quite a lot of them, they are quite rich. They have very big houses with golden rooftops. And a lot of the Romanians, they are living in small apartments
in blocks. So just going in such a building will give you a sense of majesty and respect.
But the Romanian witch industry has been under attack.
First came a proposed law to regulate and tax the witches.
It passed in one chamber of parliament before stalling out.
But then came another proposal arguing that witches should be penalized if the predictions they make don't turn out to be true.
Tell me, what do you do with witches?
So if you are one of my clients and I'm a fortune teller,
if I fail to predict your future, I would pay quite a substantial fine to the state.
Or if this happens many times, I will even go to jail.
The punishment is between six months and three years in jail.
What's being proposed in Romania is revolutionary. It strikes me because
we typically don't hold anybody accountable for bad predictions. So I'm wondering,
in Romania, let's say if a politician makes a bad prediction, do they get fined or penalized in any way?
No, not at all.
In fact, this is one of the hobbies of our president.
He's doing a lot of predictions which are not coming true, of course.
And after that, he is reelected or his popularity is rising like the sun in the morning, you know?
No, anyone can do a lot of predictions here in Eastern Europe
and not a single hair will move from his or her head.
Come on, people, that doesn't seem fair, does it?
I don't care if you're anti-witch or pro-witch or witch agnostic.
Why should witches be the only people held accountable for bad predictions?
What about politicians and money managers, sports pundits?
And what about you? From WNYC, this is Freakonomics Radio.
Today, the folly of prediction.
Here's your host, Stephen Dubner.
All of us are constantly predicting the future, whether we think about it or not.
Right now, some small part of your brain is trying to predict what this show is going to be about.
Now, how do you do that? You factor in what you've heard so far, what you know about Freakonomics,
maybe you know a lot, maybe you've never heard of it. You might think it's some kind of communicable disease. When you predict the future, you look
for cognitive cues, for data, for guidance. Here's where I go for guidance. I think to an economist,
the best explanation for why there are so many predictions is that the incentives are set up
in order to encourage predictions.
That's Steve Levitt. He's my Freakonomics friend and co-author,
an economist at the University of Chicago.
So most predictions we remember are ones which were fabulously, wildly unexpected,
and then came true. Now, the person who makes that prediction has a strong incentive to remind
everyone that they made that crazy prediction which came true. You look at all the people,
the economists who talked about the financial crisis ahead of time. Those guys harp on it
constantly. I was right, I was right, I was right. But if you're wrong, there's no person
on the other side of the transaction who draws any real benefit from
embarrassing you by bringing up the bad prediction over and over. So there's nobody who has a strong
incentive usually to go back and say, here's the list of the 118 predictions that were false.
And I remember growing up, my mother, who's somewhat of a psychic, would predict a stock...
Wait, somewhat of a psychic?
She's a self-proclaimed psychic.
And she would predict a stock market crash every single year.
Right.
And she's been right a couple times.
And she has been.
She's been right twice in the last 15 years.
And she would talk a lot about the times she was right.
I'd have to remind her about the 13 times that she was wrong.
And without any sort of market mechanism or incentive for keeping prediction
makers honest, there's lots of incentive to go out and to make these wild predictions.
And those are the ones that are remembered and talked about. Think about one of the predictions
that you hear echoed more often than almost any other: Joe Namath's famous pronouncement about how the
Jets were going to win the Super Bowl.
And it was unexpected, and it happened.
Now, if the Jets had lost the Super Bowl, nobody would remember that Joe Namath made that pronouncement.
And conversely, you can probably find at least one player on every team that's lost the Super Bowl in the last 40 years
who did predict that his team would win.
That's probably right. Exactly right.
Now, the flip side, which is perhaps surprising, is that in many cases,
the goal of prediction is to be completely within the pack. And so, I see this a lot with pension
fund managers or endowment managers, which is if something goes wrong, then as long as everybody
else made the same prediction, you can't be faulted very much.
Pension managers, football players, psychic moms, Romanian witches.
Who doesn't try to predict the future these days?
Go ahead and make your predictions.
Start with the Masters.
I'm going to lean towards Phil.
Could I like this new putting stroke?
The Old Farmer's Almanac predicts a colder than usual winter. I predict.
Bear Stearns is fine. Do not take your money.
If there's one tech going, Bear Stearns is not in trouble.
I predict all my sources go big.
Fannie and Freddie are fundamentally sound. They are not in danger of going under.
I predict.
Paul the Octopus is making yet another World Cup prediction, betting on Spain.
We will in fact find evidence of weapons programs.
Gaddafi will ultimately step down.
And you know the worst thing? There's almost nobody keeping track of all those predictions. Nobody, except for this guy.
Well, I'm a research psychologist who...
Don't forget your name, though.
I'm Phil Tetlock, and I'm a research psychologist. I've spent most of my career at the University of California, Berkeley, and I recently moved to the University of Pennsylvania, where I'm cross-appointed in the Wharton School and the psychology department.
Philip Tetlock has done a lot of research on cognition and decision-making and bias, pretty standard stuff
for an Ivy League psych PhD. But what really fascinates him is prediction. There are a lot
of psychologists who believe that there is a hardwired human need to believe that we live in a
fundamentally predictable and controllable universe. There's also a widespread belief among
psychologists that people try hard to impose causal order on the world around them, even when those phenomena are random.
This hardwired human need, as Tetlock puts it, has created what he calls a prediction industry.
Now, don't sneer. You're part of it, too.
I think there are many players in what you might call the prediction
industry. In some sense, we're all players in it. Whenever we go to a cocktail party or a colloquium
or whatever, where opinions are being shared, we frequently make likelihood judgments about
possible futures and the truth or falsity of particular claims about futures. The prediction
business is a big business on Wall Street and we have futures markets and so forth designed to
regulate speculation in those areas. Obviously, governments have great interest in prediction.
They create large intelligence agency bureaucracies and systems to help them
achieve some degree of predictability in a seemingly chaotic world.
Let me read you something that you have said or written in the past, that this determination to
ferret out order from chaos has served our species well. We are all beneficiaries of our great collective
successes in the pursuit of deterministic regularities in messy phenomena: agriculture,
antibiotics, and countless other inventions. So talk to me for a moment about the value of
prediction. Obviously, much has been gained, much to be gained. Do we overvalue
prediction though, perhaps? I think there's an asymmetry of supply and demand. I think there
is an enormous demand for accurate predictions in many spheres of life in which we don't have
the requisite expertise to deliver. And when you have that kind of gap between demand and real supply,
you get the infusion of fake supply.
Fake supply. I like this guy, this Philip Tetlock. He's not an economist, but he knows
the laws of supply and demand can't just be revoked. So if there's big demand for prediction
in all realms of life and not enough real supply to satisfy it, what does this fake supply sound like?
One could say it could.
There could be terrorist threats.
The economy could remain the driving factor.
Could, could.
Could, could.
Some of that radioactivity could carry in the atmosphere to the west coast of the United States.
There could be an interesting battle on the cards when the Oscar nominations are announced.
Gold could be peeking its head above $1,600.
It could certainly get to $2,000.
Gold could go to $3,500 an ounce.
There's a punditocracy out there,
a class of people who predict ad nauseum,
often on television.
They can be pretty good at making their predictions
tough to audit.
It's the art of appearing to go out on a limb
without actually going out on a limb. So for example, the word could, something could happen.
So the room you happen to be sitting in could be struck by a meteor in the next 23 seconds.
That makes perfect sense. But the probability is, of course, 0.0000, et cetera, one. It's not zero,
but it's extremely low. In fact, the word could,
the possible meanings people attach to it range from about 0.01 to about 0.6,
which covers more than half of the probability scale right there.
Look, nobody likes a weasel. So more than 20 years ago, Tetlock set out to conduct one of
the largest empirical studies ever of predictions.
He chose to focus on predictions about political developments around the world.
He enlisted some of the world's foremost experts, the kind of very smart people who have written definitive books, who show up on CNN or on the Times' op-ed page.
In the end, we had close to 300 participants, and they were very sophisticated political observers.
Virtually all of them had some postgraduate education.
Roughly two-thirds of them had PhDs.
They were largely political scientists, but there were some economists and a variety of other professionals as well.
And they all participated in your study anonymously, correct?
That was a very important condition for obtaining cooperation.
Now, if they were not anonymous, presumably we would recognize some of their names. These are prominent people at political science departments and economics departments at,
I'm guessing, some of the better universities around the world. Is that right?
Well, I don't want to say too much more, but I think you would recognize some of them, yes.
I think some of them had substantial Google counts.
The study became the basis of a book Tetlock published a few years ago called Expert Political Judgment.
There were two major rounds of data collection, the first beginning in 1988, the other in 1992.
These nearly 300 experts were asked to make predictions about dozens of countries around the world.
The questions were multiple choice. For instance, in democracy X, let's say it's England, should we expect that after the
next election, the current majority party will retain, lose, or strengthen its status? Or for
undemocratic country Y, Egypt maybe, should we expect the basic character of the political regime
to change in the next five years, in the next ten years?
And if so, in what direction and to what effect?
The experts made predictions within their areas of expertise and outside.
And they were asked to rate their confidence level for their predictions.
So after tracking the accuracy of about 80,000 predictions by some 300 experts over the course of 20 years,
Philip Tetlock found that experts thought they knew more than they knew,
that there was a systematic gap between the subjective probabilities that experts were assigning to possible futures
and the objective likelihoods of those futures materializing.
Let me translate that for you.
The experts were pretty awful.
Now, you may think, awful compared to what? Did they beat a monkey with a dartboard?
Oh, the monkey with the dartboard comparison. That comes back to haunt me all the time.
With respect to how they did relative to, say, a baseline group of Berkeley undergraduates making predictions, they did somewhat better than that.
Did they do better than an extrapolation algorithm? No, they did not. They did, for the most part, a little bit worse than that.
How did they do relative to a purely random guessing strategy? Well, they did a little bit better than that, but not as much as you might hope.
That extrapolation algorithm that Tetlock mentioned, that's simply a computer programmed to predict no change in the current situation.
So it turned out that these smart, experienced, confident experts predicted the political future about as well, if not slightly worse, than the average daily reader of the New York Times.
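For readers who want to see what that baseline amounts to, here is a minimal sketch in Python. Nothing here comes from Tetlock's actual study; the outcomes and the expert's calls are invented purely to illustrate how a "predict no change" rule gets scored against an expert.

```python
# A minimal sketch (not Tetlock's code) of the "extrapolation algorithm" he
# describes: a baseline that always predicts "no change from the status quo."
# The data below are invented: 1 = the status quo held, 0 = it changed.
outcomes = [1, 1, 0, 1, 1, 1, 0, 1]          # what actually happened
expert_forecasts = [1, 0, 1, 1, 0, 1, 1, 1]  # one expert's calls (made up)

def accuracy(predictions, actual):
    """Fraction of predictions that matched the outcome."""
    hits = sum(p == a for p, a in zip(predictions, actual))
    return hits / len(actual)

baseline_forecasts = [1] * len(outcomes)     # always predict "no change"

print("no-change baseline accuracy:", accuracy(baseline_forecasts, outcomes))
print("expert accuracy:            ", accuracy(expert_forecasts, outcomes))
# In Tetlock's data, the experts tended to land a little below this dumb baseline.
```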
I think the most important takeaway would be
that the experts think they know more than they do.
They were systematically overconfident.
Some experts were really massively overconfident.
And we were able to identify those experts
based on some of their characteristics
of their belief system and their cognitive style,
their thinking style.
Okay, so now we're getting into the nitty-gritty of what makes people predict well or predict poorly.
What are the characteristics, then, of a poor predictor?
Dogmatism.
It can be summed up that easily.
I think so. I think an unwillingness to change one's mind in a reasonably timely way in response to new evidence,
a tendency when asked to explain one's predictions to generate only reasons that favor your preferred prediction
and not to generate reasons opposed to it.
And I guess what's striking to me, and I'd love to hear what you have to say about this,
is that it's easy to apply one word, prediction, to many, many, many different realms in life.
But those realms all operate very differently.
So politics is different from economics and predicting a sports outcome is different than predicting an agricultural outcome.
But it seems that we don't distinguish so much necessarily and that there's this modern sense almost that anything can be and should be
able to be predicted. Am I kind of right on that or no? I think there's a great deal of truth to
that. I think it is very useful in talking about the predictability of the modern world to
distinguish those aspects of the world that show a great deal of linear regularity, and those parts
of the world that seem to be driven by complex systems that
are decidedly nonlinear and decidedly difficult, if not impossible, to predict.
Talk to me about a few realms that generally are very, very hard to predict and a few realms that
generally are much easier.
Predicting Scandinavian politics is a lot easier than predicting Middle Eastern politics.
Yes, that was the first one that came to my mind, too.
All right, but keep going.
The thing about the radically unpredictable environments is that they often appear for
very long periods of time to be predictable. So, for example, if you had been a political
forecaster predicting regime longevity in the Middle East, you would have done extremely well
predicting, you know, in Egypt, that Mubarak would continue to be president of Egypt year after year
after year, in much the same way that if you'd been a Sovietologist, you'd have done very well
in the Brezhnev era, predicting continuity. There's an aphorism I quote in the Expert
Political Judgment book from Karl Marx. I'm obviously not a Marxist, but I thought that's a beautiful aphorism that he had,
which was that when the train of history hits a curve, the intellectuals fall off.
Coming up, who do you predict we'll hear from next?
A bunch of people who are awesomely good at predicting the future?
Yeah, right. Maybe later.
First, we'll hear some more duds from Wall Street, the NFL, and the cornfield.
Sometimes I can see the future.
Our own future.
The future of healthcare.
Corn futures.
The choice between two economic futures.
From WNYC, this is Freakonomics Radio.
Welcome to the future.
Here's your host, Stephen Dubner.
Into the future.
So Philip Tetlock has sized up the people who predict the future, geopolitical change for instance,
and determined that they're not very good at predicting the future.
He also tells us that their greatest flaw is dogmatism, sticking to their ideologies,
even when presented with evidence that they're wrong.
You buy that? I buy it. Politics is full of ideology. Why shouldn't the people who study
politics be at least a little bit ideological? So let's try a different set of people,
people who make predictions that, theoretically at least, have nothing to do with ideology.
Let's go to Wall Street.
The experts on Wall Street are falling all over themselves to predict the year ahead.
Rising inflation and rates will send the U.S. in a downward spiral.
Higher interest rates are going to hurt the housing market.
They're going to make money over time. I would bet on it.
Amazon.
Cisco.
Pfizer.
Healthcare mutual funds.
General Motors.
Even good old IBM.
Don't hit the sell button just yet.
I'm Christina Fang, a professor of management at New York University's business school.
Christina Fang, like Philip Tetlock, is fascinated with prediction.
Well, I guess generally forecasting about anything, about technology, about product, whether it will be successful, about whether an idea, a venture idea could take off. A lot of things, not just economic,
but also business in general. Fang wasn't interested in just your street-level predictions,
though. She wanted to know about the big dogs, the people who make bold economic predictions
that carry price tags in the many millions or even billions of dollars.
Along with a fellow researcher, Jerker Denrell,
Fang gathered data from the Wall Street Journal's Survey of Economic Forecasts.
Every six months, the paper asked about 50 top economists to predict a set of macroeconomic
numbers, unemployment, inflation, gross national product, things like that.
Fang audited seven consecutive surveys with an eye toward a particular question.
When someone correctly predicts an extreme event, a market crash maybe, or a sudden spike in inflation,
what does that say about his overall forecasting ability?
In the Wall Street Journal survey, if you look at the extreme outcomes,
either extremely bad outcomes or extremely good outcomes, you see that those people who correctly predicted either extremely good or extremely bad outcomes, they're likely to have an overall lower level of accuracy. In other words, they're doing poorer in general.
Uh-oh. You catching this? Those people who happen to predict accurately the extreme events, they happen to also have a lower overall level of accuracy.
So I can be right on the big one.
Yes.
But if I'm right on the big one, I generally will tend to be more often wrong than the average person.
On average.
On average.
Across everyday predictions as well. And our research suggests that for someone who has successfully predicted those events,
we are going to predict that they are not likely to repeat their success very often.
In other words, their overall capability is likely to be, in general, worse than average at predicting the economic future.
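To see why hitting the big one can coexist with below-average accuracy, here is a minimal simulation sketch in Python. It is not Fang and Denrell's model; the forecaster styles and all the numbers are invented to illustrate the selection effect they describe.

```python
# A toy simulation of the finding: forecasters who make extreme calls are the
# ones who can "hit the big one," but on average they are less accurate.
import random
random.seed(7)

def simulate(style, periods=40):
    """Return (hit_extreme, mean_abs_error) for one simulated forecaster."""
    errors, hit = [], False
    for _ in range(periods):
        truth = random.gauss(0, 1)                 # what actually happens
        if style == "bold":
            forecast = random.choice([-2.5, 2.5])  # always predicts an extreme
        else:
            forecast = random.gauss(0, 0.3)        # hugs the consensus
        errors.append(abs(forecast - truth))
        if abs(truth) > 2 and abs(forecast) > 2 and truth * forecast > 0:
            hit = True                             # correctly called an extreme event
    return hit, sum(errors) / len(errors)

results = [simulate(style) for style in ["bold"] * 500 + ["moderate"] * 500]
hitters = [err for hit, err in results if hit]
others  = [err for hit, err in results if not hit]
print("avg error, called an extreme:", sum(hitters) / len(hitters))
print("avg error, everyone else:    ", sum(others) / len(others))
# The extreme-callers show the larger average error, echoing Fang's finding.
```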
Now, why is this a problem?
Maybe they're just like home run hitters.
You know, a lot of strikeouts, but a lot of power, too.
All right.
I'll tell you why it's a problem.
Actually, I'll have Steve Levitt tell you.
The incentives for prediction makers are to make either cataclysmic or utopian predictions, right?
Because you don't get attention.
If I say that what's going to happen tomorrow is exactly the same as what happened today.
You don't get on TV.
I don't get on TV.
If it happens to come true, who cares?
I don't get any credit for it coming true either.
There's a strong incentive to make extreme predictions. Because seriously, who tunes in to
hear some guy say that next year will be pretty much like last year? And then once you have been
right on an extreme forecast, let's say you predicted the 2008 market crash and the Great
Recession, even if you're predicting every year like Steve Levitt's mother, you'll still be known
as the guy who called the big one.
And even if all your follow-up predictions are wrong, you still got the big one right.
Like Joe Namath.
The third annual Super Bowl game between the Baltimore Colts and the New York Jets.
Baltimore has far too many weapons.
Well, I like the Baltimore Colts.
I've been so impressed by the Colts.
I said, well, wait a minute.
We're going to win the game.
I guarantee it.
Well, the Jets have done it.
Beating the Baltimore Colts 16-7.
All right, look.
Predicting the economy, predicting the political future, those are hard.
Those are big, complex systems with lots of moving parts.
So how about football? If you're an NFL expert,
how hard can it be to forecast, say, who the best football teams will be in a given year?
We asked Freakonomics researcher Hayes Davenport to run the numbers for us.
Well, I looked at the past three years of expert picking from the major NFL prediction outlets, which are USA Today,
SportsIllustrated.com, and ESPN.com. We looked at 105 sets of picks total. They're picking the
division winner for each year, as well as the wildcard for that year. So they're basically
picking the whole playoff picture for that year. So talk about just kind of generally the degree
of difficulty of making this kind of a pick. Well, if you're sort of an untrained animal making NFL picks,
you're going to have about a 25% chance of picking each division correctly because there are only
four teams. All right. So, Hayes, you're saying that an untrained animal would be about 25%
accurate if you just pick one out of four. But what about a trained animal like me, a casual fan? How do I do compared to
the experts? Right. So if you're cutting off the worst team in each division, if you're not picking
from among those, you'd be right about 33 percent of the time, one in three. And the experts are
right about 36 percent of the time. So just a little better than that. OK, so if you're saying
they'll pick at about 36 percent accuracy and I, or someone by chance
would pick at about 33% accuracy. So that's a three percentage point improvement or about 10%
better. Maybe we should say, well, you know, that's not bad. If you beat the stock market by
10% every year, you'd be doing great. So should we think of these NFL pundits as picking 36 percent right being really wonderful or?
I wouldn't say that, because there's a specific fallacy that these guys are operating from, which is they tend to rely much too heavily on the previous year's standings in making their picks for the following year.
They play it very conservatively, but there's a very high level of parity in the NFL right now, so that's not exactly how it works.
Tell me some of the pundits who, whether by luck or brilliance and hard work, turn out to be really, really good.
Sure. There are two guys from ESPN who are sort of far ahead of the field. One is Pat Yasinskas,
and the other is John Clayton, who is actually pretty well-known.
He makes a lot of appearances on SportsCenter.
He's kind of a nebbishy, professorial type.
And they perform much better than everyone else because they're excellent wildcard pickers.
They are the only people who have correctly predicted both wildcard teams in a conference
in a season.
But they're especially good because they actually play it much safer
than everyone else.
Now you say that they're very good.
Persuade me that they're good and not lucky.
I can't do that.
There's a luck factor involved in all these predictions.
For example, if you pick the Patriots in 2008 and Tom Brady gets injured and they drop out
of the playoffs, there's very little you can do to predict that.
So injuries will mess with predictions all the time and other like turnover rates in
football that are sort of unpredictable.
So there's a luck factor to all of this.
Come on and be my little good luck charm.
So whether it's football experts calling Sunday's game or economists forecasting the economy or political pundits looking for the next revolution, we're talking about accuracy rates that barely beat a coin toss.
So maybe all these guys deserve a break.
Maybe it's just inherently hard to predict the future of other human beings.
They're so malleable, so unpredictable. So how about
a prediction where human beings are incidental to the main action?
I'm Joe Prusaki, and I am director of the statistics division with USDA's National Agricultural
Statistics Service, or NASS for short. You grew up on a farm, yeah?
Yep, I grew up in, I always call it deep southern
Illinois. I'm sitting here in Washington, D.C., and where I grew up in Illinois is further south
than where I'm sitting today. We raised, we had corn, soybeans, and raised hogs.
You've heard of Anna Wintour, right? The fabled editor of Vogue magazine? Joe Prusaki is kind of like Anna Wintour for farmers.
He puts out publications that are read by everyone who's anyone in the industry.
Titles like acreage and prospective plantings and crop production.
Prusaki's reports carry running forecasts of crop yields for cotton, soybeans, wheat, and corn.
Most of the time, our monthly forecasts are probably within, I can guarantee you, within 5%, and most of the time I can say within 2% to 3% of the final.
And someone would say, well, that seems very good.
But in the agricultural world, the data users expect us to be much more precise in our forecasts.
So how does this work?
How does the USDA forecast something as vast as the agricultural output of American farmers?
Like at the beginning of March, we will conduct a large survey of farmers and ranchers across the United States.
The sample size this time, this year, was about 85,000.
The farmers are asked how many acres they plan to devote to each crop, corn, let's say.
Then, in late July, the USDA sends out a small army of enumerators into roughly 1,900 cornfields in 10 states.
These guys mark off plots of corn, 20 feet long by two rows across.
They're randomly placed.
We have randomly selected fields and random location within the field.
So you may get a sample that's maybe 20 paces into the field and 40 rows over,
and you may get one that's 250 paces into the field and 100 rows over.
The enumerators look at every plant in that plot.
And then they'll count what they see or anticipate to be ears
based on looking at the plant.
A month later, they go back out again and check the corn stalks, check the ears.
Well, you could have animal loss.
An animal might chew the plant off.
The plant may die.
So all along, we're updating the number of plants.
All along, we're updating the number of ears.
The other thing we need, you need an estimate of ear weight or fruit weight.
So they go out again, cut off a bunch of ears, and weigh them.
But wait, still not done.
After the harvest, there's one more round of measurement.
Once the field is harvested, the machine has gone through the field,
the enumerator will go back out to the field.
They'll lay out another plot just beyond our harvest area where we were.
And they will go through and pick up off the ground
any kernels that are left on the ground,
pieces of ears of corn and such on the ground
so we get a measure of harvest loss.
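Rolled together, those field measurements feed a yield calculation that, in spirit, looks something like the sketch below. The numbers are invented and the USDA's actual estimation models are far more elaborate; only the 56-pounds-per-bushel standard for shelled corn is a real constant.

```python
# A rough sketch of the yield arithmetic the enumerators are feeding
# (simplified, with invented numbers; not the agency's actual model).
plants_per_acre = 30000          # from counting plants in the sample plots
ears_per_plant = 0.95            # from counting ears, updated as plants are lost
grain_weight_per_ear_lb = 0.33   # from cutting and weighing sample ears
harvest_loss_share = 0.02        # from kernels found on the ground after harvest

POUNDS_PER_BUSHEL = 56           # standard weight for a bushel of shelled corn

gross_lb_per_acre = plants_per_acre * ears_per_plant * grain_weight_per_ear_lb
net_lb_per_acre = gross_lb_per_acre * (1 - harvest_loss_share)
print("forecast yield: %.0f bushels per acre" % (net_lb_per_acre / POUNDS_PER_BUSHEL))
```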
So this sounds pretty straightforward, right?
Compared to predicting something like the political or economic future,
estimating corn yield based on constant physical measurements of corn plants is pretty simple.
Except for one thing.
It's called the weather.
Officials declare a drought watch for the entire state.
Weather remains so hard to predict in the long term that the USDA doesn't even use forecasts.
It uses historic averages instead.
So, Joe, talk to me about what happened last year with the USDA corn forecast.
You must have known this was coming from me.
So the Wall Street Journal's headline was USDA flubs in predicting corn crops.
Explain what happened.
Well, this is the weather factor that came into play.
It turned off pretty hot and pretty dry.
And I had asked a few folks that are out and about in Iowa, what happened?
They said, this is just a really strange year.
We just don't know.
Now, if someone says, did we flub it?
Well, I don't know.
I mean, it was the forecast based on the information I had as of August 1.
September 1, I had a different set of information.
October 1, I had a different set of information. Could we have done a better job? A lot of people thought they could have. Last June,
the USDA lowered its estimate of corn stockpiles, and in October, it cut its estimate of corn yield.
After the first report, the price of corn spiked 9%. The second report, another 6%.
Joe Prusaki got quite a few emails.
Okay, the first one is, this was, thanks a lot for collapsing the grain market today with your stupid,
and the word is three letters, begins with an A, and then it has two dollar signs.
Gotcha, I know that word.
USDA report. As bad as the stench of dead bodies in Haiti must be,
it can't even compare to the foul stench of corruption emanating from our federal government
in Washington, D.C. It strikes me that there's room for trouble in that your forecasts are used
by a lot of different people who engage in a lot of different markets, and your research can move markets. I'm wondering what kind of bribes maybe come your way.
I have, it's really interesting. I have people that call, we call them fishers.
They call maybe a day or two days before, and it's like I tell them, I says,
why do you do this? We've had this discussion before.
This could do neither one of us good because I have to sign.
There's a couple of things.
One, I sign a confidentiality statement every year that says I shall not release any information before it's due time or bad things happen.
There's like a hundred, it's a hundred thousand dollar fine or time in prison.
And it's like, you know, the dollar fine, okay.
It's the prison part that bothers me.
But there's got to be a certain price at which.
So, so let's say I offered you, I came to you and I said, Joe, $10 million for, for a 24 hour head start on the corn forecast.
I'm not going to do it.
Trust me, somebody would track me down.
I hear you.
And again, the prison time, it bothers me.
All right, so Joe Prusaki probably can't be bought,
and the USDA is generally considered to do a pretty good job with crop forecasts.
But look how hard the agency has to work,
measuring cornfields row by row,
going back to look for animal loss and harvest loss, and still, its projection, which is only looking a few months into the future,
can get thrown totally out of whack by a little stretch of hot, dry weather.
That dry spell was essentially a random event.
Kind of like Tom Brady's knee getting smashed.
I hate to tell you this, but the future, it's full of random events.
That's why it's so hard to predict.
That's why it can be scary.
Now, do we know this?
Of course we know it.
Do we believe it?
Some scholars say that our need for prediction is getting worse,
or more accurately, that we get more upset now when the future surprises us.
After all, as the world becomes more rational and routinized, we often know what to expect.
I can get a Big Mac not only in New York, but in Beijing, too, and they'll taste pretty much the same.
So when you're used to that, and when things don't go as expected, watch out.
Our species has been trying to foretell the future forever.
Oracles and goat entrails and roosters pecking the dirt.
The oldest religious texts are filled with prediction.
I mean, look at the afterlife.
What is that if not a prediction of the future?
A prediction that, as far as I can tell, can never be categorically refuted or confirmed,
a prediction so compelling that it remains all these years later
a concept around which billions of people organize their lives.
So what do you see when you gaze into the future?
A yawning chasm of random events?
Or do you look for a neat pattern, even if no such pattern exists?
It's much more costly for someone to not detect a pattern.
That's Nassim Taleb, the author of Fooled by Randomness and The Black Swan.
It's much costlier for us as a race to make the mistake of not seeing a leopard than have the illusion of pattern and imagining a leopard when there's none.
And that error, in other words, mistaking the random for the non-random, it's what I call a one-way bias.
Now, that bias works extremely well because what's the big deal of
getting out of trouble?
It's not costing you anything.
But in the modern world it is
not quite harmless.
Illusional certainties make you think that things
that haven't exhibited risk
for example the stock market are riskless.
We have the turkey problem
the butcher
feeds the turkey for a certain number of days,
and then the turkey imagines that this is permanent.
The butcher feeds the turkey,
and the turkey imagines this is permanent.
So you've got to ask yourself,
who am I?
The butcher or the turkey?
Coming up, hedgehogs and foxes and a prediction that does work.
Here's a hint.
If you like this song,
you'll probably like this one too.
Decision 2000. Election Month.
From WNYC, this is Freakonomics Radio.
The Sunshine State will have plenty of sunshine for Al Gore.
NBC News projects that he wins the 25 electoral votes in the state of Florida.
You can pretty much take it to the bank.
Hey, guess what, Sunshine?
Al Gore didn't win Florida.
Didn't become president either.
Try walking that one back.
We don't just have egg on our face.
We've got omelet all over our suits.
So we are congenital predictors.
But our predictions are often wrong.
What then?
How do you defend your bad predictions?
I asked Philip Tetlock what all those political experts said when he showed them their results.
He had already stashed their excuses in a neat taxonomy. So if you thought that Gorbachev, for example, was a fluke, you might argue,
well, my understanding of the Soviet political system is fundamentally right,
and, but for some quirky statistical aberrations,
the Soviet Politburo would have gone for a more conservative candidate.
Another argument might be, well, I predicted that Canada would disintegrate,
that Quebec would secede from Canada, and it didn't secede,
but the secession almost succeeded because there was a 50.1 percent vote
against secession
and that's well within the margin of sampling error.
Are there others you want to name?
Well, another popular defense is off on timing.
That comes up quite frequently
in the financial world as well.
And many, many very sophisticated students of finance
have commented on how hard it is.
They're saying the market can stay irrational
longer than you can stay liquid,
I think is a George Soros expression.
So off on timing is a fairly popular belief system defense as well.
And I predicted that Canada would be gone.
And you know what?
It's not gone yet.
But just hold on.
You answered very economically when I asked you,
what are the characteristics of a bad predictor?
You used one word, dogmatism.
What are the characteristics then of a good one?
Capacity for constructive self-criticism.
How does that self-criticism come into play and actually change the course of the prediction?
Well, one sign that you're capable of constructive self-criticism is that you're not dumbfounded by the question, what would it take to convince you you're wrong?
If you can't answer that question, you could take that as a warning sign.
In his study, Tetlock found that one factor was more important than any other
in someone's predictive ability, cognitive style.
You know the story about the fox and the hedgehog?
Isaiah Berlin tells us that the quotation comes from the Greek warrior poet
Archilochus 2,500 years ago. And the rough translation was, the fox knows many things,
but the hedgehog knows one big thing. So talk to me about what the foxes do as predictors and what
the hedgehogs do as predictors. Sure. The foxes tend to have a rather eclectic, opportunistic approach to forecasting.
They're very pragmatic.
A famous aphorism by Deng Xiaoping was he didn't care if the cat was white or black as long as it caught mice.
And I think the attitude of many foxes, while they really didn't care whether ideas came from the left or the right,
they tended to deploy them rather flexibly in deriving predictions.
So they often borrowed ideas across schools of thought that hedgehogs viewed as more sacrosanct.
There are many subspecies of hedgehog, but what they have in common is a tendency to approach forecasting as a deductive top-down exercise.
They start off with some abstract principles, and they apply those abstract principles to messy real-world situations, and the fit is often decidedly imperfect.
So foxes tend to be less dogmatic than hedgehogs, which makes them better predictors.
But if you had to guess, who do you think is more likely to show up on TV or in an op-ed column, the pragmatic, nuanced fox or the know-it-all hedgehog?
You're going to be paying $7 or $8 a gallon for your oil very soon.
Doomsday scenario.
The policies are terrible.
The beginning of the end.
It's bullcrap.
You got it.
Hedgehogs, I think, are more attractive to the media.
Hedgehogs are more likely to offer quotable soundbites,
whereas foxes are more likely to offer rather complex, caveat-laden soundbites.
They're not soundbites anymore if they're complex and caveat-laden. So if you were to gain control of, let's say, a really big media outlet,
New York Times or NBC TV, and you said,
you know, I want to dispense a different kind of news and analysis
to the public. What would you do? How would you suggest building a mechanism to do a better job
of keeping all this kind of poor expert prediction off the airwaves?
I'm so glad you asked that question. I have some specific ideas about that, and I don't think they would be all that difficult to implement. I think they should try to keep score more. There's very little incentive for you to want to have
your accuracy tested because your followers are quite convinced that you're extremely accurate,
and it's pretty much a game you can only lose. Can you imagine every time a pundit appeared on TV,
the network would list his batting average right after his name and affiliation?
You think that might cut down on blowhard predictions just a little bit?
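What would that scorekeeping look like in practice? One standard tool in Tetlock's line of research is the Brier score, which punishes confident misses. Here is a minimal sketch; the pundits and their track records below are invented for illustration.

```python
# A minimal sketch of "keeping score" on pundits with the Brier score.
# Forecasts are probabilities that "the event happens"; outcomes are 1 or 0.
# All names and numbers are made up.
def brier(forecasts, outcomes):
    """Mean squared error of probability forecasts: 0 is perfect, 1 is worst."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

pundits = {
    "confident hedgehog": ([0.95, 0.90, 0.99, 0.90], [1, 0, 0, 1]),
    "hedged fox":         ([0.70, 0.40, 0.30, 0.60], [1, 0, 0, 1]),
}

for name, (forecasts, outcomes) in pundits.items():
    print("%-20s Brier score: %.3f" % (name, brier(forecasts, outcomes)))
# The blowhard's confident misses cost him dearly; the fox's caveats pay off.
```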
Looking back at what we've learned so far,
it makes me wonder.
Maybe the first step toward predicting the future
should be to acknowledge our limitations.
Or, at the very least, let's start small.
For instance, if I could tell you what kind of music I like,
and then you could predict for me some other music I'd want to hear, that actually already exists.
It's called Pandora Radio.
Here's co-founder Tim Westergren.
So what we've done is we've broken down recordings into their basic components,
so every dimension of melody and harmony and rhythm and form and instrumentation
down into kind of the musical equivalent of primary colors.
The Pandora database includes more than a million songs across every genre that you
or I could name.
Each song is broken down into as many as 480 musical attributes, almost like genetic code.
Pandora's organizing system is in fact called the Music Genome Project.
You tell the Pandora website a song you like, and it rummages through that massive genetic database to make an educated guess about what you want to hear next.
If you like that song, you press the thumbs up button, and Pandora takes note.
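Pandora has not published its matching algorithm, but the general idea of recommending by attribute similarity is easy to sketch. In the toy example below, only the two song titles from this segment are real; the attribute names, the scores, and the third song are invented for illustration.

```python
# A toy sketch of attribute-based matching in the spirit of the Music Genome
# Project. Pandora's real system (reportedly up to 480 attributes per song) is
# not public; everything below is a stand-in.
import math

songs = {
    "Train in Vain":       {"guitar_riff": 0.9, "twang": 0.6, "tempo": 0.7, "distortion": 0.5},
    "Town Called Malice":  {"guitar_riff": 0.8, "twang": 0.5, "tempo": 0.8, "distortion": 0.4},
    "Quiet Piano Ballad":  {"guitar_riff": 0.0, "twang": 0.1, "tempo": 0.2, "distortion": 0.0},
}

def cosine(a, b):
    """Cosine similarity between two attribute dictionaries."""
    keys = a.keys() & b.keys()
    dot = sum(a[k] * b[k] for k in keys)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

seed = "Train in Vain"
ranked = sorted(
    (title for title in songs if title != seed),
    key=lambda title: cosine(songs[seed], songs[title]),
    reverse=True,
)
print("Because you liked %s, try: %s" % (seed, ranked[0]))
```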
I wouldn't make the claim that Pandora can map your emotional persona.
And I also don't think, frankly, that Pandora can predict a hit.
Because I think it is very hard. It's a bit of a magic.
That's what makes music so fantastic.
So I think that, you know, we know our limitations.
But within those limitations, I think that we make it much, much more likely
that you're going to find that song that just really touches you.
So, Tim, you were good enough to set up a station for me here.
It's called Train in Vain Radio.
So the song we gave you was Train in Vain by The Clash.
So let me open up my radio station here, and I'll hit play and see what you got for me.
Oh, yeah.
Yeah, I like them.
That's the jam.
So I'm going to give it a thumbs up.
All right, so I like Town Called Malice.
I think there are a couple more songs in my station here.
Yeah.
Television, Tom Verlaine. He was always too cool for me.
I can see why you would think that I would like him, and I appreciate your effort, Mr. Pandora. How about you? Were you a Television fan?
Yeah, yeah. And you know, one thing, of course, is that these songs are all rooted in guitar riffs.
Yeah.
Those are repetitive motifs played on the guitar, and they're similar in sound. They've got a little twang, and they're played kind of rambly, sort of a little bit rough, which is that sort of punk element in there.
I've got to tell you, even though when this song came up and I heard it a few times I told you I didn't like Television very much, I'm kind of digging it now.
See, there you go.
That's exactly what we're trying to do.
It's a really great thing to do, but it's not really predicting the future the way most people think of it as predicting the future, is it?
Well, I certainly wouldn't put our mission in the same category as predicting the economy or geopolitical futures.
But, you know, the average American listens to 17 hours of music a week.
So they spend a lot of time doing it.
And I think that if we can make that a more enjoyable experience and more personalized,
I think maybe we'll make some kind of meaningful contribution to culture.
So Pandora does a pretty good job of predicting the music you might want to hear based on what you already know you like. But again, look how much effort that takes. 480 musical attributes.
And it's not really predicting the future, is it? All Pandora does is break down the confirmed musical preferences of one person today and come up with some more music that will fulfill that same person's preferences tomorrow.
If we really want to know the future, we probably need to get much more ambitious.
We probably need a whole new model.
Like, how about prediction markets? A prediction market creates an asset whose value depends on some claim about the future, like whether a person wins the presidency or a team wins a sporting contest.
And people trade that asset and the price of that asset becomes a forecast of whether that claim is likely to be true.
That's Robin Hanson.
He's an economics professor at George Mason University and an admitted advocate of prediction markets.
As Hanson sees it, a prediction market is far more reliable than other forecasting methods because it addresses the pesky incentive problems of the old-time prediction industry.
So a prediction market gives people an incentive, a clear personal incentive to be right and not wrong.
Equally important, it gives people an incentive to shut up when they don't know, which is often a problem with many of our other institutions. So if you as a reporter call up almost any academic and ask them various vaguely related
questions, they'll typically try to answer them just because they want to be heard.
But in a prediction market, most people don't speak up.
Every one of your listeners today had the right to go speak up on Orange Juice Futures
yesterday.
Every one of you could have gone and said,
Orange Juice Futures forecasts are too low or too high.
And almost no one did.
Why? Because most of you don't think you know.
And that's just the way we want it.
So in most of these prediction markets,
what we want is the few people who know the best to speak up
and everybody else to shut up.
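How does a trade actually move the forecast? One common mechanism in this literature is an automated market maker run on a logarithmic market scoring rule, a design associated with Hanson's own research. The sketch below is a toy version of such a market, not Intrade's or any real exchange's implementation; the liquidity setting and trade sizes are arbitrary.

```python
# A minimal sketch of a binary prediction market using a logarithmic market
# scoring rule. The YES contract's price can be read as a probability forecast.
import math

class LMSRMarket:
    def __init__(self, liquidity=100.0):
        self.b = liquidity          # higher b = prices move less per trade
        self.yes = 0.0              # net YES shares sold
        self.no = 0.0               # net NO shares sold

    def price_yes(self):
        """Current price of the YES contract, interpretable as a probability."""
        ey, en = math.exp(self.yes / self.b), math.exp(self.no / self.b)
        return ey / (ey + en)

    def cost(self):
        return self.b * math.log(math.exp(self.yes / self.b) + math.exp(self.no / self.b))

    def buy_yes(self, shares):
        """Return what a trader pays to buy `shares` YES contracts."""
        before = self.cost()
        self.yes += shares
        return self.cost() - before

market = LMSRMarket()
print("price before any trades: %.2f" % market.price_yes())   # 0.50
paid = market.buy_yes(60)   # a trader who thinks YES is underpriced buys in
print("trader paid %.2f; new implied probability: %.2f" % (paid, market.price_yes()))
```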
Prediction markets are flourishing.
Some of them are private.
A multinational firm might set up an internal market
to try to forecast when a big project will be done.
And there are for-profit prediction markets
like Intrade, based in Dublin,
where you can place a bet on, say,
whether any country that currently uses the euro
will drop the euro by the end of the year.
As I speak, that bet has a 15% chance on
Intrade. Here's another Intrade bet. Whether there will be a successful WMD terrorist attack
anywhere in the world by the end of 2013, that's got a 28% chance. Now, that's starting to sound
a little edgy, no? Betting on terrorism? Robin Hanson himself has a little experience in
this area on a U.S. government project he worked on. All right, so back in 2000, DARPA, the Defense
Advanced Research Projects Agency, had heard about prediction markets and they decided to fund a
research project. And they basically said, listen, we've heard this is useful for other things.
We'd like you to show us that this can be useful
for the kind of topics we are interested in.
Our project was going to be
forecasting geopolitical trends in the Middle East.
We were going to show that prediction markets
could tell you about economic growth,
about riots, about perhaps wars,
about whether there would be changes of heads of state,
and how these things would interact with each other.
In 2003, just as the project was about to go live, the press heard about it.
On Monday morning, two senators had a press conference where they declared that DARPA and the military were going to have a betting market on terrorism.
And so there was a sudden burst of media coverage.
And by the very next morning,
the head of the military basically declared before the Senate
that this project was dead.
And there was nothing more to worry about.
What do you think we collectively, you in particular,
would know now about that part of the world, let's say, if this market had been allowed to take root.
Well, I think we would have gotten much earlier warning about the revolutions we just had.
And if we would have had participants from the Middle East forecasting in those markets, not only would we get advance warning about which things might happen, but also how our actions could affect those. So for example, the United States
just came in on the side of the Libya rebels to support the Libya rebels against the Gaddafi
regime. What's the chances that will actually help the situation as opposed to make it worse?
But give me an example of what you consider among the hardest problems that a prediction
market could potentially help solve.
Who should, not only who should we elect for president, but whether we should go to war here
or whether we should begin this initiative
or should we approve this reform bill for medicine, etc.
So that sounds very logical, very appealing.
How realistic is it?
Well, it depends on there being a set of customers
who want this product.
So if prediction markets have an Achilles heel,
it's certainly the possibility that people don't really want accurate forecasts.
Prediction markets put a price on accountability.
If you're wrong, you pay. Simple as that.
Just like the proposed law against the witches in Romania.
Maybe that's what we need more of.
Here's Steve Levitt again.
When there are big rewards to people who make predictions and get them right,
and there's zero punishment for people who make bad predictions
because they're immediately forgotten,
then economists would predict that that's a recipe
for getting people to make predictions all the time.
Because the incentives are all encouraging you to make predictions.
Absolutely.
If you get it right, there's an upside.
And if you get it wrong, there's almost no downside.
Right.
If the flip side were that if I make a false prediction,
I'm immediately sent to prison for a one-year term,
there would be almost no prediction.
And all those football pundits and political pundits and financial pundits
wouldn't be able to wriggle out of their bad calls,
saying, my idea was right, but my timing was wrong.
I mean, that's how everybody does it.
That big storm the weatherman called but never showed up.
Oh, it happened all right, he says, but two states over.
Or how about all those predictions for the end of the world,
the apocalypse, the rapture, all that? Well, they say, we prayed so hard that God decided to spare us. You remember back in May when an 89-year-old preacher named Harold Camping declared
that the earth would be destroyed at 5.59 p.m. on a Saturday,
and only the true believers would survive?
I remember it very well because my 10-year-old son was petrified.
I tried telling him that Camping was a kook,
that anybody can pretty much say anything they want about the future.
Didn't help.
He couldn't get to sleep at night.
And then the 21st came and went and he was psyched.
I knew it all along, dad, he said. And then I asked him what he thought should happen to Harold
Camping, the false doomsday prophet. Oh, that's easy, he said. Off with his head. My son, he's not a bloodthirsty type.
But he's not a turkey either.
Freakonomics Radio is produced by WNYC, APM, American Public Media, and Dubner Productions.
Our producers include Elizabeth Giddens, Colin Campbell, Susie Lechtenberg, Chris Neary, and Diana Wynn.
We had help from Ellen Horn and Peter Clowney.
This episode was mixed by John DeLore.
If you want more Freakonomics Radio, you can subscribe to our podcast on iTunes or go to Freakonomics.com,
where you'll find lots of radio, a blog, the books, and more.