Freakonomics Radio - 41. The Folly of Prediction

Episode Date: September 14, 2011

Human beings love to predict the future, but we're quite terrible at it. So how about punishing all those bad predictions? ...

Transcript
Starting point is 00:00:00 We have found the witch, may we burn her? Burn her! What does it mean to be a witch exactly in Romania? Are these people that we know here as psychics or fortune tellers, or are they different somehow? I don't know how it is with fortune tellers in the United States, but here generally they are women of different ages. They say they can cure some diseases, they can bring back your husband or your wife, or they can predict your future. Who is a typical client for a witch?
Starting point is 00:00:37 There are quite a lot of politicians who are going to witches. You know that the French president Nicolas Sarkozy, he went to witches some years ago, and our president in Romania and very important politicians from different parties, they are going to witches. Some of them were obliged to recognize they went to witches. For some of them it's off-the-record information, but me being a journalist, I know that information. Vlad Miksic is a reporter in Bucharest, the capital of Romania. He knows a good bit about the witches there. Quite a lot of them, they are quite rich. They have very big houses with golden rooftops. And a lot of the Romanians, they are living in small apartments in blocks. So just going in such a building will give you a sense of majesty and respect.
Starting point is 00:01:40 But the Romanian witch industry has been under attack. First came a proposed law to regulate and tax the witches. It passed in one chamber of parliament before stalling out. But then came another proposal arguing that witches should be penalized if the predictions they make don't turn out to be true. Tell me, what do you do with witches? So if you are one of my clients and if I'm a fortune teller, if I fail to predict your future, I would pay quite a substantial fine to the state. Or if this happens many times, I will even go to jail.
Starting point is 00:02:22 The punishment is between six months and three years in jail. What's being proposed in Romania is revolutionary. It strikes me because we typically don't hold anybody accountable for bad predictions. So I'm wondering, in Romania, let's say if a politician makes a bad prediction, do they get fined or penalized in any way? No, not at all. In fact, this is one of the hobbies of our president. He's doing a lot of predictions which are not coming true, of course. And after that, he is reelected or his popularity is rising like the sun in the morning, you know.
Starting point is 00:03:09 No, anyone can do a lot of predictions here in Eastern Europe and not a single hair will move from his or her head. Come on, people, that doesn't seem fair, does it? I don't care if you're anti-witch or pro-witch or witch agnostic. Why should witches be the only people held accountable for bad predictions? What about politicians and money managers, sports pundits? And what about you? From WNYC and APM American Public Media, this is Freakonomics Radio. Today, the folly of prediction.
Starting point is 00:03:58 Here's your host, Stephen Dubner. All of us are constantly predicting the future, whether we think about it or not. Right now, some small part of your brain is trying to predict what this show is going to be about. Now, how do you do that? You factor in what you've heard so far, what you know about Freakonomics. Maybe you know a lot. Maybe you've never heard of it. You might think it's some kind of communicable disease. When you predict the future, you look for cognitive cues, for data, for guidance. Here's where I go for guidance. I think to an economist,
Starting point is 00:04:38 the best explanation for why there are so many predictions is that the incentives are set up in order to encourage predictions. That's Steve Levitt. He's my Freakonomics friend and co-author, an economist at the University of Chicago. So most predictions we remember are ones which were fabulously, wildly unexpected, and then came true. Now, the person who makes that prediction has a strong incentive to remind everyone that they made that crazy prediction which came true. You look at all the people, the economists who talked about the financial crisis ahead of time. Those guys harp on it constantly. I was right. I was right. I was right. But if you're wrong, there's no person
Starting point is 00:05:20 on the other side of the transaction who draws any real benefit from embarrassing you by bringing up the bad prediction over and over. So there's nobody who has a strong incentive usually to go back and say, here's the list of the 118 predictions that were false. And I remember growing up, my mother, who's somewhat of a psychic, would predict a stock. Wait, somewhat of a psychic? She's a self-proclaimed psychic. And she would predict a stock market crash every single year. Right. And she's been right a couple times.
Starting point is 00:05:52 And she has been. She's been right twice in the last 15 years. And she would talk a lot about the times she was right. I'd have to remind her about the 13 times that she was wrong. And without any sort of market mechanism or incentive for keeping prediction makers honest, there's lots of incentive to go out and to make these wild predictions. And those are the ones that are remembered and talked about. Think about one of the predictions that you hear echoed more often than almost anyone is Joe Namath's famous pronouncement about how the
Starting point is 00:06:23 Jets were going to win the Super Bowl. And it was unexpected, and it happened. Now, if the Jets had lost the Super Bowl, nobody would remember that Joe Namath made that pronouncement. And conversely, you can probably find at least one player on every team that's lost the Super Bowl in the last 40 years who did predict that his team would win. That's probably right. Exactly right. Now, the flip side, which is perhaps surprising, is that in many cases, the goal of prediction is to be completely within the pack. And so I see this a lot with pension fund managers or endowment managers, which is if something goes wrong, then as long as everybody else made the same prediction, you can't be faulted very much. Pension managers, football players, psychic moms, Romanian witches.
Starting point is 00:07:14 Who doesn't try to predict the future these days? Go ahead and make your predictions. Start with the Masters. I'm going to lean towards Phil. I like his new putting stroke. The Old Farmer's Almanac predicts a colder than usual winter. Bear Stearns is fine. Do not take your money out. If there's one takeaway: Bear Stearns is not in trouble. Fannie and Freddie are fundamentally sound, but they are not in danger of going under. Paul the Octopus is making yet another World Cup prediction, betting on Spain. We will in fact find evidence of weapons programs.
Starting point is 00:07:52 Gaddafi will ultimately step down. And you know the worst thing? There's almost nobody keeping track of all those predictions. Nobody. Except for this guy. Well, I'm a research psychologist who... Don't forget your name, though. I'm Phil Tetlock, and I'm a research psychologist.
Starting point is 00:08:11 I've spent most of my career at the University of California, Berkeley, and I recently moved to the University of Pennsylvania, where I'm cross-appointed in the Wharton School in the psychology department. Philip Tetlock has done a lot of research on cognition and decision-making and bias, pretty standard stuff for an Ivy League psych PhD. But what really fascinates him is prediction. There are a lot of psychologists who believe that there is a hardwired human need to believe that we live in a fundamentally predictable and controllable universe.
Starting point is 00:08:40 There's also a widespread belief among psychologists that people try hard to impose causal order on the world around them, even when those phenomena are random. This hardwired human need, as Tetlock puts it, has created what he calls a prediction industry. Now, don't sneer. You're part of it, too. I think there are many players in what you might call the prediction industry. In some sense, we're all players in it. Whenever we go to a cocktail party or a colloquium or whatever, where opinions are being shared, we frequently make likelihood judgments about possible futures and the truth or falsity of particular claims about futures. The prediction business is a big business on Wall Street. And we have futures markets and so forth
Starting point is 00:09:25 designed to regulate speculation in those areas. Obviously, governments have great interest in prediction. They create large intelligence agency bureaucracies and systems to help them achieve some degree of predictability in a seemingly chaotic world. Let me read you something that you have said or written in the past, that this determination to ferret out order from chaos has served our species well. We are all beneficiaries of our great collective successes in the pursuit of deterministic regularities and messy phenomena, agriculture, antibiotics, and countless other inventions. So talk to me for a moment about the value of prediction. Obviously, there's much has been gained, much to be gained. Do we overvalue prediction though, perhaps? I think there's an asymmetry of supply and demand. I think there
Starting point is 00:10:13 is an enormous demand for accurate predictions in many spheres of life in which we don't have the requisite expertise to deliver. And when you have that kind of gap between demand and real supply, you get the infusion of fake supply. Fake supply. I like this guy, this Philip Tetlock. He's not an economist, but he knows the laws of supply and demand can't just be revoked. So if there's big demand for prediction in all realms of life and not enough real supply to satisfy it, what does this fake supply sound like?
Starting point is 00:10:49 One could say it could. There could be terrorist threats. The economy could remain the driving factor. Could, could. Could, could. Some of that radioactivity could carry in the atmosphere to the west coast of the United States. There could be an interesting battle on the cards when the Oscar nominations are announced. Gold could be peaking its head above $1,600.
Starting point is 00:11:06 It could certainly get to $2,000. Gold could go to $3,500 an ounce. There's a punditocracy out there, a class of people who predict ad nauseum, often on television. They can be pretty good at making their predictions tough to audit. It's the art of appearing to go out on a limb
Starting point is 00:11:25 without actually going out on a limb. So for example, the word could, something could happen. So the room you happen to be sitting in could be struck by a meteor in the next 23 seconds. That makes perfect sense. But the probability is, of course, 0.00001. It's not zero, but it's extremely low. In fact, the word could, the possible meanings people attach to it range from about 0.01 to about 0.6, which covers more than half of the probability scale right there. It could happen to you. Look, nobody likes a weasel. So more than 20 years ago, Tetlock set out to conduct one of the largest empirical studies ever of predictions.
Starting point is 00:12:07 He chose to focus on predictions about political developments around the world. He enlisted some of the world's foremost experts, the kind of very smart people who have written definitive books, who show up on CNN or on the Times' op-ed page. In the end, we had close to 300 participants, and they were very sophisticated political observers. Virtually all of them had some postgraduate education. Roughly two-thirds of them had PhDs. They were largely political scientists, but there were some economists and a variety of other professionals as well. And they all participated in your study anonymously, correct? That was a very important condition for obtaining cooperation. Now, if they were not anonymous, presumably we would recognize some of their names. These are prominent people at political science departments and economics departments at,
Starting point is 00:12:53 I'm guessing, some of the better universities around the world. Is that right? Well, I don't want to say too much more, but I think you would recognize some of them, yes. I think some of them had substantial Google counts. The study became the basis of a book Tetlock published a few years ago called Expert Political Judgment. There were two major rounds of data collection, the first beginning in 1988, the other in 1992. These nearly 300 experts were asked to make predictions about dozens of countries around the world. The questions were multiple choice. For instance, in Democracy X, let's say it's England,
Starting point is 00:13:31 should we expect that after the next election, the current majority party will retain, lose, or strengthen its status? Or for undemocratic country Y, Egypt maybe, should we expect the basic character of the political regime to change in the next five years? In the next ten years? And if so, in what direction? Into what effect? The experts made predictions within their areas of expertise and outside.
Starting point is 00:13:55 And they were asked to rate their confidence level for their predictions. So, after tracking the accuracy of about 80,000 predictions by some 300 experts over the course of 20 years, Philip Tetlock found that experts thought they knew more than they knew, that there was a systematic gap between the subjective probabilities that experts were assigning to possible futures and the objective likelihoods of those futures materializing. Let me translate that for you. The experts were pretty awful. Now, you may think, awful compared to what? Did they beat a monkey with a dartboard?
Starting point is 00:14:35 Oh, the monkey with the dartboard comparison. That comes back to haunt me all the time. With respect to how they did relative to, say, a baseline group of Berkeley undergraduates making predictions, they did somewhat better than that. Did they do better than an extrapolation algorithm? No, they did not. They did, for the most part, a little bit worse than that. How did they do relative to purely random guessing strategy? Well, they did a little bit better than that, but not as much as you might hope. That extrapolation algorithm that Tetlock mentioned, that's simply a computer programmed to predict no change in current situation. So it turned out that these smart, experienced, confident experts predicted the political future about as well, if not slightly worse, than the average daily reader of The New York Times. I think the most important takeaway would be that the experts think they know more than they do. They were systematically overconfident.
Starting point is 00:15:31 Some experts were really massively overconfident. And we were able to identify those experts based on some of their characteristics of their belief system and their cognitive style, their thinking style. Okay, so now we're getting into the nitty-gritty of what makes people predict well or predict poorly. What are the characteristics, then, of a poor predictor? Dogmatism. It can be summed up that easily. I think so. I think an unwillingness to change one's mind in a reasonably timely way in response to new evidence, a tendency when asked to explain one's predictions to generate only reasons that favor your preferred prediction and not to generate reasons opposed to it.
Starting point is 00:16:20 And I guess what's striking to me, and I'd love to hear what you have to say about this, is that it's easy to apply one word, prediction, to many, many, many different realms in life. But those realms all operate very differently. So politics is different from economics and predicting a sports outcome is different than predicting an agricultural outcome. But it seems that we don't distinguish so much necessarily and that there's this modern sense almost that anything can be and should be able to be predicted. Am I kind of right on that or no? I think there's a great deal of truth to that. I think it is very useful in talking about the predictability of the modern world to distinguish those aspects of the world that show a great deal of linear regularity, and those parts of the world that seem to be driven by complex systems that
Starting point is 00:17:07 are decidedly nonlinear and decidedly difficult, if not impossible, to predict. Talk to me about a few realms that generally are very, very hard to predict and a few realms that generally are much easier. Predicting Scandinavian politics is a lot easier than predicting Middle Eastern politics. Yes, that was the first one that came to my mind, too. All right. But keep going. The thing about the radically unpredictable environments is that they often appear for very long periods of time to be predictable. So, for example, if you had been a political forecaster predicting regime longevity in the Middle East, you would have done extremely well predicting, you know, in Egypt, that Mubarak would continue to be president of Egypt year after
Starting point is 00:17:50 year after year, in much the same way that if you'd been a Sovietologist, you'd have done very well in the Brezhnev era predicting continuity. There's an aphorism I quote in the Expert Political Judgment book from Karl Marx. I'm obviously not a Marxist, but I thought that's a beautiful aphorism that he had, which was that when the train of history hits a curve, the intellectuals fall off. Coming up, who do you predict we'll hear from next? A bunch of people who are awesomely good at predicting the future? Yeah, right. Maybe later. First, we'll hear some more duds from Wall Street, the NFL, and the cornfield. Went to the fortune teller.
Starting point is 00:18:35 Had my fortune read. I didn't know what to tell. I had a good time. Sometimes I can see the future. I don't know what to tell. Welcome to the future. Here's your host, Stephen Dubner. So Philip Tetlock has sized up the people who predict the future, geopolitical change for instance, and determined that they're not very good at predicting the future. He also tells us that their greatest flaw is dogmatism, sticking to their ideologies even when presented with evidence that they're wrong. You buy that? I buy it. Politics is full of ideology. Why shouldn't the people who study
Starting point is 00:19:31 politics be at least a little bit ideological? So let's try a different set of people, people who make predictions that theoretically at least have nothing to do with ideology. Let's go to Wall Street. The experts on Wall Street are falling all over themselves to predict the year ahead. Rising inflation and rates will send the U.S. in a downward spiral. Higher interest rates are going to hurt the housing market. They're going to make money over time. I would bet on it. Amazon.
Starting point is 00:20:00 Cisco. Pfizer. Healthcare mutual funds. General Motors. Even good old IBM. Don't hit the sell button just yet. I'm Christina Fang, a professor of management at New York University's business school. Christina Fang, like Philip Tetlock, is fascinated with prediction.
Starting point is 00:20:20 Well, I guess generally forecasting about anything, about technology, about product, whether it will be successful, about whether an idea, a venture idea could take off. A lot of things, not just economic, but also business in general. Feng wasn't interested in just your street-level predictions, though. She wanted to know about the big dogs, the people who make bold economic predictions that carry price tags in the many millions or even billions of dollars. Along with a fellow researcher, Yerker Denrell, Fang gathered data from the Wall Street Journal's Survey of Economic Forecasts.
Starting point is 00:20:55 Every six months, the paper asked about 50 top economists to predict a set of macroeconomic numbers, unemployment, inflation, gross national product, things like that. Fang audited seven consecutive surveys with an eye toward a particular question. When someone correctly predicts an extreme event, a market crash maybe, or a sudden spike in inflation, what does that say about his overall forecasting ability? In the Wall Street Journal survey, if you look at the extreme outcomes, either extremely bad outcomes and extremely good outcomes,
Starting point is 00:21:29 you see that those people who correctly predicted either extremely good or extremely bad outcomes, they're likely to have overall lower level of accuracy. In other words, they're doing poorer in general.
Starting point is 00:21:42 Uh-oh. You catching this? Those people who happen to predict accurately the extreme events, they happen to also have a lower overall level of accuracy. So I can be right on the big one, but if I'm right on the big one,
Starting point is 00:21:55 I generally will tend to be more often wrong than the average person. On average. On average. Across everyday predictions as well. And our research suggests that for someone who has successfully predicted those events, we are going to predict that they are not likely to repeat their success very often. In other words, their overall capability is likely to be not as impressive as their apparent success seems to be. So the people who make big, bold, correct predictions
Starting point is 00:22:26 are, in general, worse than average at predicting the economic future. Now, why is this a problem? Maybe they're just like home run hitters. You know, a lot of strikeouts, but a lot of power too. All right, I'll tell you why it's a problem. Actually, I'll have Steve Levitt tell you. The incentives for prediction makers are to make either cataclysmic or utopian predictions, right?
Starting point is 00:22:51 Because you don't get attention. If I say that what's going to happen tomorrow is exactly the same as what happened today. You don't get on TV. I don't get on TV. If it happens to come true, who cares? I don't get any credit for it coming true either. There's a strong incentive to make extreme predictions. Because seriously, who tunes in to hear some guy say that next year will be pretty much like last year?
Starting point is 00:23:13 And then once you have been right on an extreme forecast, let's say you predicted the 2008 market crash and the Great Recession, even if you were predicting it every year like Steve Levitt's mother, you'll still be known as the guy who called the big one. And even if all your follow-up predictions are wrong, you still got the big one right. Like Joe Namath. The third annual Super Bowl game between the Baltimore Colts and the New York Jets. Baltimore has far too many weapons. Well, I like the Baltimore Colts.
Starting point is 00:23:44 I've been so impressed by the Colts. Guys, well, wait a minute. We're predicting the political future, those are hard. Those are big, complex systems with lots of moving parts. So how about football? If you're an NFL expert, how hard can it be to forecast, say, who the best football teams will be in a given year? We asked Freakonomics researcher Hayes Davenport to run the numbers for us. Well, I looked at the past three years of expert picking from the major NFL prediction outlets, which are USA Today, Sports Illustrated dot com and ESPN dot com. We looked at one hundred and five sets of picks total. They're picking the division winner for each year as well as the wild card for that year. So they're basically picking the whole playoff picture for that year.
Starting point is 00:24:42 So talk about just kind of generally the degree of difficulty of making this kind of a pick. Well, if you're sort of an untrained animal making NFL picks, you're going to have about a 25 percent chance of picking each division correctly because there are only four teams. All right. So, Hayes, you're saying that an untrained animal would be about 25 percent accurate if you just pick one out of four. But what about a trained animal like me, a casual fan? How do I do compared to the experts? Right. So if you're cutting off the worst team in each division, if you're not picking from among those, you'd be right about 33 percent of the time, one in three. And the experts are right about 36 percent of the time. So just a little
Starting point is 00:25:21 better than that. OK, so if you're saying they'll pick at about 36 percent accuracy and I or someone by chance would pick at about 33 percent accuracy. So that's a three percentage point improvement or about 10 percent better. Maybe we should say, well, you know, that's not bad. If you beat the stock market by 10 percent every year, you'd be doing great. So should we think of these NFL pundits as picking 36% right being really wonderful? I wouldn't say that because there's a specific fallacy that these guys are operating from, which is they tend to rely much too heavily on the previous year's standings in making their picks for the following year. They play it very conservatively, but there's a very high level of parity in the NFL right now, so that's not exactly how it works. Tell me some of the pundits who, whether by luck or brilliance and hard work,
Starting point is 00:26:15 turn out to be really, really good. Sure. There are two guys from ESPN who are sort of far ahead of the field. One is Pat Yacinkas, and the other is John Clayton, who is actually pretty well known. He makes a lot of appearances on SportsCenter. He's kind of a nebbishy, professorial type. And they perform much better than everyone else because they're excellent wildcard pickers. They are the only people who have correctly predicted both wildcard teams in a conference in a season. But they're especially good because they actually play it much safer than everyone else. Now you say that they're very good. Persuade me that they're good and not lucky. I can't do that.
Starting point is 00:26:56 There's a luck factor involved in all these predictions. For example, if you pick the Patriots in 2008 and Tom Brady gets injured and they drop out of the playoffs, there's very little you can do to predict that. So injuries will mess with predictions all the time and other like turnover rates in football that are sort of unpredictable. So there's a there's a luck factor to all of this. So whether it's football experts calling Sunday's game or economists forecasting the economy or political pundits looking for the next revolution, we're talking about accuracy rates that barely beat a coin toss. But maybe all these guys deserve a break. Maybe it's just inherently hard to predict the future of other human beings. They're so malleable, so unpredictable. So how about a prediction where human beings are incidental to the main action?
Starting point is 00:27:53 I'm Joe Prusaki, and I am director of statistics division with USDA's National Agricultural Statistics Service, or NASS for short. You grew up on a farm, yeah? Yep, I grew up in, I always call it deep southern Illinois. I'm sitting here in Washington, D.C., and where I grew up in Illinois is further south than where I'm sitting today. We raised, we had corn, soybeans, and raised hogs. You've heard of Anna Wintour, right? The fabled editor of Vogue magazine? Joe Prisocki is kind of like Anna Wintour, right? The fabled editor of Vogue magazine? Joe Prusaki is kind of like Anna Wintour for farmers.
Starting point is 00:28:29 He puts out publications that are read by everyone who's anyone in the industry. Titles like acreage and prospective plantings and crop production. Prusaki's reports carry running forecasts of crop yields for cotton, soybeans, wheat, and corn. Most of the time, our monthly forecasts are probably within, I can guarantee you, within 5%, and most of the time I can say within 2% to 3% of the final. And someone would say, well, that seems very good. But in the agricultural world, the day users expect us to be much more precise in our forecast. So how does this work?
Starting point is 00:29:13 How does the USDA forecast something as vast as the agricultural output of American farmers? Like at the beginning of March, we will conduct a large survey of farmers and ranchers across the United States. The sample size this time this year was about 85,000. The farmers are asked how many acres they plan to devote to each crop, corn, let's say. Then, in late July, the USDA sends out a small army of enumerators into roughly 1,900 cornfields in 10 states. These guys mark off plots of corn, 20 feet long by two rows across. They're randomly placed. We have randomly selected fields in random location within the field. So you may get a sample that's maybe 20 paces into the field
Starting point is 00:29:59 and 40 rows over, and you may get one that's 250 paces in the field and 100 rows over. The enumerators look at every plant in that plot. And then they'll count what they see or anticipate to be ears based on looking at the plant. A month later, they go back out again and check the corn stalks, check the ears. Well, you could have know animal loss you know animal might cheer the plant off you may the plant may die so all along we're updating the number of plants all along we're updating the number of ears. The other thing we need you need an estimate of ear weight or fruit weight. So they go out again cut off a bunch of ears and weigh them. But wait, still not done. After the harvest, there's one more round of measurement.
Starting point is 00:30:53 Once the field is harvested, the machine has gone through the field, the enumerator will go back out to the field. They'll lay out another plot just beyond our harvest area where we were. And they will go through and pick up off the ground any kernels that are left on the ground, pieces of ears of corn and such on the ground, so we get a measure of harvest loss. So this sounds pretty straightforward, right? Compared to predicting something like the political or economic future, estimating corn yield based on constant physical measurements of corn plants is pretty simple. Except for one thing. It's called the weather. Officials declare a drought
Starting point is 00:31:34 watch for the entire state. Weather remains so hard to predict in the long term that the USDA doesn't even use forecasts., uses historic averages instead. So Joe, talk to me about what happened last year with the USDA corn forecast. You must have noticed this was coming from me. So the Wall Street Journal's headline was USDA flubs in predicting corn crops. Explain what happened. Well, this is the weather factor that came into play. It turned off pretty hot and
Starting point is 00:32:05 pretty dry. And I had asked a few folks that are out and about in Iowa, what happened? They said, this is just a really strange year. We just don't know. Now, if someone says, did we flub it? Well, I don't know. I mean, it was the forecast based on the information I had as of August 1. September 1, I had a different set of information. October 1, I had a different set of information. Could we have did a better job? A lot of people thought they could have. Last June, the USDA lowered its estimate of corn stockpiles, and in October, it cut its estimate of corn yield. After the first report, the price of corn spiked 9%. The second report, another 6%. Joe Prusaki got quite a few emails. Okay, the first one is, this was,
Starting point is 00:32:54 thanks a lot for collapsing the grain market today with your stupid, and the word is three letters, begins with an A and then it has two dollar signs. Gotcha, I know that word. USD report. As bad as the stench of dead bodies in Haiti must be, it can't even compare to the foul stench of corruption emanating from our federal government in Washington, D.C. It strikes me that there's room for trouble in that your forecasts are used by a lot of different people who engage in a lot of different markets, and your research can move markets. I'm wondering what kind of bribes maybe come your way. I have – it's really interesting. I have people that call – we call them fishers. They call maybe a day or two days before, and it's like I tell them, I says, why do you do this?
Starting point is 00:33:45 We've had this discussion before. This could do neither one of us good because I have to sign. There's a couple things. One, I sign a confidentiality statement every year that says I shall not release any information before it's due time or bad things happen. There's like a hundred, it's a hundred thousand dollar fine or time in prison. And it's like, you know, the dollar fine. Okay.
Starting point is 00:34:12 That's the prison part that bothers me. But there's got to be a certain price at which. So let's say I offered you, I came to you and I said, Joe, $10 million for a 24-hour head start on the corn forecast. I'm not going to do it. Trust me, somebody would track me down. I hear you. Again, the prison time, it bothers me. All right, so Joe Prusaki probably can't be bought,
Starting point is 00:34:40 and the USDA is generally considered to do a pretty good job with crop forecasts. But look how hard the agency has to work, measuring cornfields row by row, going back to look for animal loss and harvest loss. And still, its projection, which is only looking a few months into the future, can get thrown totally out of whack by a little stretch of hot, dry weather. That dry spell was essentially a random event. Kind of like Tom Brady's knee getting smashed. I hate to tell you this, but the future, it's full of random events.
Starting point is 00:35:21 That's why it's so hard to predict. That's why it can be scary. Now, do we know this? Of course we know it. Do we believe it? Some scholars say that our need for prediction is getting worse. Or, more accurately, that we get more upset now when the future surprises us. After all, as the world becomes more rational and routinized, we often know what to expect. I can get a Big Mac not only
Starting point is 00:35:53 in New York, but in Beijing too, and they'll taste pretty much the same. So when you're used to that, and when things don't go as expected, watch out. Our species has been trying to foretell the future forever. Oracles and goat entrails and roosters pecking the dirt. The oldest religious texts are filled with prediction. I mean, look at the afterlife. What is that if not a prediction of the future? A prediction that, as far as I can tell, can never be categorically refuted or confirmed. A prediction so compelling that it remains all these years later a concept around which billions of people organize their lives. So what do you see when you gaze into the future? A yawning chasm of random events?
Starting point is 00:36:47 Or do you look for a neat pattern, even if no such pattern exists? It's much more costly for someone to not detect a pattern. That's Nassim Taleb, the author of not seeing a leopard than have the illusion of pattern and imagining a leopard when there's none. And that error, in other words, mistaking the non-random for the random, which is what I call the one-way, it's a bias.
Starting point is 00:37:23 Now, that bias works extremely well because what's the big deal of getting out of trouble? It's not costing you anything. But in the modern world, it is not quite harmless. This illusion of certainties makes you think that things that haven't exhibited risk, for example, the stock market, are riskless. We have the turkey problem. The butcher feeds the turkey for a certain number of days, and then the turkey imagines that this is permanent.
Starting point is 00:37:54 The butcher feeds the turkey, and the turkey imagines this is permanent. So you've got to ask yourself, who am I, the butcher or the turkey? Coming up, hedgehogs and foxes and a prediction that does work. Here's a hint. If you like this song, you'll probably like this one, too.
Starting point is 00:38:45 Decision 2000 Election Month From American Public Media and WNYC, this is Freakonomics Radio. The Sunshine State will have plenty of sunshine for Al Gore. NBC News projects that he wins the 25 electoral votes in the state of Florida. You can pretty much take it to the bank. Hey, guess what, Sunshine? Al Gore didn't win Florida. Didn't become president either. Try walking that one back. We don't just have egg in our face. We've got omelet all over our suits. So we are congenital predictors, but our predictions are
Starting point is 00:39:18 often wrong. What then? How do you defend your bad predictions? I asked Philip Tetlock what all those political experts said when he showed them their results. He had already stashed their excuses in a neat taxonomy. So if you thought that Gorbachev, for example, was a fluke, you might argue, well, my understanding of the Soviet political system is fundamentally right, and the Soviet Politburo, but for some quirky statistical aberrations, the Soviet Politburo would have gone for a more conservative candidate. Another argument might be, well, I predicted that Canada would disintegrate, that Quebec would secede from Canada, and it didn't secede, but the secession almost succeeded
Starting point is 00:40:01 because there was a 50.1 percentage vote against secession, and that's well within the margin of sampling error. Are there others you want to name? Well, another popular prediction is off on timing. That comes up quite frequently in the financial world as well. Many very sophisticated students of finance have commented on how hard it is. They're saying the market can stay irrational longer than you can stay liquid, I think is a George Soros expression. So off on timing is a fairly popular belief system defense as well. And I
Starting point is 00:40:31 predicted that Canada would be gone. And you know what, it's not gone yet. But just hold on. You answered very economically when I asked you what are the characteristics of a bad predictor, used one word, dogmatism. What are the characteristics then of a good one? Capacity for constructive self-criticism. How does that self-criticism come into play and actually change the course of the prediction? Well, one sign that you're capable of constructive self-criticism is that you're not dumbfounded by the question, what would it take to convince you you're wrong? If you can't answer that question, you could take that as a warning sign.
Starting point is 00:41:07 In his study, Tetlock found that one factor was more important than any other in someone's predictive ability, cognitive style. You know the story about the fox and the hedgehog? Isaiah Berlin tells us that the quotation comes from the Greek warrior poet Archilochus 2,500 years ago. And the rough translation was, the fox knows many things, but the hedgehog knows one big thing. So talk to me about what the foxes do as predictors and what the hedgehogs do as predictors. Sure. The foxes tend to have a rather eclectic, opportunistic approach to forecasting. They're very pragmatic.
Starting point is 00:41:48 A famous aphorism by Deng Xiaoping was he didn't care if the cat was white or black as long as it caught mice. And I think the attitude of many foxes, while they really didn't care whether ideas came from the left or the right, they tended to deploy them rather flexibly in deriving predictions. So they often borrowed ideas across schools of thought that hedgehogs viewed as more sacrosanct. There are many subspecies of hedgehog. But what they have in common is a tendency to approach forecasting as a deductive top-down exercise. They start off with some abstract principles, and they apply those abstract principles to messy real-world situations. And the fit is often decidedly imperfect.
Starting point is 00:42:28 So foxes tend to be less dogmatic than hedgehogs, which makes them better predictors. But if you had to guess, who do you think is more likely to show up on TV or in an op-ed column? The pragmatic, nuanced fox? Or the know-it-all hedgehog? You're going to be paying $7 or $8 a gallon for your oil very soon. Doomsday scenario. The policies are terrible. The beginning of the end.
Starting point is 00:42:52 It's bullcrap. You got it. Hedgehogs, I think, are more attractive to the media. Head hedgehogs are more likely to offer quotable soundbites, whereas foxes are more likely to offer rather complex, caveat-laden soundbite. They're not soundbites anymore if they're complex and caveat-laden. So if you were to gain control of, let's say, a really big media outlet, New York Times or NBC TV, and you said, you know, I want to dispense a different kind of news and analysis to the
Starting point is 00:43:27 public. What would you do? How would you suggest building a mechanism to do a better job of keeping all this kind of poor expert prediction off the airwaves? I'm so glad you asked that question. I have some specific ideas about that, and I don't think they would be all that difficult to implement. I think they should try to keep score more. I think there's remarkably little effort in tracking accuracy. I mean, if you happen to be someone like Tom Friedman or Paul Krugman or someone who's at the top of the pundit pecking order,
Starting point is 00:44:02 there's very little incentive for you to want to have your accuracy tested because your followers are quite convinced that you're extremely accurate, and it's pretty much a game you can only lose. Can you imagine every time a pundit appeared on TV, the network would list his batting average right after his name and affiliation? You think that might cut down on blowhard predictions just a little bit? Looking back at what we've learned so far, it makes me wonder. Maybe the first step toward predicting the future should be to acknowledge our limitations. Or,
Starting point is 00:44:33 at the very least, let's start small. For instance, if I could tell you what kind of music I like, and then you could predict for me some other music I'd want to hear, that actually already exists. It's called Pandora Radio. Here's co-founder Tim Westergren. So what we've done is we've broken down recordings into their basic components for every dimension of melody and harmony and rhythm and form and instrumentation down into kind of the musical equivalent of primary colors. The Pandora database includes more than a million songs across every genre that you or I could name. Each song is broken down into as many as 480 musical attributes, almost like genetic code. Pandora's organizing system is in fact called the Music Genome Project.
Starting point is 00:45:26 You tell the Pandora website a song you like, and it rummages through that massive genetic database to make an educated guess about what you want to hear next. If you like that song, you press the thumbs up button, and Pandora takes note. I wouldn't make the claim that Pandora takes note. within those limitations, I think that we make it much, much more likely that you're going to find that song that just really touches you. So Tim, you were good enough to set up a station for me here. It's called Train in Vain Radio. So the song we gave you was Train in Vain by The Clash. So let me open up my radio station here and I'll hit play and see what you got for me. Oh yeah. Yeah, I like them. That's the jam. So I'm going to give it a thumbs up. All right. So I like Town Called Malice. I think there are a couple more songs in my station here. Television, Tom Verlaine.
Starting point is 00:46:45 He was always too cool for me. I can see why you would think that I would like him. And I appreciate your effort, Mr. Pandora. How about you? Were you a television fan? Yeah, yeah. And you know, one thing, of course, is that these songs are all rooted in guitar riffs.
Starting point is 00:46:59 Yeah. Those are repetitive motifs played on the guitar. And they're a similar sound, and they've got a little twang, and they're played kind of rambly, sort of a little bit rough, which is that sort of punk element in there. I got to tell you, even though when this song came up, and I heard the song a few times, and I told you I didn't like television very much of this song,
Starting point is 00:47:23 I'm kind of digging it now. See, there you go. That's exactly what we're trying to do. It's a really great thing to do, but it's not really predicting the future the way most people think of it as predicting the future, is it? Well, I certainly wouldn't put our mission in the same category as predicting the economy or geopolitical futures. But, you know, the average American listens to 17 hours of music a week. So they spend a lot of time doing it. And I think that if we can make that a more enjoyable experience and more personalized, I think maybe we'll make some kind of meaningful contribution to culture. So Pandora does a pretty good job of predicting the music you might want to hear
Starting point is 00:48:09 based on what you already know you like. But again, look how much effort that takes. 480 musical attributes. And it's not really predicting the future, is it? All Pandora does is breaks down the confirmed musical preferences of one person today and comes up with some more music that'll fulfill that same person's preferences tomorrow. If we really want to know the future, we probably need to get much more ambitious. We probably need a whole new model.
Starting point is 00:48:40 Like, how about prediction markets? A prediction market mechanically is basically like a betting market or a speculative market like orange juice futures or stock markets, things like that. The mechanics is that there's an asset of some sort that pays off if something's true, like whether a person wins the presidency or a team wins a sporting contest. And people trade that asset and the price of that asset becomes a forecast of whether that claim is likely to be true. That's Robin Hansen. He's an economics professor at George Mason University and an admitted advocate of prediction markets. As Hansen sees it, a prediction market is far more reliable than other forecasting methods because it addresses the
Starting point is 00:49:25 pesky incentive problems of the old-time prediction industry. So a prediction market gives people an incentive, a clear personal incentive, to be right and not wrong. Equally important, gives people an incentive to shut up when they don't know, which is often a problem with many of our other institutions. So if you as a reporter call up almost any academic and ask them various vaguely related questions, they'll typically try to answer them just because they want to be heard. But in a prediction market, most people don't speak up. Every one of your listeners today had the right to go speak up on orange juice futures yesterday. Every one of you could have gone and said, orange juice futures forecasts are too low or too high.
Starting point is 00:50:10 And almost no one did. Why? Because most of you don't think you know. And that's just the way we want it. So in most of these prediction markets, what we want is the few people who know the best to speak up and everybody else to shut up. Prediction markets are flourishing. Some of them are private.
Starting point is 00:50:24 A multinational firm might set up an internal market And everybody else to shut up. Prediction markets are flourishing. Some of them are private. A multinational firm might set up an internal market to try to forecast when a big project will be done. And there are for-profit prediction markets like Intrade, based in Dublin, where you can place a bet on, say, whether any country that currently uses the euro will drop the euro by the end of the year. As I speak, that bet has a 15% chance on in-trade. Here's another in-trade bet. Whether there will be a successful WMD terrorist attack anywhere in the world by the end of 2013, that's got a 28% chance. Now, that's starting to sound a little edgy, no? Betting on terrorism? Robin Hanson himself has a little experience in this area on a U.S. government project he worked on. All right, so back in 2000, DARPA, the Defense Advanced Research Projects Agency,
Starting point is 00:51:17 had heard about prediction markets, and they decided to fund a research project. And they basically said, listen, we've heard this is useful for other things. We'd like you to show us that this can be useful for the kind of topics we are interested in. Our project was going to be forecasting geopolitical trends in the Middle East. We were going to show that prediction markets could tell you about economic growth, about riots, about perhaps wars, about whether changes of heads of state and how these things would interact with each other. In 2003, just as the project was about to go live, the press heard about it. Monday morning, two senators had a press conference
Starting point is 00:51:57 where they declared that DARPA and the military were going to have a betting market on terrorism. And so there was a sudden burst of media coverage. And by the very next morning, the head of the military basically declared before the Senate that this project was dead. And there was nothing more to worry about. What do you think we collectively, you in particular, would know now
Starting point is 00:52:23 about that part of the world, let's say, if this market had been allowed to take root. Well, I think we would have gotten much earlier warning about the revolutions we just had. And if we would have had participants from the Middle East forecasting those markets, not only we would get advanced warning about which things might happen, but then how our actions could affect those. So, for example, the United States just came in on the side of the Libya rebels to support the Libya rebels against the Qaddafi regime. What's the chances that will actually help the situation as opposed to make it worse? But give me an example of what you consider among the hardest problems that a prediction market could potentially help solve. Not only who should we elect for president, but whether we should go to war here or whether we should begin this initiative or should we approve this reform
Starting point is 00:53:10 bill for medicine, et cetera. So that sounds very logical, very appealing. How realistic is it? Well, it depends on there being a set of customers who want this product. So, you know, if prediction markets have an Achilles heel, it's certainly the possibility that people don't really want accurate forecasts. Prediction markets put a price on accountability. If you're wrong, you pay. Simple as that. Just like the proposed law against the witches in Romania.
Starting point is 00:53:45 Maybe that's what we need more of. Here's Steve Levitt again. When there are big rewards to people who make predictions and get them right, and there's zero punishment for people who make bad predictions because they're immediately forgotten, then accountants would predict that that's a recipe for getting people to make predictions all the time. Because the incentives are all encouraging you to make predictions. Absolutely. If you get it right, there's an upside.
Starting point is 00:54:10 And if you get it wrong, there's almost no downside. Right. If the flip side were that if I make a false prediction, I'm immediately sent to prison for one year term, there would be almost no prediction. And all those football pundits and political pundits and financial pundits wouldn't be able to wriggle out of their bad calls, saying, my idea was right, but my timing was wrong. I mean, that's how everybody does it. That big storm the weatherman called but never showed up. Oh, it happened all right, he says, but two states over. Or how about all those predictions for the end of the world, the apocalypse, the rapture, all that? Well, they say, we prayed so hard that God decided to spare us. You remember back in May when an 89-year-old preacher named Harold Camping declared that
Starting point is 00:55:01 the earth would be destroyed at 5.59 p.m. on a Saturday, and only the true believers would survive. I remember it very well because my 10-year-old son was petrified. I tried telling him that camping was a kook, that anybody can pretty much say anything they want about the future. Didn't help. He couldn't get to sleep at night. And then the 21st came and went and he was psyched.
Starting point is 00:55:30 I knew it all along, Dad, he said. And then I asked him what he thought should happen to Harold Camping, the false doomsday prophet. Oh, that's easy, he said. Off with his head. My son is not a bloodthirsty type. But he's not a turkey either. Freakonomics Radio is produced by WNYC, APM, American Public Media, and Dubner Productions. Our producers include Elizabeth Giddens, Colin Campbell, Susie Lechtenberg, Chris Neary, and Diana Nguyen.
Starting point is 00:56:12 We had help from Ellen Horn and Peter Clowney. This episode was mixed by John DeLore. If you want more Freakonomics Radio, you can subscribe to our podcast on iTunes or go to Freakonomics.com where you'll find lots of radio, a blog, the books, and more.
