Planet Money - Don't hate the replicator, hate the game

Episode Date: February 27, 2026

The world of science has been stuck in an existential crisis over whether we actually know the things we thought we knew. Re-running an old study today doesn't always yield the same result. The same goes for re-enacting old experiments. Collectively, this is known as the “replication crisis.” Economist Abel Brodeur has come up with one way to help fix this crisis: he’s invented an internationally crowdsourced surveillance system, designed to keep social scientists honest. He calls it the “Replication Games.”

Further Listening:
- Fabricated data in research about honesty. You can't make this stuff up. Or, can you?
- The Experiment Experiment
- How Much Should We Trust Economics?

This episode was hosted by Mary Childs and Alexi Horowitz-Ghazi. It was produced by James Sneed and Emma Peaslee, with help from Willa Rubin. It was edited by Jess Jiang, fact-checked by Sam Yellowhorse Kesler, and engineered by Ko Takasugi-Czernowin. Alex Goldmark is Planet Money’s executive producer.

Find more Planet Money: Facebook / Instagram / TikTok / Our weekly Newsletter. Listen free at Apple Podcasts, Spotify, the NPR app or anywhere you get podcasts. Help support Planet Money and hear our bonus episodes by subscribing to Planet Money+ in Apple Podcasts or at plus.npr.org/planetmoney.

Transcript
Starting point is 00:00:00 This is Planet Money from NPR. Alexi Horowitz-Ghazi. Mary Childs. Yes, you and I took a little trip up to scenic Montreal, one of the jewels of French Canada, for a little Planet Money mission. Yes, we did. And even though it was a little bit sad
Starting point is 00:00:18 that that mission did not entail joining the maple harvest or, you know, like infiltrating a poutine cartel. Next time. Dare I say next time, it did have much bigger implications for anybody and everybody whose life is impacted by science, which I think is basically all of us.
Starting point is 00:00:36 I think that's right, yeah. We were there to meet a guy named Abel Brodeur. Abel is this very energetic economics professor in his late 30s at the University of Ottawa. And we found him bounding around the halls of this modernist school building in downtown Montreal. He was getting ready to host an event he's become sort of famous for, something called the replication games. It's getting exciting now. How are you feeling? I'm feeling good.
Starting point is 00:01:03 It's the beginning of the event, so this is the moment I'm full of energy and full of enthusiasm. In seven hours from now, it's going to be a different conversation. Abel is going to be tired in seven hours because at a replication game, he is running around between 16 teams of three to five people in a kind of hackathon. People will work all day to replicate recently published social science papers to reproduce the results and see if the findings hold up. Because ever since technology has made it easy to crunch data, we've been able to go back and check old research. And turns out it wasn't great.
Starting point is 00:01:38 Re-running an old study today a lot of the time does not yield the same result. The research no longer proves its conclusion. And the same thing often happens when we re-conduct whole experiments. Altogether, these problems have become known as the replication crisis. A lot of people across academia have been trying to fix this. So we can trust research, so we can actually know what we know. This event, the replication games, is part of Abel's attempt to help solve this crisis. The idea is to change norms through monitoring.
Starting point is 00:02:11 And just giving a small percentage, a small chance that we will monitor, can massively change the behavior of everyone. You know, change the way they behave, change the way they code, change the way they do research. So that's the goal. After a few minutes, we head into a big lecture hall where Abel takes center stage. All right, folks, we're going to get started. Welcome to the replication games. Thanks for being here in Montreal with us.
Starting point is 00:02:35 Let's get started. Today we have 16 papers that are being reproduced. A couple of small things... Around the room, dozens of social scientists are gazing up at a bell, looking a little bit nervous. Most of them have come from across Canada, and most of them are first-timers, who now have to undergo this kind of awkward initiation right.
Starting point is 00:02:54 I'm going to put the music, because I know you guys need, like, you know, a bit of motivation. but you need to do the body movement. Everybody has to do it. All right, does it sound good? So we do it? I need you to do... It's pretty easy.
Starting point is 00:03:08 Abel starts didactically clapping like an elder millennial camp counselor and his audience joins in. Guys, thank you so much for being here. I hope you enjoy it. It should be fun. And thanks everyone. Hello and welcome to Planet Money.
Starting point is 00:03:25 I'm Alexi Horowitz-Ghazi. And I'm Mary Childs. Over the past couple decades, the world of science has been stuck in an existential crisis. Over whether we know the things we think we know. It started in psychology, spread to medicine and economics. Now people across disciplines are trying to figure out how to solve it. Today on the show, the story of one economist,
Starting point is 00:03:46 how he set out to learn what exactly has broken in the way social scientists create new knowledge, and how he came up with his own daring and kind of wacky way to help fix it, by building an internationally crowdsourced surveillance system to keep social scientists honest. Okay, so the replication crisis has been a pretty big deal for almost 20 years at this point. We've covered it on Planet Money before. The story of how economist Abel Brodeur first encountered the problem and why he set out to help fix it begins back in 2011. Abel was getting his master's in economics, and he was writing a paper on whether smoking bans in restaurants and workplaces actually made people smoke less.
Starting point is 00:04:37 He collected this huge data set. I had like amazing data from the CDC, which is public. I had smoking prevalence at the county level. Abel says that all the established research at the time indicated that smoking bans were hugely effective, that they'd gotten lots of people to stop smoking. But when Abel crunched his numbers? I was finding absolutely no effect.
Starting point is 00:04:58 None. There was like nobody to stop smoking. I've played with the data for six months and I find nothing. And Abel was trying to make a name for himself in academia, which means getting his research published in an academic journal. And it's harder to get published if you find no effect, especially given that the existing literature did show an effect. So what Abel needed was something statistically significant. For the statistically uninitiated, significant means the result would be produced by chance less than 5% of the time.
Starting point is 00:05:29 So the probability that the result is just random is 5% or less. That is the cutoff for whether your findings count or not. There's this 95%, 5% cutoff that really matters. We're obsessed with these thresholds. So Abel kept tinkering with his dataset, changing his computer code to contort the data one way and then another, until eventually one day he found a way to analyze one subset of his data that gave him what he'd been looking for,
Starting point is 00:05:55 a result demonstrating that smoking bans had decreased smoking, and a result that was significant. He's like, there you go. I was so happy. I was in the library. I was like, sing it again. I was so happy. Finding a significant result meant that if his paper was published, he would get to put a little asterisk or star next to his results. And the more statistically significant the result, the more stars you got to claim. But Abel's happiness did not last long. Because the more he thought about how he'd gotten that significant result, the more it started to seem like it was working against the whole goal of social science, you know, to actually discover true new knowledge about human behavior.
Starting point is 00:06:35 For example, policymakers need to know whether smoking bans work to make sound policy decisions. But here he was torturing the data to match the preconceived hypothesis. He thought, this is so stupid. What am I doing? I'm writing a piece saying that smoking bans are decreasing smoking prevalence because I managed to find one that works. I was like, this is dumb. I'm doing something wrong. Abel ultimately decided not to use his tortured results.
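To make the trap concrete: here is a minimal simulation, ours and not Abel's, of what specification searching does. Every subgroup in the fake data has a true effect of exactly zero, yet hunting across twenty of them "finds" a significant result most of the time.

```python
# A toy p-hacking simulation (our illustration, not Abel's analysis).
# The true effect is zero everywhere, yet searching across subgroups
# turns up a "significant" result in most of the simulated studies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_subgroups, n_per_arm = 1000, 20, 50
false_hits = 0

for _ in range(n_sims):
    # Pure noise: no subgroup has any real treatment effect.
    treated = rng.normal(0, 1, size=(n_subgroups, n_per_arm))
    control = rng.normal(0, 1, size=(n_subgroups, n_per_arm))
    pvals = [stats.ttest_ind(t, c).pvalue for t, c in zip(treated, control)]
    if min(pvals) < 0.05:  # report only the "best" subgroup, as a p-hacker would
        false_hits += 1

# With 20 independent looks, expect roughly 1 - 0.95**20, about 64%.
print(f"Share of no-effect studies with a significant subgroup: {false_hits / n_sims:.2f}")
```

One honest test has a 5% false-positive rate; twenty quiet tries all but guarantee a star.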
Starting point is 00:07:02 He wrote up his paper showing that he'd found no effect, even if it meant his paper was less exciting. And at first, he thought what he'd done to his data might have just been a one-off mistake on his part. But then you start talking to other students, and people were like, oh, yeah, that's how you publish. Abel started to see that this was a problem of incentives. In order to advance their careers,
Starting point is 00:07:23 academics have to publish papers in peer-reviewed journals, and the journals want to publish work that's statistically significant and novel. These papers can win big prizes and define new research agendas for decades. But because of all that, people were doing what he had done, trimming and squeezing and coaxing the data towards significant results. And that can easily cross over into a kind of data manipulation called P-hacking, P as in probability. And Abel says it can happen almost subconsciously.
Starting point is 00:07:56 Because the project took like three, four years of back and forward between co-authors, discussion. Then six months later, you go back, you exclude again these other people, you do something different. And then over time, all these decisions, actually, when you look at it from the outside, it's like, this is crazy what you've done. To figure out how widespread this problem might be, Abel decided to research the research. He and a couple of his colleagues scraped the significance data from a bunch of the top academic journals, the distribution of stars that published researchers had racked up. And when they looked at the distribution, they found a noticeable hump just above that 5% significance threshold. Now, some of this could be because some people whose research only hit 6% didn't bother submitting.
Starting point is 00:08:39 But it could also be because some researchers were tweaking their data analysis to just barely get results that would be more likely to get published. But when Abel and his colleagues started submitting the research for publication, they got a resounding series of noes. Academic publishing seemed hesitant to open up an empirical reckoning. After a few years, they did manage to publish their paper in 2016. They called it Star Wars, the Empirics Strike Back. Do you get it? Oh, you definitely get it.
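The hump-hunting itself is simple to sketch. What follows is a toy caliper-style check of our own, with invented numbers; the actual Star Wars paper works with test statistics from tens of thousands of published results, and with far more care.

```python
# A toy caliper-style bunching check (illustrative only). Under honest
# reporting, narrow bins on either side of the 0.05 cutoff should hold
# roughly equal counts; excess mass on the significant side is the hump.
import numpy as np

def caliper_check(p_values, cutoff=0.05, width=0.01):
    p = np.asarray(p_values)
    just_below = int(np.sum((p >= cutoff - width) & (p < cutoff)))   # barely significant
    just_above = int(np.sum((p >= cutoff) & (p <= cutoff + width)))  # barely not
    return just_below, just_above

# Hypothetical p-values scraped from published tables:
reported = [0.012, 0.031, 0.044, 0.046, 0.047, 0.048, 0.049, 0.052]
below, above = caliper_check(reported)
print(f"just below 0.05: {below}, just above: {above}")  # 5 vs 1 here
```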
Starting point is 00:09:13 Thank you, Alexi. So Abel puts aside this whole idea of an empirical reckoning, and he moves on to other economic projects. He gets tenure, and eventually he learns that his little paper has become kind of a sleeper hit. It took a long time before I realized actually the paper was, like, well-known. Before people started talking to me at conferences, like, are you the Star Wars guy? That's a moment. Like, I needed someone senior to tell me, like, no, this is really important
Starting point is 00:09:37 what you're doing. There had been efforts to solve parts of the replication crisis. Some of the top journals had started asking their contributors to release replication packages with their papers. That's basically the data and code they'd used to find their results. And researchers were also starting to pre-register their hypotheses before actually doing the research. So that if the data didn't support it, they couldn't futz around and pretend like they'd been looking for something else all along. For his part, Abel wondered if there was anything he could do.
Starting point is 00:10:06 Like, not just study the problem, but actually help fix it. How do I change the incentives? How do I potentially have an impact on the norms? How people do research. The second I think about the norms, I think about, oh, it needs to be large-scale. Nobody's going to change their behavior if it's a small-scale thing. So it needs to be big. Journals do have peer review systems where they try to poke holes in research,
Starting point is 00:10:30 but they didn't always totally get under the hood to scrutinize all the code and data. So researchers weren't necessarily worried that their stuff would get checked. A nice analogy, I think, is imagine you go on a date. You might shave. You might take care of your body. You might take care of yourself. A bit of deodorant, you know, perfume, maybe, if it's your thing, if you're going to make an effort to look prettier than you are usually.
Starting point is 00:10:56 The other person fully understand that this is a nice version of you. We're fully aware of that, but I don't know about how much. And perhaps it's not. Or maybe you made a massive effort, and usually you're a disaster. You never clean nothing. So when you know you go to the apartment, it's like, oh, my goodness, this is your apartment. So research is a bit like this. The published research is the cleaned-up version.
Starting point is 00:11:21 So when I see a published paper, I know it's been, you know, it's beautiful, it looks nice. But there's an information asymmetry. I don't know how dirty it is, actually. Abel thought one thing that might help this problem was to make researchers care as much about the cleanliness of their data analysis as the significance of their results. And to do that, he'd have to go full-on Room Rater on people's published papers, to shine a fluorescent spotlight on the backrooms of their research. If you could take all of the data that somebody had gathered for a given paper and meticulously retrace their coding steps, you could see if it was possible to replicate their findings.
Starting point is 00:12:02 You could make sure there weren't any errors, conscious or unconscious, in what they'd done. But first, he'd have to get the code. People weren't in the habit then of publishing all their data and code. And when he emailed researchers asking, nobody responded. So he decided to create an official-seeming institution. It needs to be a big institution with a website, with tons of famous people on it. And when you send the email, people will be like, what the hell is this thing? I need to respond.
Starting point is 00:12:31 It's legit. So in 2022, he creates a website for a thing he starts calling the Institute for Replication. A friend of mine, his wife, did the logo for free, like a design, like, you know what I mean, like just bare bones. He recruits some serious, famous economists for the board to put on his legit-looking website. And pretty soon, he does start to get responses to some of his emails. He's able to get some data sets and coding packages. And he convinces some colleagues and junior researchers to start doing some replications one by one in exchange for a co-author credit on one big paper. So, Abel can get the data and the code.
Starting point is 00:13:07 But there's still a second problem, which was the question of scale. Replicating one paper at a time was not going to do much to change the system. What he needed was to create the sense within the academic community that anybody's work could be checked at any time. It's like an IRS for the ivory tower. So now I thought, okay, we need to mass-reproduce journals.
Starting point is 00:13:29 So then I was like, okay, I need to get maybe a few hundred replications or reproductions per year. And I'm thinking, how do you do that? The answer, Abel says, came to him kind of by accident. Around the time he got his Potemkin website up and running, he got an unrelated invitation to Oslo to give a couple of seminars. He was planning his trip about a month ahead of time
Starting point is 00:13:49 and he noticed that he had seminars on a Wednesday and on a Friday. And I thought, like, what the hell do I, am I going to do on Thursday? Like, I've never been to Oslo. I'm sure it's pretty and nice. But a full day, like, I'm going to walk around and then I'm going to have like six, eight hours just to relax. So I just emailed the person who invited me. And I said, could we just like,
Starting point is 00:14:09 do a small workshop. It would just be like 10, maybe 15 people. Abel posted about it on social media. You can come to Oslo. It should be fun. If you come, you're going to get co-altorship to a meta paper. We're going to reproduce papers. Let's have fun.
Starting point is 00:14:21 And then, I don't know, like 70, 80 people ended up registering really fast. I closed registration because I have no money. We don't want to have food. I didn't tell the guy who would be 80. I said it would be 10. So Abel is sitting there a couple months before the conference with this sudden, unexpected surge of interest and no plan. I have 80 people. Some coming from Ireland, others come from Sweden, others coming from France. Like, what do I do with these people?
Starting point is 00:14:47 He starts collecting papers that people could replicate, and he puts everyone into teams by their field, health economics, development economics. The first time I had no idea what was going on. I was super stressed. He had no idea what was going to happen, what they would find. Abel heads to Oslo and convenes the first ever replication game in October of 2022. And when he checks in on one of the first teams of replicators working on the first paper: I go talk to them and they're like, there's a problem. Like, there's tons of duplicates. I'm like, what? It's like, yeah, you look at the data set. There's a ton of people with the same age. And then I come back later on and it's like, okay, 75% of one data set. Everybody's 60 years old, all women, all living in the same village, all doing the same thing.
Starting point is 00:15:30 It's the same. It's duplicates. And it's a big part about inequality. If everybody is the same, there's no inequality. and that was driving some of the mechanism. The underlying data, upon which this entire paper rested, had been merged improperly, like a big copy and paste error. To Abel, this was disconcerting. And I was like, oh boy, that's the first paper. That's the first game. What did I create?
Starting point is 00:15:56 It's going to be like this all the time, people finding crazy mistakes. And did I just open a can of worms that actually most papers are just like terrible, full of crazy coding errors? Abel was a little afraid he might be about to discover that all papers were full of worms and that science wasn't real. But luckily, by the end of the day, like, many teams had, like, a good day, everything was clean and so on. And it was like, it's like not terrible.
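For what it's worth, the bad-merge disaster that first team caught is the kind of thing a few lines of checking code will surface instantly. A minimal sketch in pandas; the file and column names here are made up, not from the actual paper:

```python
# A minimal duplicate audit (illustrative; names are hypothetical).
import pandas as pd

df = pd.read_csv("merged_survey.csv")  # hypothetical merged dataset

# Share of rows that are exact copies of an earlier row:
print(f"{df.duplicated().mean():.0%} of rows are exact duplicates")

# Rows that repeat on identifiers that should be unique per person:
dups = df[df.duplicated(subset=["person_id", "village"], keep=False)]
print(dups.sort_values(["person_id", "village"]).head())
```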
Starting point is 00:16:21 He could relax. It turns out most of the papers were not terrible. And even better, with that first event in Oslo, Abel had found a way to crowdsource this massive academic auditing project, essentially for free. If he could host enough replication games every year, he just might be able to scare the social sciences into acting right. But what actually happens on the ground during these things? After the break, we enter the 51st replication game. So we are at a replication game in real life in Montreal. Abel Brodeur says that the game part is a little bit of a branding exercise.
Starting point is 00:17:08 There are no winners or prizes. It's more like an all-day hackathon. The teams are mostly economists with a few groups of psychologists, and they've already chosen the papers they'll focus on. Using just what they have in the replication package, they will have seven hours to check the code, examine the decisions their paper's authors made, and see if the results reproduce. And then they'll report on whatever they find, so it'll be out there on the record, whether that's a nothing burger or a bombshell. After everyone claps their rendition of We Will Replicate You, the researchers start
Starting point is 00:17:43 streaming out of the lecture hall, and we run after them. Jolene. Did I talk to you for a sec? Hey. I'm Alexi. Hi, Alexi. Just set the scene for me. So we just finished clapping a cheesy opening song, and we're about to split up into rooms.
Starting point is 00:18:00 The groups are scattering into classrooms across the building to start digging into their papers. Economics PhD student Jolene Hunt and her team are looking at a paper about education. They're all education economists. And so Jolene has sort of a pedagogical view of the day. In PhDs, we often don't get a chance to actually work together. You're usually just kind of on your own in your silo and then you talk to each other when you're having problems. But it'll be nice to actually work together
Starting point is 00:18:25 and see if my friends are actually any good at their jobs. Rolling up their sleeves, getting down to the actual coding. Because they're only going to have seven hours, each group has a little list of the things they've decided they're going to try to get through today. There's one group led by a guy named Thibaut Dupre, who was sitting alert and ready to unpack a paper about pensions in different countries. Essentially, the paper focuses on 10-something countries,
Starting point is 00:18:49 but then the data set seems to have a few more countries in there. So why some countries were included, others were not. What if you drop a few countries out of the data sets? Maybe there's something to be explored there. And we wanted to understand the stakes for the day. You know, why people would attend this event to do a full day of, like, manual economic labor for no dollars. So we asked them.
Starting point is 00:19:12 What are you doing here today? Well, we're trying to see if you can replicate the results from a paper that took a look into the effects of negotiation. We started with a group in the lecture hall huddled around their laptops. Freyaul Lassoued is a researcher at the University of Saskatchewan, and she's in a group of economists focused on agriculture with Chichia from the University of Ottawa. You want to find that the paper
Starting point is 00:19:45 checks out? Yes, you can think like that. Yeah. Okay. In terms of your personal incentives, would it be cooler to find, like, oh no, this paper's messed up? Freyaul starts laughing, seemingly at the premise of the question. You're laughing so hard. Why? That's mean. I don't know, like how to answer it.
Starting point is 00:20:05 It's be bad for Diego and Juan here. Those are the authors of the paper. Other things. No. No. You just have sympathy for them. Yeah, because we've been, we're all been in their shoes. Okay, fair.
Starting point is 00:20:18 But we go up to another group, and they're kind of like, duh. Yeah, we are trying to find something. That's Felix Fosu, a postdoc at Queen's University. His group is digging into a paper about cartels in Mexico. I tell him what the other researchers said, that maybe it isn't very nice to want to find something terribly wrong in someone else's research. But it seems like, to Felix, I have now misunderstood things in the opposite direction. No, we definitely want to find something. Why?
Starting point is 00:20:52 I think replication is something that we have to take very seriously in economics. We need to make sure that our results are indeed claiming what they claim to be. We need to know what works and what does not work. Now, regardless of their specific goals, the actual work of replication is divided into two main phases. Phase one is the same for every team, pure and simple replication. They will all check the paper's code, the programmed instructions that take some raw data
Starting point is 00:21:22 and put it into a bunch of tables that comprise the foundations for the paper's conclusions. So now each team takes the original code, copies and pastes it, and basically hits Enter to see if it runs. And one type of mistake that they might find is if the code is really broken. They might find that when they push the button, the code just doesn't run. The computer just says error. Or another kind of mistake they might find:
Starting point is 00:21:46 Maybe the code runs, great, but it spits out a different answer than what the authors wrote. Not so great. Or maybe the raw data is messed up in some way, like cells merged or transposed or erased or accidentally filled down the whole column. So we ask the agriculture team
Starting point is 00:22:01 to show us exactly what they are doing. So I can't code. I don't know what I'm looking at. What am I looking at? Well, actually, it's kind of nothing here because I just started. This is Chichia again. The paper her team picked by Diego and Juan Pablo is about the price of eggs at big firms versus small firms. How much pricing control they have.
Starting point is 00:22:22 I look at her laptop over her shoulder. So what you can see here is the variables they have. We have the firms. We have the price. We have the day, months, and year. Now Chichia pulls out her iPad to scroll through the published paper. So we're going to firstly check. whether we can perfectly reproduce all the numbers and using the original data and codes.
Starting point is 00:22:44 If I can run part of this, maybe you can see it. Okay, she's pushing a little blue arrow, a little play button. So basically if I run this code, you'll see the results. Oh, a little box appeared in a different window. Yes, so if you check the numbers... minus 18.11432, and I'm looking at the published version, it says, minus 18.114 star star. So they're basically exactly the same.
Starting point is 00:23:13 It's the same. Yeah, it's the same. That's good, you know. So we have a win. Yeah, yes, one. And we have more to check. A lot more. But we got one.
Starting point is 00:23:23 That's great. Chichia will keep plugging in all the data and checking the results. Though so far, it looks like the paper is checking out. And if the paper passes the whole first phase, if the code does spit out all the answers that the author said it would, then the replicators move on to phase two, robustness checks. For robustness check, we kind of like change some parts of the model to see whether the original conclusion still kind of makes sense.
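"Change some parts of the model" often starts with the simplest move there is: re-estimate the result once per group, leaving that group out, and see whether the headline coefficient survives. A generic sketch, with a hypothetical dataset and variable names; replication teams do the equivalent in whatever language the original authors used.

```python
# A generic leave-one-out robustness check (illustrative; the dataset
# and variable names are hypothetical). Re-fit the model dropping one
# group at a time and watch the key coefficient.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")  # hypothetical data with group, outcome, treatment columns

for g in sorted(df["group"].unique()):
    fit = smf.ols("outcome ~ treatment", data=df[df["group"] != g]).fit()
    coef, pval = fit.params["treatment"], fit.pvalues["treatment"]
    flag = "still significant" if pval < 0.05 else "NOT significant"
    print(f"dropping {g}: coef={coef:.3f}, p={pval:.3f} ({flag})")
```

A result that dies whenever one particular group is dropped is leaning entirely on that group, which is exactly the kind of fragility phase two is built to catch.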
Starting point is 00:23:55 This phase is less objective and requires more context and thought. It requires the economists to consider the questions that the paper authors didn't think of or didn't write about. The decisions the authors made and the decisions they could have made but didn't. It's like trying to see the negative space in and around the paper. The kind of things they might find in this phase, you know, did the authors say that this dataset represents something it doesn't? Did they use an appropriate dataset? And did they use that data in a way that made sense? Did they include or exclude certain specifications or factors in order to have a result that
Starting point is 00:24:26 looked exciting? There are infinite potential choices that researchers make or don't make. And the replicators have such limited time. So they're not going to be able to consider and analyze everything. They're just going to get through as much as they can. And as the hours start to tick by, it becomes clear that most teams are not turning up major issues. Until mid-afternoon, when we check in with this one group looking at a paper about government policies. The basic premise is when people trust the government, do they tend to comply with policy more?
Starting point is 00:24:59 This is Simon Prevo. He's an econ master's student and a public sector researcher. The paper found that when people trust in government, they comply with policies more readily. So those policies cost the government less money. And Simon and his teammates are now trying to unravel a mystery. Because when they went to look at the raw data that underlies the paper's findings, it looked a little funny. This is Scott Morier, another econ master's student on the team. There was a folder called raw for the raw data, but the files were all labeled clean. So we were a bit confused as to how, you know, it's counterintuitive, right? So Florian downloaded the data straight from the source and followed the instructions to create the one data set.
Starting point is 00:25:38 They recreated what should be the same dataset following the instructions that the authors left. They ran the code. And then that's when we started getting the errors because variables were missing. And then as we kept going through, we kept finding more variables that were being used in the regression, but weren't necessarily included in what is supposedly meant to be the raw data set. Some variables are missing from the raw dataset. The authors seem to have used data in their analysis that they did not account for. Not good.
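That specific failure, analysis code demanding variables the "raw" files never contained, is also mechanically checkable. A sketch with invented names; a real audit would pull the variable list out of the actual regression scripts rather than hard-coding it.

```python
# A minimal completeness check (illustrative; all names invented):
# does every variable the analysis uses actually exist in the raw data?
import pandas as pd

raw = pd.read_csv("raw/survey.csv")  # hypothetical "raw" file from a replication package
vars_used = {"trust_gov", "complied", "income", "region"}  # would be parsed from the analysis scripts

missing = vars_used - set(raw.columns)
if missing:
    print(f"used in the analysis but absent from the raw data: {sorted(missing)}")
else:
    print("all analysis variables present in the raw data")
```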
Starting point is 00:26:07 And then we visited the group looking at that paper about cartel behavior in Mexico. That group has found something too. So in this paper, they look at the presence of different cartels. They tell us, the paper looks at 20 cartels and data about what types of crimes were happening and when. To see if cartels changed the types of crime they did after the government ramped up a big war on drugs. What we've found so far is that if you exclude one of the... the cartels, then the results become insignificant. So it's just the one cartel making the results?
Starting point is 00:26:39 One cartel making the results. So if you remove only one, then the result collapsed. Oh, no, you found something. Yeah. They found something in the first test they tried. Is that luck? Would you call that luck? No, I think it's something that we thought about.
Starting point is 00:26:54 That's why we placed it number one on the list. We thought it's a good place to search. So partly luck, but partly because we talked and thought about it carefully. That sounds like not luck. They're going to keep investigating, and depending on what they find, this paper is maybe not passing this phase, the robustness check phase. Can you draw a big sweeping conclusion about the effectiveness of war on drugs from a change
Starting point is 00:27:18 in just one cartel? They suspect this paper will not hold up. Over lunch, the cartel team starts puzzling through, like, how does this sort of thing even happen? You have to be honest. For sure, when you do these kinds of papers, you do these kinds of things, right? You check whether, when you have this, you know, you do this type of robustness checks. David Benatia, a professor on the team, says this is a robustness check that
Starting point is 00:27:43 he would have tried if he had been the author. At the end of the day, our researchers limped back into the auditorium to present what they'd all found. So the way we'd like to finish is to give each team about one minute to tell us how your day went, the different challenges you face. Maybe we can start from the beginning, move around. We didn't find anything too major. There were a lot of missing variables and attrition. Did it reproduce well? Like, all the code ran, but...
Starting point is 00:28:13 Everything ran fine. We tried to poke holes in it, but we couldn't really do it. For the 71 replicators in the Montreal game, 14 teams got to uphold science by double-checking some published work. They spent a day coding with their friends and peers, learned some new coding hacks, and new ways to make choices in research. And they'll get a little authorship credit on a meta paper in a real journal.
Starting point is 00:28:35 The other two teams, the group who discovered the missing numbers, the cartels group, they've gotten like a toxic golden ticket. Now they'll get to write their report, polite and formal, but nonetheless, kind of a bombshell, saying just how flawed the research is. Maybe that makes a splash and everyone thinks they're brilliant. Or maybe it makes a splash and everyone hates them. Next, Abel will write an email to the authors, a somewhat standardized note saying, hey, here's who we are and what we do.
Starting point is 00:29:06 We found some mistakes in your paper. Would you like to respond? He does not assume nefarious intentions, and the authors get an opportunity to try to fix the problem and prepare their formal response before anything goes public. And because Abel handles it from his position at the Institute for Replication, it doesn't feel so personal. And the replicators have a little bit of insulation. We asked Felix from the cartels group what this might mean for him as a more junior person, a person earlier in his career. It's kind of throwing rocks towards the top of the profession.
Starting point is 00:29:39 He'd wanted to find something, and now he has. I think it's a good work that we are doing, but what the implications are, I don't know. Yeah, I know. So after a few months, Abel sends his neutral-toned, official email to the authors of the paper that Felix and his team had replicated in Montreal, saying that the code had worked, but that they found the results don't hold up. And for the authors of that paper, getting that email? When we opened that email, we were actually ecstatic, because we actually read: your paper replicates.
Starting point is 00:30:16 This is Giacomo Battiston, a researcher at the Rockwool Foundation in Berlin, and one of the four co-authors of the paper. He says they were thrilled to have their coding results public. And when it came to the bigger problem, the fact that their results had fallen apart when the replicators removed that one cartel. We were not particularly worried about the content because it was kind of self-evident that this was not really challenging. Not really challenging their findings because they think the replicators misunderstood the basic hypothesis of their study. They say they started with this idea that there was this one big new cartel in Mexico, Los Zetas, and it had been doing a lot of crimes, generating a lot of data points. Here's another author, Marco Le Moglie, a researcher at Bocconi University in Milan.
Starting point is 00:31:04 When we started to think about this project, we actually had in mind the specific cartel of Los Zetas. They say they set out to investigate if the cartel Los Zetas had changed the types of crimes they did after the war on drugs. And their paper succeeded at proving that. What the Montreal replicators did, in the opinion of the paper authors, was to remove the main part of the data set and then say the conclusion was broken. You can do that, but why would you? To be blunt, it doesn't make any sense. That is Paolo Pinotti, a professor also at Bocconi University. He said it was like doing a study on the effect of spreadsheets on productivity
Starting point is 00:31:41 and then saying, oh, but the results don't hold up if you exclude Microsoft Excel. We looked at their paper. And to be fair to the replicators, the original paper does not say explicitly, hey, it's just Los Zetas we're focusing on. The data from Los Zetas is lumped in with several other new cartels. So if the paper authors meant to study the behavior of just Los Zetas, that was never quite spelled out. Mary, when we first rocked up to the replication games back in May, I think we were both excited at the idea that we might watch some junior economists uncover some major problem with a published paper in real time. But Abel had a different take when we asked him about the problems that the teams there had uncovered.
Starting point is 00:32:26 Like the team, for example, that had found issues in the government trust paper. That seems like success. Success. It depends what you define as success. Well, the process working as it's supposed to? I mean, in a world in which science works, I think this should have been picked up before it's published, cited and disseminated. So I don't think it's a success. That's fair.
Starting point is 00:32:49 These papers that they are replicating have been published. Meaning they got past journal referees, professional economists who were supposed to be gatekeeping the quality of what they publish. Some of the top journals do check that the code runs, they press play. But in the government trust case, the journal referees apparently didn't catch that numbers were missing. That when the paper said, oh, the documentation is in the replication package, it was pointing to nothing. The journal declined to comment, though they said they have a robust process to investigate concerns. To me, this is a failure of this system.
Starting point is 00:33:24 I just think that the rate of failures is higher than what a lot of people think. Yeah. And it shouldn't happen that often. In every replication game so far, they have found something. Though not yet any career-ending fraud. It's more like major data or coding errors or robustness fails. So the broader system is still broken, even after putting on more than 50 games and replicating about 300 papers. Still, there are signs.
Starting point is 00:33:52 that the games are having an effect. Several replication gamers told us their experience here will change how they do their research, because they know that their papers, too, might someday end up under Abel's spotlight. Abel says the more games he can put on, the more the rest of the academic world will start to shift. Because the evidence shows that people don't actually change their behavior based on the severity of the potential punishment, like losing their job or public shaming or whatever. They change behavior based on the odds of enforcement, the odds of actually getting caught. Just the idea that someone might walk through their apartment one day, that's enough of a threat to keep it clean.
Starting point is 00:34:35 Hey listeners, what are you doing on the evening of Monday, April 6th? Are you free? Because if you are, I think you should come to the 92nd Street Y to hang out with me and some of my friends. It is the debut stop on our 12-city book tour to celebrate the publication of our first-ever book, Planet Money, a guide to the economic forces that shape your life. Every stop on this tour will be unique with different hosts and guests, and if you get a ticket, you can get a tour-exclusive tote bag with your purchase while supplies last.
Starting point is 00:35:09 So at the 92nd Street Y on Monday, April 6th, it'll be me, Amanda Aronczyk, Darian Woods, book author Alex Mayyasi, and the economist Emily Oster, who is most famous, I think, for letting pregnant women know that they can actually drink coffee. So please come and bring your very best economic questions for us. We can't wait to hang out. Find the show nearest you at the link in show notes or go to planetmoneybook.com. And thank you. If you want to hear more about the replication crisis, we've done a few
Starting point is 00:35:34 episodes about it and the efforts to fix it. We'll link to those in the show notes. If you want to support our work, you can donate at npr.org slash donate. And thank you. This episode was produced by Emma Peaslee and James Sneed with help from Willa Rubin. It was edited by Jess Jiang, fact-checked by Sam Yellowhorse Kesler, and engineered by Ko Takasugi-Czernowin. Alex Goldmark is our executive producer. I'm Alexi Horowitz-Ghazi. And I'm Mary Childs. This is NPR. Thanks for listening.
