Hidden Brain - Encore of Episode 32: The Scientific Process

Episode Date: December 20, 2016

There is a replication "crisis" in psychology: many findings simply do not replicate. Some critics take this as an indictment of the entire field — perhaps the best journals are only interested in publishing the "sexiest" findings, or universities are pressuring their faculty to publish more. But this week on Hidden Brain, we take a closer look at the so-called crisis. While there certainly have been cases of bad science, and even fraudulent data, there are also lots of other reasons why perfectly good studies might not replicate. We'll look at a seminal study about stereotypes, Asian women, and math tests.

Transcript
Starting point is 00:00:00 So here's the deal. Researchers recently tried to replicate a hundred published experiments in psychology. The Center for Open Science recruited colleagues from around the world to try and replicate the studies. And found that most of them could not be reproduced with the same results. Welcome to Hidden Brain. I'm Shankar Vedantam. Today we're going to talk about what is being called a replication crisis in science. The replicators in this recent study failed to get the same findings as the original experiments. From cancer medicine to psychology, researchers are finding that many claims made in scientific studies
Starting point is 00:00:36 fail to hold up when those studies are repeated by an independent group. Later in this episode, we're going to explore one provocative study that looked at stereotypes about Asians, women, and math tests, and explain what happened when researchers tried to reproduce the finding. We're going to use this story to explore a deeper question. What do scientists really mean when they talk about the truth? Let's take a moment to thank and share a message from our sponsor LearnVest. LearnVest is an online financial advice company focused on empowering people nationwide to make good decisions with their money. Studies show that writing down your goals makes you 49% more likely to achieve them.
Starting point is 00:01:21 That's why when you work with LearnVest, you tell them what you want to accomplish, and they create a customized financial plan to help you get there. Plus, they pair you with a financial planner to help keep you on track. To see a sample plan and get a $50 credit, go to LearnVest.com slash brain. The crisis has actually been a long time coming.
Starting point is 00:01:43 In 2011, for example, Dutch researchers claimed that broken sidewalks and courage racism. They published their findings in one of the most prestigious academic journals, Science Magazine. A couple of years later, another article in Science showed that when a gay person shows up at a stranger's door and speaks openly about what it's like to be gay, this has an extraordinary effect. It was the personal connection between the gay person who they were trying to show, you know, they're in person. People who are against gay marriage change their minds after these emotional encounters.
Starting point is 00:02:19 It was the combination of, you know, contact with a minority coupled with a discussion of issues pertinent. The results were written up in the New York Times, the Wall Street Journal, the Washington Post. They were featured on public radio programs such as the Clip Your Hearing on Science Friday. Finally, in 2009, researchers claimed that by lingualism, the ability to speak more than one language is better for your brain. All these claims had serious problems. The Dutch claim was based on fabricated data.
Starting point is 00:02:50 One author of the gay marriage claim asked for the paper to be withdrawn after concerns were raised about fraud. Both claims were retracted. Might be true? Might not. We don't know, because it turns out the researchers made up the data. The bilingual advantage paper wasn't fabricated, but it was missing important context. The researchers had conducted four experiments. Three failed to show that bilingualism was better for
Starting point is 00:03:16 the brain. Only one experiment showed a benefit. It was the only one that was published. Angela DeBrown was on the team that worked on the bilingual Advantage study. It is troubling because we like to believe that what we see is actually it's truth. But if it's only half of the results we find and we're in fact hiding the other half of the results, then we will never really find out what's going on.
Starting point is 00:03:40 At the University of Virginia, psychologist Brian Nosec decided something had to be done. Brian felt the problem was that too many researchers and too many scientific journals were focusing on publishing new and unusual findings. To a few was spending time cross-checking earlier work to make sure it was solid. One of the key factors of science is that a claim becomes a credible claim by being reproducible, that someone else can take the same approach, the same protocol, the same procedure, do it again themselves, and obtain a similar result.
Starting point is 00:04:15 Brian launched an effort to reproduce dozens of studies in psychology. He published a report in 2015. We found that we were able to reproduce the original results in less than half of the cases across five different criteria of evaluating whether a replication was successful or not. Over the last year, there have been many debates about what this means. Some critics say it proves that most studies are worthless. At many universities, researchers feel it's their integrity, not just as scientific conclusions that are being called into question. At Harvard University, psychologists Dan Gilbert recently
Starting point is 00:04:52 published a paper calling Brian's conclusions into question. So I think we just have to use our heads to figure out which kinds of things we expect to replicate directly, in which kinds of things we would only expect a conceptual replication. And we need to calm down when we don't see direct replication and ask whether we really should have expected it all. To unpack all of this, let's take a detailed look at one study and what happened when researchers tried to replicate it. I think the story reveals many truths about the ongoing controversy. many truths about the ongoing controversy. When there were graduate students at Harvard, Todd Petinsky and his friend Margaret Shee often went to restaurants together.
Starting point is 00:05:34 They went for the food, but they also spent a lot of time observing human behavior. We often, after class, would go to the cheesecake factory, and she would order strawberry shortcake. I would typically order a salad, and the number of times that the salad was delivered to her, and the strawberry shortcake was delivered to me. She also likes regular coke, and I'm a diabetic, so I drink Diet Coke, and without fail, the Diet Coke would go to her, and the regular coke would go to me. The waiters were stereotyping Todd and Margaret. The guy was probably ordering the less healthy stuff.
Starting point is 00:06:11 The woman was ordering salads and diet drinks. Todd and Margaret knew there was lots of research into the effects of such stereotypes. Now getting a dish you have in order is one thing, but there are more serious consequences. Stereotypes can be hurtful, they can affect performance. But as Todd and Margaret observed the waiters, they realized something was missing in the research. The previous studies had focused on the negative consequences of stereotypes. Could stereotypes also work in a positive fashion?
Starting point is 00:06:42 We thought if we really want to understand how stereotypes operate in the world, we can't simply look at half of it. The young researchers brainstormed how they might study the other half of the equation. The answer came to them as they were, yeah, eating together. We were sitting in Harvard Square over ice cream and we said what we need is a group where the stereotypes go in very different directions. They wanted to study a situation where stereotypes could have both positive and negative effects. And Margotchy happens to be an Asian American and a woman and we were started talking about math identities
Starting point is 00:07:18 and we kept going back and forth and back and forth and then literally at the same moment we said, well why don't we study Asian women and math? The experiment the design was ingenious and simple. There are negative stereotypes about women doing math and positive stereotypes about Asians and math. So what happens when you give a math test to women When you give a math test to women, who are Asian? We hypothesize that when you make different identities salient, you should expect different stereotypes to be applied. Todd and Margaret figure that if they reminded Asian women about their gender, they would
Starting point is 00:07:55 see the negative stereotype at work. But what would happen if they subtly reminded the volunteers about their Asian identity? The researchers recruited Asian women as volunteers and asked some of them to identify their gender on a form before taking a math test. Earlier research had shown that when you make gender salient in this way, this triggers the negative stereotype about women and math. Todd and Margaret reminded other volunteers, selected at random, about their Asian heritage. They wanted to make these volunteers remember the stereotype about Asians being good at math.
Starting point is 00:08:30 After all the volunteers finished the tests, the researchers analyzed their performance. Todd was working down the hall from Margaret one day when he heard her call out to him. She just shouted, holy cow, it worked. So I just sort of ran down there and we started looking at the output together. The study found that when the volunteers were reminded that they were women, they did worse on the math test. When they were reminded that they were Asian, they did better. Same women, same math test.
Starting point is 00:08:59 Negative stereotype, negative result. Positive stereotype, positive result. The study was an instant sensation. Psychologist Brian Nozek. This is one of my favorite effects in psychological science. Something that seems like it shouldn't be flexible, how well we perform in math, is flexible as a function of the identities that we have in mind and stereotypes associated with those identities. Asians being good at math, women being not as good at math. The study quickly became a staple of college textbooks. It says psychologist Carolyn Gibson. It is a pretty amazing finding and I thought about that study for the first time
Starting point is 00:09:36 as an undergrad. It's been used as an example in social psychology courses for four years since it was published in 1999 and it's used as a good example for for stereotype threat and stereotype boost. But from a scientific perspective there was one big problem. It had never been replicated exactly. Somebody had never followed their their steps that they followed and replicated their results but it's been used to support further studies many times over the past 15 years. Brian Nosek agreed, someone needed to replicate the original study. He was pure heading a mammoth effort to reproduce dozens of studies in psychology.
Starting point is 00:10:20 Along with a panel of reviewers, he selected this study for replication and asked Carolyn Gibson at Georgia Southern University to conduct it. Brian wanted the replication to closely match the conditions of the original study. If you don't do that, you're really conducting two different studies. After launching the replication, he had second thoughts about its location in the south. And the reviewers thought this looks like a case where the location might matter. Asians in the southern US might be a more distinct minority than Asians in the northeast or in the west. And so we recruited a second team to do a replication simultaneously at UC Berkeley in the West Coast University, where Asians are much more prominent members of the community. The team in Berkeley was headed by Alice Moon.
Starting point is 00:11:13 Alice was a fan of the original paper. When I heard about it, I just thought it was like one of the very cool demonstrations in social psychology. And so that's why I always liked this paper. She followed the protocol of the original Harvard experiment. She recruited Asian women, reminded them of the female side of their identity, or the Asian side of their identity, and then gave them a math test. So, what happened? When we compared just as the original paper did, when we compared the participants who were in
Starting point is 00:11:46 the Asian identity salient condition with the participants in the female identities salient condition, we found that there was no difference in their math performance. The celebrated study failed to be replicated. When Brian knows that he can announce the finding about this and dozens of other studies that could not be replicated, it caused an uproar. Newspaper articles called it a crisis. Critics called accusations about fraud and scientific misconduct. In a 6,000-word cover story, the conservative magazine Weekly Standard said that liberals
Starting point is 00:12:25 had been making up research into how stereotypes affect women and people of color. The Berkeley study however was not the only replication of Todd Petinsky in Margaret Shee's paper. Remember how Brian had two groups conduct applications? I asked Carolyn Gibson at Georgia Southern University what she found when she ran the experiment on Asian women and math. When primed with Asian identity, Asian females did better on a math test compared to those who had been primed with their female identity and then those primed with their female identity
Starting point is 00:13:04 did significantly worse. Carolyn has no doubt about the meaning of what she found. I believe that it further supports the original finding and that it gives even more robust evidence to this idea, mostly because we followed the same method as the original study and because we collected more participants. And so we have a more powerful study. At Berkeley, Alice is unsure. I do believe that stereotypes in general do have effects on our lives.
Starting point is 00:13:36 But in terms of this particular finding about whether stereotypes can facilitate people's academic performance, I guess it has made me question whether or not that finding is true. Okay, so which is it? Should we trust the results of the Berkeley study and say that Todd Patenske and Margaret Shee's finding was disproved, or should we trust the Georgia study and say the finding was confirmed? What happens when scientific studies disagree with one another? The popular narrative of the replication crisis suggests that scientists are like dueling gladiators.
Starting point is 00:14:11 If two scientists come up with different findings, it must mean one of them is wrong or worse, one of them must have faked her data. When we come back, we'll take a look at why this idea misunderstands how science is supposed to work. Our statistical techniques are probabilistic and not definitive. And so we absolutely need replications. But replications in our current academic climate are also serving the purpose of trying
Starting point is 00:14:38 to vet out academic fraud and are serving as a detection technique. And those two are very different missions for replications. Stay with us. Support for NPR comes from Eli Lilly and Company. For 140 years, Lilly has united caring with discovery to make life better for people around the world. Today, they're working to discover
Starting point is 00:15:02 a life-changing medicines in the areas of diabetes, cancer, autoimmune diseases, and Alzheimer's disease among others. Learn how the people of Lili turn inspiration into action at lilyforbetter.com. Support also comes from the Amazon original series, The Man in the High Castle, which imagines a world where the Allies lost World War II, and America is ruled by Nazi Germany and imperialist Japan. But revelations in secret prophetic films prove our future belongs to those who change it. Based on the award-winning book by Philip K. Dick, executive produced by Ridley Scott
Starting point is 00:15:41 and winner of two Emmy Awards, streamed the new season now on Amazon Prime Video. This is Hidden Brain, I'm Shankar Vedantam. We're taking a look today at how science works and the so-called replication crisis in the social sciences. As I listen to the news reports about the controversy, I found myself drawing an analogy with my own profession. Journalism.
Starting point is 00:16:04 Here's what I mean. A few years ago, a reporter for The New York Times was caught fabricating stories. Instead of traveling to various locations and interviewing people, he simply made stuff up. The newspaper went back and re-reported the story Jason Blair had written. When the facts didn't match, the reporter was fired. Imagine for a second what would happen if we re-reported every story by every reporter at the New York Times. Even when reporters are doing a perfectly good job, the older new stories might not match. A source might not see exactly the same thing again. Sometimes
Starting point is 00:16:43 if the circumstances have changed, a source might say something completely different. So when two reporters don't produce the same story, it could be that one of them is making stuff up. But much more likely, is that both of them are right. Now, I know what you're thinking. Journalism is storytelling, science is about data. I know what you're thinking. Journalism is storytelling, science is about data. But let's look closely at what happened in the replications that Carolyn Gibson and Alice Moon did of Tarpitansky study. In the original study, women administered the experiment. In Georgia, the facilitator was also female. But in Berkeley, where the replication failed, both male and female facilitators administered the study. Could that have made a difference?
Starting point is 00:17:28 Let's be clear. So it was not an exact replication. So here's an example. It mentions clearly in the paper that, and I don't know whether this factor is important or not, that in one study, the experimenter gender were males, and another study, the experimenter gender were females. I have no idea whether that's a factor that could explain the difference between the two studies. And so, let's be clear about what, it's not an exact replication. This is Eric Bradlow from the University of Pennsylvania.
Starting point is 00:17:59 He's eminently qualified to talk about this stuff. Spent four years here at Wharton studying statistics and mathematics. Went on to get my PhD in statistics. And for the last 20 years, I've been applying statistical methods to lots of problems, but I consider myself a mathematical social scientist. Eric believes that requiring studies to achieve statistical replication, to match more or less perfectly, before you conclude that either is true, is like requiring two reporters to cover a basketball game and come back with nearly identical stories.
Starting point is 00:18:30 Exact replication is one of those mythological ivory tower things that doesn't exist. What we really need to think about is if the study doesn't replicate, why doesn't it replicate, and even if it doesn't replicate exactly, it may actually reinforce the original finding. In other words, you may be more certain. This isn't just true about studies and psychology. Eric told me that NIH researchers once found that lab mice, given a sedative, took 35 minutes to recover. When the experiment was repeated, the mice took 16 minutes to recover. The scientists scratched their heads, it made no sense.
Starting point is 00:19:05 It took a while to figure out that something that shouldn't have made a difference did. In between the two experiments, wood shavings in the animal cages were changed. Turns out that red cedar and pine shavings step up the speed at which the sedative was metabolized. Birch or maple don't. This is not to say that repeating experiments is useless or pointless. It's incredibly valuable.
Starting point is 00:19:30 But replications primarily help us understand the nuances around a phenomenon. They're not very useful as a tool to detect fraud. Just because you get different results doesn't mean you shouldn't trust them. How much are they with an margin of error of each other? Are there other variables that would make it so that study done at University A and the study done at University B wouldn't yield exactly the same thing?
Starting point is 00:19:54 I think that's a better way of looking at it than say, if you don't get exactly the same results or even results that are very nearly the same, you can't trust them. I think that's a superficial level of science. I think you need to go below that so when when you yourself look at a study that has not replicated or you looked at what sometimes called a failed replication do you not at the back of your mind say well this disproves the first study do you actually never think that way
Starting point is 00:20:18 uh... oh what never's a long word never says never said never said strong word you know i'm thinking i have to think of that j Bond movie when Sean Connery said, I will never do James Bond again. And then 15 years later, he came out with a movie called Never Say Never Again. No, no, I would never say that, but I would say the following. Let's imagine that you do a study and that you find that, you know, people that take an SAT prep course do 15% better on the SAT.
Starting point is 00:20:45 And let's imagine that someone else does a study and the answer's only 3%. Now, there's two possibilities. One is the first study for whatever reason, overestimated the effect. That's entirely possible. And therefore, 3% is less than 15%. But note, if you combine those two studies together, your finding might
Starting point is 00:21:06 actually be stronger in the sense I'm now more sure that SAT prep helps performance on the SAT. Now, the effect size may shrink from 15% to 11%, but also notice I've possibly now doubled or tripled my sample size, so my uncertainty goes down, and now I may even be more sure that SAT prep helps, maybe not to the degree that it helped in the first study, but still I'm more sure that it's actually effective. When you think about different branches of science, though, aren't there branches of science where you can expect the same thing to happen very predictably over and over again
Starting point is 00:21:41 when you look at particle physics, for example, you would expect that if you fire, you know, 20,000 protons out of a gun, that they're basically going to do the same thing pretty much every time. Well, it's been all, you're testing the boundary of my memory of my particle physics class when I took it here at Penn, but my understanding is, of course, and this is what statistician study, right? We study the concept of randomness. And so every science, every discipline, unless you're talking about an equal sign, like E equals MC squared.
Starting point is 00:22:12 E doesn't approximately equal MC squared. It actually equals. Most physical laws and things aren't equal signs. There's approximate signs, and so that means there's randomness to it. I think if you fired 20,000 protons, you would see that there's a deviation in the way they collide with other particles and there's randomness.
Starting point is 00:22:30 I think the same thing is true in the social sciences. You bring in 500 subjects, you bring them in at university A, you bring them in at university B. There's randomness in people's answers. Of course, you would hope the overall patterns would be similar, but the fact that this belief that you're gonna get exactly the same findings, I'm not sure that something science should be striving for. Any individual study is just that, an individual study. It isn't the truth.
Starting point is 00:22:55 Every observation is a point. It's a dot. And we observe dots. And then we observe more dots. And if those dots replicate, great, then we have more belief. Science is about the evolution of knowledge, right? But the process is never ending. There will always be more things to uncover, more nuances. We get more certain about what it is we know, and we also get more certain about what are called boundary conditions or moderators like, for example. Maybe this effect holds in urban areas versus not.
Starting point is 00:23:25 Maybe it holds in California and not in Alabama. Maybe it holds for people that are hold these stereotypes and maybe it doesn't hold for people that don't. That's, to me, that's an advance of science. We have found what's called a main effect, which is, you know, stereotypes have an effect on outcomes or priming has an effect on outcomes. And then we say, oh, and by the way, it doesn't hold in these conditions. That's not a failure to replicate.
Starting point is 00:23:50 That's a more nuanced view of the original finding. At Harvard, Dan Gilbert says you can expect some studies to replicate nearly perfectly every time, but in other cases, the very thing your studying is changing. So exact replications aren't possible. There are many findings in psychological science that we would expect to replicate quite exactly years later and on different populations. I-blink conditioning is a very nice example. If I blow in your eye enough, you're going to start blinking as I purse my lips. And that's not going to be very different across cultures, across times, across age groups.
Starting point is 00:24:27 Other kinds of findings certainly are. There's one of my favorite experiments in social psychology shows that when young men who are from the north or the south of the United States are insulted, they react very differently because northerners andoutherners have very different codes of honor. Now you can't take that experiment and expect to do it in Italy or expect to do it 25 years from now. It's an experiment that's of its moment and of its time. Every researcher I spoke with told me there's lots of agreement within the scientific community. There are certainly many scientific studies that are poorly designed.
Starting point is 00:25:05 There are researchers who do shoddy work. There is great pressure at universities and scientific journals to publish striking findings. But the solution to all these problems, say Eric Bradlow, Brian Nosek and Dan Gilbert, is more and better science. Eric Bradlow. The truth will come out. More dots will come out. More dots will come out. And if it turns out that what I published, it's not because I did anything fraudulent, just isn't true because of sample size the
Starting point is 00:25:32 way I collected the data, then you know what? Science will eventually figure out that what I'm saying is not true. So if you'd like a more... Brian Nosek is bemused that his findings about replicability have been taken to mean that the studies that fail to reproduce are worthless. He started a new system where researchers register protocols for their studies and commit to sticking to them. Scientific journals commit to publishing the findings of these studies, regardless of whether the results are sexy.
Starting point is 00:25:58 Science is the slow march of accumulating evidence, and it's very easy to want a simple answer, is it true, and it's very easy to want a simple answer. Is it true? Is it false? But really, replication is just an opportunity to accumulate more evidence to get a more precise estimate of that particular effect. To most people, the debate over scientific truth is an abstract issue. Most of us turn to scientists for answers. Should I drink a glass of red wine in the evening? Is this drug safe to give to my ailing mother?
Starting point is 00:26:30 Should I give my kid a dollar every time she does something while it's cool? In reality, science is more in the question business than the answer business. There's a reason nearly every scientific paper ends with a call for more research. Especially when it comes to human behavior, nearly every conclusion you can draw about human beings has tons of exceptions. Are people selfish?
Starting point is 00:26:54 Yeah, except millions act altruistically every day. Are humans kind? Yes, except that few species are capable of greater cruelty. If you want answers that never change, definitive conclusions and final truths, odds are, you don't want to ask a scientist. This episode of Hidden Brain was produced by Karamurk-Allison, Max Nestrak and Maggie Penman. Our staff includes Renee Clar, Jenny Schmidt, and our supervising producer Tara Boyle. Our unsung hero this week is Camille Smiley, who truly lives up to her name. Camille
Starting point is 00:27:40 is the executive assistant for our department and one of our favorite people. Whether you want to bounce around ideas, order office supplies, or just procrastinate next to her desk, Camille is always game to help out. For more Hidden Brain, you can find us on Facebook and Twitter and listen for my stories on your local public radio station. And speaking of your local public radio station, this is the season for giving. If you enjoy this program, please consider supporting your local station and tell them hidden brain sent you.
Starting point is 00:28:10 It'll take just a few minutes and it stacks deductible. Go to stations.npr.org. And thanks. I'm Shankar Vedantam and this is NPR. As you look back on the past year, you can listen back to. New NPR podcasts and old favorites are all waiting for you on the NPR One app. Dive back in and listen to embedded, invisibility, code switch, or even the hidden brain archive. It's perfect for a long road trip or a break from the holiday parties.
Starting point is 00:28:41 Listen anytime on the NPR One app or at npr.org slash podcast.
