Freakonomics Radio - 269. Bad Medicine, Part 2: (Drug) Trials and Tribulations
Episode Date: December 8, 2016
How do so many ineffective and even dangerous drugs make it to market? One reason is that clinical trials are often run on "dream patients" who aren't representative of a larger population. ...On the other hand, sometimes the only thing worse than being excluded from a drug trial is being included.
Transcript
In the mid-20th century, an exciting new drug hit the market.
It's a small molecule that was produced in West Germany in the late 50s and early 60s.
It was a sedative, but not a barbiturate.
So it wasn't addictive, didn't clash with alcohol or other drugs,
and, according to its manufacturer, was entirely safe.
They based this claim on the fact that no matter how much of it they fed the lab rats, the rats did not die.
Once this new sleeping pill was made available, doctors discovered it did more than help people sleep.
It would combat, for pregnant women, morning sickness.
And so, pregnant women all over the world were given the drug.
It was called thalidomide.
The problem was that thalidomide would actually cross the placenta and impact the baby.
And it would cause a whole series of malformations and probably a lot of fetal death.
Fetal deaths were thought to number at least 10,000.
Among the babies who survived, there were serious birth defects.
Children that survived were deaf and blind, had a number of disabilities.
They had shortened or lacked limbs.
Babies born with horribly malformed limbs, with missing or malfunctioning organs,
because of the putatively super safe drug their mothers took to prevent morning sickness.
Thalidomide was on the market for roughly five years before it was banned.
Its German manufacturer, Chemie Grunenthal,
first denied the disastrous side effects before ultimately accepting blame.
The history of medicine is full of tragic missteps, but thalidomide, coming as it did during a boom in global mass media,
made more noise than most. The problem of tighter controls to prevent the distribution of dangerous
drugs such as thalidomide is a matter of concern to the president at his news conference.
Concern over the tragic effects of the new sedative thalidomide prompts President Kennedy...
Already more than 7,000 children have been born with some or all of their arms and legs missing.
Every doctor, every hospital, every nurse has been notified...
Although a few million thalidomide tablets had been distributed to doctors in the United States
for trial use, it was never approved for sale here.
That was thanks to a doctor at the Food and Drug Administration
named Frances Oldham Kelsey.
She did not believe that the application from the American distributor
offered complete and compelling evidence of the drug's safety.
President Kennedy later hailed Dr. Kelsey as a hero.
The alert work of our Food and Drug Administration,
and particularly Dr. Frances Kelsey,
prevented this particular drug from being distributed commercially in this country.
Even though the U.S. was an outlier in blocking thalidomide,
the disaster had a number of lasting effects on American drug regulation.
For one, the FDA established much more stringent rules for drug approval.
It also rewrote the rules on what kind of people should be included in clinical trials.
Because of the effects on young women and on the fetus,
it suggested that women shouldn't be included in clinical trials
because of the potential adverse events to the fetus.
Meaning women were summarily excluded from early clinical trials for new drugs.
On one level, this might make sense. It's a protective impulse. But this impulse had a
downside.
The study of women in general became part of the collateral damage of that pregnancy
conversation. So there certainly
are young women who are not pregnant who could be included in clinical trials, and women in general
could be included in clinical trials, to really understand some of the effects of drugs on their
own health. And they were labeled as broadly vulnerable because of the potential to become
pregnant. And I think that was part of a very rapid response to a very, very visible tragedy. mores that respond to the terrible thing but often wind up overcorrecting.
Think about the Three Mile Island nuclear reactor accident in 1979.
No one was killed and the lasting health and environmental effects were negligible.
But it was so frightening that it essentially killed off the nuclear power expansion in the U.S.
Even as other countries embraced nuclear as a relatively
clean and safe way to make electricity, often using American technology, by the way, the
U.S. retreated.
What did we do instead?
We burned more and more coal to make electricity.
Now, from an environmental and health perspective, coal is almost indisputably worse than nuclear, but that's where the
correction took us.
That's where the fear took us.
And the fear of another thalidomide led us to exclude most women from early-stage drug
trials and also to underrepresent women for a time in phase two and three trials, even
if the drug's market included women.
And, as you'll hear today on Freakonomics Radio, that had some severe unintended consequences.
It's just heartbreaking to know that so many women had to wake up in the morning, and they still got up, but they went out and drove into the side of a mailbox
because we didn't have sex as one of the variables that we would study.
Also, when the only thing worse than being excluded from a medical trial was being included.
The use of vulnerable populations of African-Americans, people in prison, children in orphanages,
vulnerable populations like these have been used for medical experimentation for a fairly long time.
And what happens when a new class of drugs comes to market with great clinical trial results?
But none of them have got evidence showing that they reduce your risk of heart attack
or renal failure or any of the actual real stuff that patients actually care about.
From WNYC Studios, this is Freakonomics Radio,
the podcast that explores the hidden side of everything.
Here's your host, Stephen Dubner.
This is the second episode in a three-part series we're calling Bad Medicine.
It's about the many ways in which the medical establishment,
for all the obvious good they've done in the world, has also failed us.
Last episode, we talked about how much we still don't know
from a medical perspective about the human body.
I would say maybe 30%, 40% that we don't know.
We talked about the fact that medicine hasn't always been,
and often still isn't, as empirical as you might think.
You know, medical practice was based on bits and scraps of evidence,
anecdotes, bias,
preconceived notions, and probably a lot of psychological traps. We went over some of
medicine's greatest hits and its worst failures. You take a sick person, slice open a vein,
take a few pints of blood out of them and think that that was a good thing.
On any list of medical failures, thalidomide is near the top. Although we should
point out that long after it was found to have disastrous side effects on pregnant women,
it's had a productive renaissance. Thalidomide and its derivatives have been used to successfully
treat leprosy, AIDS, and multiple myeloma. That said, its effect on pregnant women, as we heard,
contributed to women being excluded from many drug trials.
So did another good-seeming drug that went bad, called DES.
Diethylstilbestrol, or DES, was manufactured, you know, in the early part of the 1900s.
That's Teresa Woodruff, who's been telling us the thalidomide story.
I'm the Watkins Professor of Obstetrics and Gynecology at Northwestern University.
Woodruff also founded and directs the Women's Health Research Institute
at Northwestern, and she's an advocate for something called oncofertility. It's a word
that was coined only about 10 years ago. So what we did was to bring together both oncologists and
fertility specialists. So many young people are surviving
that initial diagnosis of cancer that we've really converted it over the last 20 years from a death
sentence to a critical illness. Many of the young people will actually survive that initial diagnosis
and live long lives. And so when they return from that cancer experience, many of them are sterilized
by those same life-preserving treatments. And so we want to provide fertility options to both
males and females. So we developed not only kind of the corridors of communication between oncology
and fertility, but we also created new technologies that could provide new options for young women and for pediatric males and females.
So for Teresa Woodruff, as for many in the medical community, the future holds great promise.
But so many decisions are informed by mistakes of the past, like thalidomide and DES, which first became available in the 1930s.
So DES, it's an estrogenic compound, was being prescribed to pregnant women to prevent miscarriage.
Miscarriage was thought at the time medically to be caused at some level by low estrogen.
And so supplying this estrogenic-like factor was thought to correct a really difficult problem.
Makes perfect sense.
Yeah.
Was miscarriage in fact caused by an estrogen shortage?
It's probably not.
It's multifactorial.
There may be some cases where low estrogen would have a modest effect, but in general,
that's not the case.
DES, as it turned out, wasn't very effective in preventing miscarriage.
Worse yet, it sometimes produced side effects that would become manifest only years later
in the offspring of the women who'd taken DES. It affected boys and especially girls.
Well, the physicians just started reviewing the medical records of these young women who were
now coming up with this very, very rare vaginal cancer.
The onset of that disease is clearly estrogen dependent and probably a very narrow window
during pregnancy when estrogen would have that effect. You know, DES and thalidomide are both
tragedies, but it wasn't that the physicians were going out to try and create an adverse problem for
women who were pregnant.
But as you look back across medicine, across science, we're always learning.
In 1977, because of the tragic consequences of DES and thalidomide, the FDA made a big change.
It recommended excluding from early clinical trials all premenopausal females
capable of becoming pregnant, unless they had life-threatening diseases, which meant that many
of the drugs that later came to market had been tested only on male subjects, which could cause
some real trouble for women. A great example of this is the drug Ambien, which was just the latest
of the large number of drugs that had
adverse events in females. Ambien is a sleeping pill whose main ingredient is a drug called
zolpidem. Americans love their sleeping pills. About 60 million prescriptions are written each
year for roughly 9 million people. Some two-thirds of these medications contain zolpidem, which was
approved by the FDA in 1992. But as it turned out,
men and women metabolize the drug differently. The drug maker actually had, in the FDA filing,
the metabolism of this drug in males and females, and in fact knew that it cleared the circulation
of males faster than it did females. But they only studied the efficacy
on males, had no females in that efficacy study. When you say the clearance, it means how quickly
the body is metabolizing. Yes, essentially. That's right. How long that drug is available
in the body. Can you just explain that a little bit? Let's say it's a 150-pound male
and 150-pound female. I assume those will be different clearance rates. Can you explain why
that is? Right. So it's going to depend on individuals. And so some drugs will go into
the fat and will be available for longer. So how much fat exists and what kind of fat can take up
some of the drugs. But probably the most important part of drug metabolism is the liver. And so males and females have different enzymes and different
P450s in the liver. And so that can alter the way drugs get cleared. For example,
you know, women wake faster from sedation with anesthetics, but they recover much more slowly
and have more reported pain events in hospital.
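Clearance differences like the ones Woodruff describes are often modeled as first-order, exponential elimination. Here is a minimal sketch of that idea, with purely hypothetical half-lives rather than measured values for zolpidem or any real drug:

```python
def remaining_fraction(hours_elapsed, half_life_hours):
    """Fraction of a dose still in circulation under first-order elimination."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# Hypothetical half-lives, chosen only to illustrate the idea;
# they are not measured values for zolpidem or any real drug.
for label, half_life in [("faster clearance", 2.0), ("slower clearance", 3.0)]:
    left = remaining_fraction(8, half_life)
    print(f"{label}: {left:.1%} of the dose remains after 8 hours")
```

Even a modestly longer half-life leaves two to three times as much drug on board eight hours later, which is the shape of the Ambien numbers coming up.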
Talk for a few moments about the differences between females and males in medicine and/or medical science.
Well, so I think one is hormones.
And that's what we often think about.
Males have testosterone and females have estrogen and progesterone.
And so those hormones influence a lot of the biology of males and females in a very distinct and different way.
But the fundamental way males and females differ is that every cell in a male's body is XY and
every female cell is XX. And the sex chromosomes actually also inform, just like the other
chromosomes within the cell, the overall function of that particular cell. And so
understanding how chromosomal sex informs the biology of kidney cells or of eye cells or of
muscle cells is really important. In addition, there are anatomical differences between males
and females. So heart size might differ, and that's relevant to cardiovascular disease.
And then the environment, the microbiome. We now know from a variety of studies that there is a
sex to the gut microbiome that inhabits all of us.
So I would think, therefore, if I were a doctor or a medical researcher or running the FDA, or anywhere up and down the ladder, I would like to think that for the past 100, if not the past 1,000, years, I've been very careful to consider any treatment and how different people would respond to it differently based on their biology.
Right. And I think that's the surprise to everyone,
that in fact sex has not been a fundamental part of the way we look at biological systems.
And, you know, at some level, this is just the way biology has always been done.
And then science keeps building on what was done in the past.
And I think that's the really critical question.
Are there real adverse events that occur when you only use one sex? And
the answer is, of course, yes. So something like eight out of the last 10 drugs pulled from market
by the FDA were because of this profound sex difference. In the case of Ambien, the FDA was
getting complaints for years from users who were sleepwalking, even sleep driving.
It's just heartbreaking to know that so many women had to wake up in the morning and they still got up,
but they went out and drove into the side of a mailbox because we didn't have sex as one of the variables that we would study.
Eight hours after taking an Ambien, 10 to 15 percent of women still had enough zolpidem
in their system to impair daily function compared to 3 percent of men.
The FDA's ultimate recommendation?
Women should take a smaller dose than men.
The federal government had acknowledged for years the problem of excluding women from
medical trials. In 1993, Congress required that women be included in all late-stage clinical trials funded by the National Institutes of Health unless it was a drug taken only by men.
But what that didn't do was include males and females in the animal studies and the cell studies that are the precursor to all, that's the engine to all of
medicine. Meaning drugs that might be useful for women, but not for men, might not even get to the
earliest stages of testing. It wasn't that they were thinking, well, let's make it hard on women
to have this drug down the line. I think they were thinking of trying to do the cleanest study they could imagine. And the study group that they imagined was the simplest was the males.
Males were considered simple because they don't have menstrual cycles that change hormone levels,
they don't get pregnant, and they don't go through menopause. As one researcher puts it,
studying only men reduces variability and makes it easier to detect the effect that you're studying.
But ultimately, the exclusion of women was deemed inappropriate.
In 2014, the NIH spent $10 million to include more women in studies.
And in 2016, they decreed that all studies had to include sex as part of the equation.
And that date, January 25th, 2016, to me, there's a before and there's an after.
And before that time, sex wasn't a variable in the way time or temperature or dose has always been.
And I think we're going to see an enormous number of new discoveries
simply because science now has an entirely new toolbox to work with.
So that's progress.
But as we'll hear later, drug companies still like to use very narrow populations for their drug trials.
The better to prove efficacy, of course.
So exclusion still exists.
On the other hand, it wasn't so long ago that exclusion from a certain kind of medical trial would have been a blessing.
The use of vulnerable populations of African Americans, people in prison, children in orphanages,
vulnerable populations like these have been used for medical experimentation for a fairly long time.
That is Evelynn Hammonds, a professor of the history of science and African American studies at Harvard. And this is Keith Wailoo, an historian at Princeton. You see it in the era when the birth
control pill is being tested in Puerto Rico in the 1950s. And you see it in things like the
Tuskegee syphilis study, which extended from the 30s into the 1970s.
The Tuskegee study of untreated syphilis in the Negro male, as it was called,
is one of the most infamous cases in U.S. medical history.
Its goal was trying to understand the long-term effects of venereal disease
as it developed through its various stages.
And the study was being
conducted on a group of really poor African-American men.
White government doctors working for the U.S. Public Health Service found approximately
400 African-American men presumed to all have syphilis.
The problems emerge after penicillin is discovered and more widely used.
And the question that should have been asked is,
now that we have a series of effective treatments for venereal disease,
ought we to continue a study of untreated syphilis or ought we to provide treatment?
So even though a syphilis treatment became available, it was withheld from the men
in the study. Put aside for a moment the short-term elements of this maneuver, the cruelty, the ethical
failure, consider the long-term implications. What happens when one segment of the population
is so willfully exploited by the mainstream medical establishment? Well, that part of the
population might develop a deep mistrust of said establishment.
A recent study by two economists found that the Tuskegee revelation seriously diminished African Americans' participation in the healthcare system.
They were simply less willing to go to a doctor or a hospital. The result? A decrease in male African-American life expectancy of about
1.4 years, which at the time accounted for roughly one-third of the life expectancy gap
between blacks and whites.
Coming up on Freakonomics Radio: with such a fraught history of inclusion and exclusion in medical studies, who does end up in clinical trials?
When you look at the evidence, what you often find is that trials are conducted in absolutely perfect dream patients.
Also, how good are the new drugs that typically make it to market? I think if we're honest with ourselves, we'll have to admit that the majority of new cancer drugs offer sort of very small gains at tremendous prices.
And what happens if you write about conflicts of interest among oncology researchers and then you go to an oncology conference?
I always wear a bulletproof vest.
My name is Stephen Dubner.
This is Freakonomics Radio,
and this is the second of a three-part series we're calling Bad Medicine.
We don't mean to be ungrateful for the many marvels that medicine has bestowed upon us, nor do we mean to
pile on or to point out the avalanche of obvious flaws and perverse incentives, but, well, it's
just so easy. Doctors do something for decades. It's widely done. It's widely believed to be beneficial.
And then one day, a very seminal study contradicts that practice.
That's Vinay Prasad. He's an oncologist and an assistant professor of medicine at Oregon Health and Science University. He also
co-authored a book about what are called medical reversals, when an established treatment is overturned,
which happens how often?
It's widespread, and it's resoundingly contradicted.
It isn't just that it had side effects we didn't think about.
It was that the benefits that we had postulated turned out to be not true or not present.
How can it be that so many smart, motivated people,
physicians and medical researchers,
come up with so many treatments that go all the way through the approval process
and then turn out to be ineffective or even harmful?
A lot of it simply comes down to the incentives.
So much of the research agenda, even the randomized trial research agenda,
is driven by the biopharmaceutical industry.
And that's not necessarily a bad thing.
I think there's many good things about that
that really drives many, many trials.
It drives a lot of good products.
It also drives a lot of marginal products
or products that don't work.
And the people who design those trials are,
I think, very clever.
You can sort of tilt the playing field a little bit
to favor your drug.
And the incentive to do so is often tremendous.
Billions of dollars hinge on one of these pivotal trials. And to some degree, that's because it's a human pursuit.
But to some degree, we could have policy changes that could more align the medical
research agenda with what really matters to patients and doctors.
Let me ask you in your own field, in oncology and in the particular cancers that you treat,
how much more effective generally would you say the new cancer drugs are than the ones that they are replacing or augmenting?
Let me say that there are a few cancer drugs that have come out in the last two decades that are really wonderful drugs, great drugs.
One drug came out of work here in the Oregon Health and Science University by Dr. Druker, Gleevec, and that's a drug that transformed a condition where maybe 50
or 60% of people are alive at three years to one where people more or less have a normal life
expectancy. So that's a really wonderful drug. But I think if we're honest with ourselves,
we'll have to admit that the majority of new cancer drugs are marginal, that they offer sort
of very small gains at tremendous prices. And to give you an example of that, among 71 drugs approved
for the solid cancers, the median improvement in overall survival or how long people lived
was just 2.1 months. And those drugs routinely cost over $100,000 per year of treatment or course
of treatment. But that points to one of the tricks that works so well, which is if it's 2.1 months
extra, and if the expected lifespan was, let's just
pretend for a moment, it was six months, then on a percentage basis, that's a massive improvement.
And so I don't, as the patient or as the pharma representative, I'm not talking about that length
of time, which might be lived under physical duress and financial duress, but rather I'm
thinking about, goodness gracious,
a huge 33% life expectancy extension.
Right, a new drug improves lifespan 33% longer.
And who doesn't want that,
especially when you're sitting there with your loved one
in a horrible situation facing the end?
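The trick Dubner is describing is just the gap between an absolute and a relative gain. A quick sketch of the arithmetic, using the hypothetical six-month baseline from the conversation:

```python
baseline_months = 6.0  # hypothetical survival without the new drug ("let's just pretend")
gain_months = 2.1      # median overall-survival gain Prasad cites for 71 solid-tumor drugs

print(f"Absolute gain: {gain_months} months")
print(f"Relative gain: {gain_months / baseline_months:.0%}")  # roughly a third longer
```

The 2.1 months is the same either way; only the framing changes.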
The other thing I'd point out is those 2.1 months,
these clinical trials that are often conducted
by the biopharmaceutical industry,
they really choose sort of the healthiest patients, the people who are the fittest of the patients.
On average, patients in the pivotal trials used for FDA drug approval are almost 10 years younger than patients in the real world.
And then when you start to extrapolate drugs that have real side effects, and small benefits in carefully selected populations, to the average patient that
walks into my clinic, who's older, who has other problems, who's taking heart medicine. There was
a paper that came out about one of those costly, expensive drugs for liver cancer. And in the
pivotal trial, it had a benefit of about two, three months, something like that. But in the
real world, in the Medicare data set, it had no improvement in survival over just giving somebody
good nursing care and good supportive care. And I think that's the reality for many of these
marginal drugs, that when you actually use them in the real world, they start to not work so well
and maybe not work at all. You've written and spoken out about cronyism and conflicts of
interest between drug makers and the doctors who prescribe drugs. I'm curious, what happens when you go to an oncology conference?
Are you an unpopular person there?
Stephen, I always wear a bulletproof vest when I go to the...
But this has really been sort of the way medicine has operated for many years,
that to some degree, practicing doctors in the community,
having ties to the drug makers, that's one thing.
But increasingly, we see that the leaders in the field,
the ones who design the clinical trials,
who write up the manuscripts,
who write the review articles,
who sort of guide everyone on how to practice
in those fields,
they have heavy financial ties to drug makers.
And there's a large body of evidence
suggesting that that biases the literature
towards finding benefits where benefits may not exist, towards
more favorable cost-effective analyses when drugs are really probably not cost-effective.
It's a bias.
Yes, well, we have a great deal of empirical data showing that funding sources and author
financial conflicts of interest are associated with over-optimistic data.
That's Lisa Bero. She's a professor of medicine. She's also co-chair of the Cochrane
Collaboration, which is a global consortium of medical professionals and statisticians.
Cochrane promotes evidence-based medicine by performing systematic reviews of medical research.
And in fact, we have a Cochrane review on this very question. And this finding shows that if a drug study is funded by a pharmaceutical company whose drug is
being examined, they're much more likely to find that the drug is effective or safe.
How much more likely? It's about 30%.
Did you catch that? An industry-funded study is 30% more likely to find the drug is effective and safe than a study with non-industry funding.
And they're likely to find this even if they control for other biases in the study.
So by that what I mean, it could be a really well-done study.
It could be randomized.
It could be blinded.
But if it's industry-funded, it's still more
likely to find that the drug works.
But if a study is well done, how can the results be so skewed?
So it's everything from, I mean, the question they're actually asking to how they frame the
question, the comparators they use, how they design the study, how it's conducted behind the scenes.
Trials are very often flawed by design in such a way that they're no longer the gold standard,
no longer a fair test of which treatment is best.
That's Ben Goldacre.
I'm an academic in Oxford working in evidence-based medicine,
and I also write books about how people misuse statistics.
He's also a doctor.
Yeah, that's right. So I qualified in medicine in 2000,
and I've been seeing patients on and off in the NHS for 15 years now. One of Goldacre's books is
called Bad Pharma. He echoes what Vinay Prasad was telling us about the people who were chosen
for clinical trials. When you look at the evidence, what you often find is that trials are conducted in absolutely perfect dream patients, people who are by definition much more likely to get better quickly.
Now that's very useful for a company that are trying to make their treatment look like it's
effective. But actually for my real world treatment decisions, that kind of evidence can be
really very uninformative.
Imagine you're a doctor who's treating a patient with asthma. Not hard at all to imagine.
Now, asthma is obviously a very common condition. It's about one in 12 adults.
With such strong demand for asthma treatment, there's been a bountiful supply from drugmakers
with dozens of clinical trials. A 2007 review of these studies looked at the characteristics of real-world asthma patients
and how they compared to the people who'd been included in the trials.
They said, okay, let's have a look and see, on average, what proportion of those real-world
asthma patients would have been eligible to participate in the randomized trials that are
used to create the treatment guidelines, which are then in turn
used to make treatment decisions for those asthma patients? And the answer was, overall, on average,
6%. So 94% of everyday real-world patients with asthma would have been completely ineligible to
participate in the trials used to make decisions
about those very patients. Of course, it isn't only with asthma patients where this happens.
It's very common for randomized trials of antidepressants, for example,
to reject people if they drink alcohol. Now that sounds superficially sensible, but actually I can tell
you as somebody who's prescribed antidepressants to patients in everyday clinical practice,
it's almost unheard of to have somebody who is depressed and who warrants antidepressants who
doesn't also drink alcohol. So you need trials to be done in people who are like the people that you actually treat.
If you look at the overall efficacy rate of most antidepressants,
you'll find it to be very, very low if there's any efficacy at all.
And, of course, there's the opportunity cost to consider.
Because you tend to prescribe one antidepressant at a time.
Which means while a patient is on one drug that may not be working,
they can't try another that might.
Plus which, there are the side effects to consider.
So a lot of drugs that look great on paper
don't do very well in the real world.
Why?
Part of it is what Ben Goldacre and Vinay Prasad
were talking about,
cherry-picking subjects for clinical trials.
But Goldacre says there are plenty of other ways to manipulate trial numbers in the drugmaker's favor.
What do you do, for instance, when research subjects quit a trial because of the treatment's side effects? And what you see is people inappropriately using a statistical technique like last observation carried forward to account for missing data from patients who
dropped out of a study because of side effects. Last observation carried forward is a statistical
extrapolation, pretty much what it sounds like and worth looking up if you're interested in that kind
of thing. You can see how an inappropriate use of such a technique
would tilt things in the drug maker's favor.
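To make that concrete, here is a toy sketch of last observation carried forward, using made-up symptom scores and pandas purely for illustration:

```python
import pandas as pd

# Hypothetical symptom scores (lower is better), one patient per row,
# measured at weeks 0, 4, and 8. patient_B dropped out after week 4
# because of side effects, so the week-8 value is missing.
scores = pd.DataFrame(
    {"week0": [20.0, 20.0], "week4": [12.0, 11.0], "week8": [10.0, None]},
    index=["patient_A", "patient_B"],
)

# Last observation carried forward: fill each missing value with the
# most recent earlier measurement in the same row.
locf = scores.ffill(axis=1)
print(locf)
# patient_B's week-4 score (11.0) now stands in for week 8, so the trial's
# endpoint looks as if B improved and stayed improved, even though B never
# finished treatment.
```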
There's also the widespread use of what are called surrogate outcomes, as opposed to real-world outcomes. Consider many of the drugs recently
approved by the FDA to treat diabetes. All of those drugs have been approved onto the market
with only evidence showing that they improve your blood sugar.
But none of them have got evidence showing that they reduce your risk of heart attack or renal failure or eye problems or any of the actual real stuff that patients with diabetes care about.
Since all of those outcomes would be hard to test for in a clinical trial, and by hard, what I really mean is time-consuming and expensive, the researchers instead go for the simple surrogate outcome of whether their pill lowers blood sugar.
But the history of medicine is absolutely littered with examples of where we have been given false reassurance
by a treatment having a good impact on a surrogate outcome, a laboratory measure,
and then discovered that actually it had completely the opposite effect on real-world outcomes.
As in the case of the infamous CAST trial that we covered in part one of the series,
in which the drug that suppressed
aberrant heart rhythms actually worsened survival outcomes.
Now, we should point out that Ben Goldacre, and everyone we've been speaking with for
our Bad Medicine episodes, fully appreciates that medicine is science and that failure
is part of science. The human body is an extremely complex organism with lots to go wrong.
Diagnosing and treating even a simple problem can be very difficult.
So it's easy to take pot shots from the sideline at good ideas that went bad.
It's even easier to criticize pharmaceutical companies who seem much more intent on making
money than on making good medicines. But as
Goldacre points out, those companies are simply responding to the incentives that are placed
before them. Incentives that don't necessarily encourage them to do the right thing.
Goldacre points to a massive eight-year study called the ALLHAT trial, in which academic
researchers compared various
drugs from a number of drug makers that were intended to lower blood pressure and cholesterol.
Two of these drugs were made by the American pharmaceutical company Pfizer.
Pfizer came along and they said, look, we've got this fantastic new
blood pressure lowering drug, and we've got various grounds for believing that it's going
to be better than old-fashioned blood pressure lowering drugs. But at the moment, all we can tell you is
it's roughly as good at lowering blood pressure. So Pfizer asked the ALLHAT researchers to test
whether their drug actually reduced the real-world outcomes that really matter, heart attack,
stroke, and death. So the researchers said what all academic researchers have said to drug companies since
the dawn of time, which was, thank you very much, that sounds like a fabulous idea, that'll
be about $175 million, please.
Actually, Goldacre misspoke.
It was only $125 million, and Pfizer's share was just $40 million. But still, $40 million. And it was
so expensive simply because measuring real-world outcomes like that, especially before the era of
electronic health records, was extraordinarily expensive. So Pfizer pays in and the trial
begins. It was timetabled to run for a very, very long time, many, many, many years. But it was stopped early because the Pfizer treatment,
which was just as good at lowering blood pressure,
was so much worse at preventing heart attack, stroke and death
that it was regarded as unethical to continue exposing patients to it.
The Pfizer drug we're talking about was called Cardura.
So where does Pfizer come out in all this?
It's really important, I think, to recognize that Pfizer did nothing wrong here. Pfizer
did exactly what we would hope all companies should do. They didn't just say, oh, that's fine,
we've got some surrogate endpoint data, we've got laboratory data showing it lowers blood pressure,
and that's all we need. Instead, they went out and they did the right thing.
They exposed themselves to a fair test.
They said, we want to see if this treatment improves real world outcomes that matter to
patients, heart attack, stroke and death.
And they were unlucky and it flopped.
The real problem, Goldacre says, is when drugs aren't subjected to the real-world test.
The real bad guys here are the people who continue to accept weak surrogate endpoint data,
like, for example, on the new diabetes drugs. It may well be that they lower the laboratory
measurement on a blood test, but that doesn't necessarily mean that they reduce your risk of
heart attack, stroke and death.
And to find that out, we need to do proper randomized trials,
which are admittedly longer and more expensive.
But there's yet another problem. What happens if a proper randomized trial doesn't show the efficacy a drug maker was hoping to show?
Well, there's a good chance the world will never
know about it because of publication bias. Iain Chalmers, a co-founder of the Cochrane
Collaboration, is a major player in the evidence-based medicine movement.
About half of the clinical trials that are done never see the light of day. They don't get
published. Isn't that outrageous?
Which trials do get published?
Trials that show results that are so-called statistically significant are more likely to get published than those that don't have those results.
Now, you might think, well, yes, it makes sense to publish trials where a medicine seems to work.
And if it doesn't seem to work, why is that
important to publish?
Ben Goldacre again.
So if you cherry-pick the results, if you only
publish or promote the results of trials which show your favoured treatment in a good light,
then you can exaggerate the apparent benefits of that treatment.
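Goldacre's point lends itself to a toy simulation: assume a drug with no true effect at all, run many small trials, and "publish" only the ones that happen to clear the usual significance bar. The published record then shows a benefit that doesn't exist:

```python
import random
import statistics

random.seed(0)
N_PER_ARM = 30  # patients per arm in each simulated trial

def trial_effect():
    """One simulated trial of a drug with no true effect:
    both arms draw outcomes from the same standard normal distribution."""
    control = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
    treated = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
    return statistics.mean(treated) - statistics.mean(control)

effects = [trial_effect() for _ in range(1000)]

# Standard error of a difference of two means, each arm with variance 1.
se = (2 / N_PER_ARM) ** 0.5
published = [e for e in effects if e > 1.96 * se]  # looks "significant" and positive

print(f"All 1000 trials, mean effect: {statistics.mean(effects):+.3f}")
print(f"{len(published)} 'published' trials, mean effect: {statistics.mean(published):+.3f}")
```

Averaged over all the trials, the drug does nothing; the filtered, "published" subset shows a healthy-looking effect.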
As Chalmers tells us, there are all kinds of reasons why the results of an unsuccessful
trial might not get published.
It may threaten commercial enterprises' interests to publish a trial which is disappointing.
It may be something which someone who has had a favorite hypothesis and been known for
writing and speaking about it for years,
finds out that the first really good study to test the hypothesis doesn't find any support for it.
There's laziness.
And that's the real scandal here: that you are allowed to legally withhold the results of these trials. And so people do.
So the results of trials are routinely and legally withheld from doctors,
researchers and patients, the people who need this information the most.
That is a systematic structural failure.
The structure that is imposed by government regulators, by funding sources, by the markets
themselves, all of which can be very hard to change. So how does Ben Goldacre see the situation
improving? We set up something called the All Trials campaign a couple of years ago. And the
All Trials campaign is a global campaign to try and stop this problem from happening. So asking
companies, research institutes, academic and medical professional bodies, patient groups, and all of the rest, to sign up and to say, all trials should be registered. So you publicly post on a
publicly accessible register the fact that you've started a trial, because that means we know which
trials are happening. So at least we can see if some of them aren't being published.
The All Trials campaign also urges the publication of what's called a clinical study report.
And a clinical study report is a very long, very detailed document, hundreds, sometimes thousands of pages long, that describes in great detail the design of the study and the results of the study.
And that's really important because often a trial can be flawed by design in a way that is sufficiently technical that it is glossed over in the brief report that you get in an academic journal article about a trial.
And those design flaws can only be seen in the full-length clinical study report.
There's also a growing momentum to curb conflicts of interest in medical research.
Well, I think we've already had great improvements in transparency.
And what's really pushed the disclosure of funding sources has been the journals.
That's Lisa Bero again from the Cochrane Collaboration.
So if you publish something, you are required to disclose the funding source.
And, you know, this is still not 100% enforced, but it's getting pretty close.
On the other hand, Bero says,
a given researcher or investigator
may have undisclosed biases or conflicts of interest.
And one sort of loophole is that
the investigator themselves have to decide
if something's relevant to the particular study.
And so they may say,
well, I just don't think it's relevant.
There's another quirk in the medical industry
that probably doesn't serve the public good.
Drug companies evaluate their own products,
whereas in software, you usually get somebody external
to check the quality of your project.
Engineers get people to do the earthquake checks for them
who are independent from the people who built the bridge.
So it's a very odd system that we have where the companies with an interest or stand to gain
financially from testing a product are testing it themselves. So I think we need to change that.
And finally, there are the doctors themselves, the endpoint in this complicated,
conflicted infrastructure that's meant to deliver better medicine. Ben Goldacre,
the gadfly physician who knows so much about bad pharma and bad medicine, acknowledges that the
entire system is due for reform. And it's a structural failure that persists because of
inaction by regulators, by policymakers, by doctors and researchers as much as because of industry.
And none of us can let ourselves off the hook.
So next time on Freakonomics Radio in our third and final episode of Bad Medicine,
what's a doctor to do?
I see the opioid story as part of the recurring sense of hope and despair
associated with these drugs that are supposed to solve problems,
but they end up being problems in themselves.
What to do about the troubling finding
that more experienced doctors have worse outcomes than young doctors?
So I would think that you are a downright danger to your patients.
How is it that you're not?
No comment.
And finally, yes, finally, lots of reasons to be optimistic, at least cautiously so,
about the future of medicine.
Where science and medicine is going in the future is to more and more precision medicine
so that we can get closer to an autonomous and individualized
diagnosis.
That's next time on Freakonomics Radio.
Freakonomics Radio is produced by WNYC Studios and Dubner Productions.
This episode was produced by Stephanie Tam with help from Arwa Gunja.
Our staff also includes Shelley Lewis, Christopher Werth, Jay Cowit, Merritt Jacob, Greg Rosalsky, Noah Kernis,
Alison Hockenberry, Emma Morgenstern, Harry Huggins,
and Brian Gutierrez.
You can subscribe to this podcast on iTunes
or wherever you get your podcasts.
And why don't you come visit Freakonomics.com,
where you will find our entire podcast archive,
as well as a complete transcript of every episode ever made, which includes
music credits and lots of extras.
Thanks for listening.