Freakonomics Radio - Bad Medicine, Part 1: The Story of 98.6 (Rebroadcast)
Episode Date: August 17, 2017
We tend to think of medicine as a science, but for most of human history it has been scientific-ish at best. In the first episode of a three-part series, we look at the grotesque mistakes produced by centuries of trial-and-error, and ask whether the new era of evidence-based medicine is the solution.
Transcript
Hey there, Freakonomics Radio listeners. We are taking advantage of August to replay you a special three-part series we did last year called Bad Medicine.
Today, part one, the story of 98.6. And it starts right now.
We begin with the story of 98.6. You know the number, right?
It is one of the most famous numbers there is because the body temperature of a healthy human being is 98.6 degrees Fahrenheit.
Isn't it?
So now I'm going to take your temperature.
If you don't mind, just open your mouth and I'll insert the thermometer.
Perfect.
The story of 98.6 dates back to a physician by the name of Carl Wunderlich.
This was in the mid-1800s.
Wunderlich was medical director of the hospital at Leipzig University.
In that capacity, he oversaw the care and the taking of vital signs on some 25,000 patients.
Pretty big data set, yes?
25,000 patients.
And what did Wunderlich determine?
He determined that the average temperature of the normal human being
was 98.6 degrees Fahrenheit or 37 degrees centigrade.
This is Philip Mackowiak, a professor of medicine
and a medical historian at the University of Maryland.
Well, I am an internist by trade
and an infectious disease specialist by subspecialty.
So my bread and butter is fever.
There's one more thing Mackowiak is.
I am by nature a skeptic, and it occurred to me very early in my career that this idea that 98.6 was normal,
and that if you didn't have a temperature of 98.6, you were somehow abnormal, just didn't sit right.
Philip Mackowiak, you have to understand,
cares a lot about what is called clinical thermometry. And if you care a lot about clinical thermometry,
you care a lot about the thermometer
that Carl Wunderlich used to establish 98.6.
His thermometer is an amazing key to this story of 98.6.
So you can imagine how excited Mackowiak was when,
on a tour of the weird and wonderful Mütter Museum in Philadelphia, the curator told him
they had one of Wunderlich's original thermometers. I said, good heavens, may I see it? And she said,
sure, would you like to borrow it? And I said, of course.
And so I was able to take this thermometer back to Baltimore and do a number of experiments. The Wunderlich thermometer, Mackowiak realized, was not at all a typical thermometer.
First of all, it was about a foot long, fairly thick stem, and registered almost two degrees centigrade higher
than current thermometers or thermometers of that era.
Two degrees higher? Centigrade? Uh-oh.
In addition to that, it is a non-registering thermometer,
which means that it has to be read while it's in place.
So it would have been awkward to use.
Mackowiak noticed something else about the original Wunderlich research.
Investigating further, it became apparent that he was not measuring temperatures
either in the mouth or the rectum.
He was measuring axillary or armpit temperatures, and so that in many, many ways,
his results are not applicable to temperatures
that are taken using current thermometers and current techniques.
As it turns out, the esteemed Dr. Carl Wunderlich
was not the most careful investigator ever to come on the scene.
The more Mackowiak looked into the Wunderlich data and how the story of 98.6 came to be,
the more he wondered about its accuracy.
So he set up his own body temperature study.
He recruited healthy volunteers, male and female, and took their temperature one to
four times a day around the clock for about two
days using a well-calibrated digital thermometer in the patients' mouths. What did they find?
Of the total number of temperatures that were taken, only 8% were actually 98.6. And so if you
believe that 98.6 is the normal temperature, then 92% of the time the temperature was abnormal.
Obviously, that's not even reasonable.
In his study, Mackowiak found the actual normal temperature to be 98.2 degrees, not a huge difference.
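To make that 8 percent figure concrete, here is a minimal sketch, in Python, of how one might tabulate such readings. The numbers are simulated for illustration only; they are not Mackowiak's actual study data.

```python
# Illustrative only: simulated oral-temperature readings from healthy volunteers,
# not Mackowiak's actual study data.
import random

random.seed(0)

# Assume readings cluster around 98.2 F with person-to-person and time-of-day spread.
readings = [round(random.gauss(98.2, 0.7), 1) for _ in range(700)]

# Count readings that register as 98.6 to one decimal place.
at_98_6 = sum(1 for t in readings if abs(t - 98.6) < 0.05)
share_at_98_6 = at_98_6 / len(readings)
mean_temp = sum(readings) / len(readings)

print(f"Share of readings registering 98.6 F: {share_at_98_6:.1%}")
print(f"Mean of all readings: {mean_temp:.1f} F")
```

The point of the exercise is simply that "normal" is a distribution of readings, not a single number, so very few healthy readings land exactly on any one value.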
And yet, the whole notion of a normal body temperature was looking more and more suspect.
Why? A lot of reasons.
Temperature varies from person to person, sometimes so much that one person's normal would register as nearly feverish for another person.
It's almost like a fingerprint.
Temperature varies throughout the day.
It's roughly one degree higher at night than in the morning, sometimes even more.
And an elevated temperature isn't necessarily a sign of illness. In women, it goes up with ovulation during the menstrual cycle.
The temperature goes up during vigorous exercise, and this is not a fever.
And so, Mackowiak concluded, looking at a rise in temperature as a reliable sign of infection or disease is inappropriately simplistic thinking.
Inappropriately simplistic thinking. It makes you wonder, if the medical establishment believed for so long in an inappropriately simplistic story about something as basic as normal body temperature, what else have they fallen for?
What other mistakes have they made?
I hope you've got some time. It's a long list.
You take a sick person, slice open a vein, take a few pints of blood out
of them. Drilling holes into people's skulls. It was literally taking someone to hell and back.
And it would cause a whole series of malformations and probably a lot of fetal death. Lobotomies.
The overuse of a mercury compound. The Tuskegee case. Losing your teeth and having your gums bleed.
DES and thalidomide. We use sort of a cement.
Hormone replacement therapy.
The OxyContin and opioid problem.
As a medical historian, it is patently obvious to me that future generations will look at what we're doing today and ask themselves, what was grandpa thinking of when he did that and believed that?
And they'll have to learn all over again that science is imperfect.
And to maintain a healthy skepticism about everything we believe and do in life in general,
but in the medical profession in particular.
On today's show, part one of a special three-part series of Freakonomics Radio,
we'll be talking about the new era of personalized medicine, the growing reliance on evidence-based
medicine, and especially, pay attention now, I'm going to use a technical term,
we will be talking about bad medicine.
This is Freakonomics Radio, the podcast that explores the hidden side of everything. Here's your host, Stephen Dubner.
We have a lot of ground to cover in these three episodes.
Medicine's greatest hits, the biggest failures, where we are now, and where we're headed.
In the interest of not turning a three-part series about bad medicine into a 20-part series,
we're not even going to
touch adjacent fields like nutrition and psychiatry. Maybe another time.
Let's start very briefly at the beginning. Nearly 2,500 years ago, you had the Greek
physician Hippocrates, who is still called the father of modern medicine. You've heard,
of course, of the Hippocratic Oath,
the creed recited by new doctors. And you know the oath's famous phrase, first do no harm,
even though, as it turns out, that phrase isn't actually included in the oath. It came from
something else Hippocrates wrote. Nor do many contemporary doctors recite the original
Hippocratic Oath; there's a modern version
written in 1964 by the prominent pharmacologist Louis Lasagna. The pledge begins,
I swear to fulfill to the best of my ability and judgment this covenant. It is a fascinating,
inspiring document. And I think before we go too far, it's worth hearing some of it. I will respect the hard-won scientific gains of those physicians in whose steps I walk,
and gladly share such knowledge as is mine with those who are to follow.
I will remember that there is art to medicine as well as science, and that warmth, sympathy,
and understanding may outweigh the surgeon's knife or the chemist's drug.
I will not be ashamed to say, I know not.
Nor will I fail to call in my colleagues when the skills of another are needed for a patient's recovery.
Above all, I must not play at God.
I will remember that I do not treat a fever chart, a cancerous growth, but a sick human being,
whose illness may affect the person's family and economic stability.
My responsibility includes these related problems if I am to care adequately for the sick.
I will prevent disease whenever I can, for prevention is preferable to cure.
And may I long experience the joy of healing those who seek my help.
It's comforting to think about the thoughtfulness, the nuance, the massive responsibility that
doctors pledge before they attempt to diagnose or heal us.
How well has that pledge been upheld throughout medical history?
We'll talk to a variety of people about that today, starting with this gentleman.
My name is Anupam Jena.
I'm a healthcare economist and physician at Harvard Medical School.
So Jena, as both a practitioner and an analytical researcher, is especially useful for our purposes
because one of the themes we'll hit today several times is that medicine, even though it's scientific
or at least scientific-ish, hasn't always been as empirical as you might think, and sometimes not very empirical at all.
Here's an easy question. Can you tell me, please, the history of medicine,
or at least Western medicine in, I don't know, three or four minutes?
Well, let me first answer the meaning of life.
Is that going to be easier?
That'll take about five to six minutes. You know, I would say, how about three words: trial and error. So I think if you think about medicine, how it's evolved, let's just say in the last 100 to 200 years, the sorts of practices that at some point in history
people thought were actually medically legitimate included drilling holes into people's skulls,
lobotomies. Even as late as the 1940s to 1950s, lobotomies were thought to actually have a
treatment effect in patients with mental illness, be it schizophrenia or depression. The practice
of bloodletting, which is basically trying to remove the quote-unquote bad humors from the body,
was thought to be therapeutic in patients.
Things like mercury, which we know are downright toxic, were used as treatments in the past.
And, you know, that was in a time and place where I think it was very difficult to get evidence.
But not only that, there was probably a perception of the field
that didn't allow for the ability to question itself.
And in the last 50 plus years, probably 50 to 75 years, I think we've seen tremendous strides in the ability of the profession to constantly question itself.
So it's easy to get indignant over the idea of these treatments that turn out to be so wrong.
But understanding wellness and
illness is hard, obviously. So when you look back at the history of medicine, did those
interventions strike you as kind of shameful? You can't believe you're in a profession that
tried things like that? Or is that just part of the trial and error process that you accept?
I certainly wouldn't call it shameful. The only
thing that's shameful is when someone doesn't believe that they have the potential for being
wrong and they don't have that desire to inquire further about whether something actually works or
doesn't work. But the idea of trying things, particularly trying things that have a really
strong, plausible pathophysiologic basis, I think that there's nothing wrong with that.
In fact, that's what spurs scientific discovery and many of the treatments that we have now.
So I have a broad question for you.
The human body is, I think you and I would agree, an extraordinarily complex organism.
And over history, doctors and others have learned a great deal about it.
But if we
consider the entire human body from a medical perspective only, let's leave out metaphysics
and theology and what have you, from a medical perspective, how would you assess the share of
the body and its functions that we truly understand and the share that we don't really yet understand?
Huh. That's a tough one. We've made a lot of headway,
but to put a number on it, I would say maybe 30%, 40% that we don't know.
Ooh, that's a tough question for me to quantify. I asked the same question of someone else.
My name is Jeremy Greene. I'm a physician and a historian of medicine at Johns Hopkins.
So what's Greene's answer? There's a Rumsfeldian answer of, you know, known knowns.
There are known knowns.
Known unknowns.
Known unknowns.
That is to say, we know there are some things we do not know.
There are also unknown unknowns.
A different way of answering that question would have to do with
what our idea of relevant science of medicine is.
For example?
If you take, for example, the moment in the
Renaissance, the Vesalian moment when the opening of cadavers and description and rendering and
precise three-dimensional chiaroscuro engravings of the human body was an exciting area for research: this humanist process of opening up cadavers showed that the
innards were not exactly what the ancient Greeks had described. So as a historian, rather than
giving you a fixed percent of where we are, I can give you a Zeno's paradox that we keep on
getting close to that finite moment and then reinvent a new, broader room for us to inhabit.
And that's because there's been a lot of progress in how we're able to explore the human body.
There's the gross anatomy of the body, which you can see with your own eyes.
Anupam Jena again.
Then go a layer further, and we're now at the microscopic anatomy of the body.
So now, what do the cells of the body look like when they are diseased under a microscope?
And now...
Now go a layer further where you are now trying to understand things about the body
that you can't even see with a microscope,
and that's at, let's say, the level of the proteins in the cell,
or even further down, the level of the DNA that encodes
the protein.
By the end of the 20th century, there's a very strong genetic imaginary, which really
helps to then fuel the excitement behind the Human Genome Project.
It's thought once we know the totality of the human genome, we'll know all that we need
to know about bodies and health and disease. Of course, we already know a great deal.
And to be fair, for all the mistakes and oversights in medicine,
there's been extraordinary progress.
What are some of medicine's greatest hits?
I'm sure every historian of science and medicine would give you a different set of hits.
That's Evelynn Hammonds. She's a professor of the history of science and African-American studies at Harvard.
The ones that I typically think about are
the introduction of more efficacious therapeutics and medicines.
I would put something like the discovery of insulin right up there near the top.
That's Keith Wailoo.
He's a Princeton historian who focuses on health policy.
It transformed diabetes from an acute disease into a disease that you live with.
And to me, that is much more the story of what medicine has been able to do in the 20th century.
The medicine that comes to my mind is statins.
They've been shown to have benefit in preventing heart attacks,
in the prolongation of life among people who've had heart attacks,
and the same thing for stroke and other forms of cardiovascular disease.
But there are many, many drugs that are like that.
These are truly awesome interventions for which we should all be thankful.
One of the most remarkable developments over the past century and a half
is the unbelievable gain in life expectancy.
In the U.S. and elsewhere, it nearly doubled.
Now, it might be natural to ascribe that gain primarily to breakthrough medicines, but in fact, a lot of it had to do with something else.
A lot of the advances in mortality and morbidity happened before antibacterials came along in the mid-century, because of improvements
in housing, sanitation, diet, and sort of tackling urban problems that really created
congestion and produced the circumstances that made things like tuberculosis the leading
cause of mortality.
For example, if you think about the reversal of the Chicago River, it used to flow into Lake
Michigan in the 19th century, and people were dumping their waste into it. So every summer,
there would be hundreds of deaths of babies and children from infant diarrhea because the water
was so contaminated. They reversed the flow of the river, so it flowed downriver toward the Mississippi,
and that significantly improved the health
of the people who lived there.
So we've got public health improvements to thank,
and yes, better therapeutics and medicines.
Also, new and better ways of finding evidence.
I actually think the technology that really revolutionized how we think is the use of controlled experiments.
That's Vinay Prasad. He is an assistant professor of medicine at Oregon Health and Science University.
Prasad treats cancer patients, but also...
The rest of my time I devote to research on health policy, on the decisions doctors make,
on how doctors adopt new technologies, and when
those things are rational and when they're not rational.
Which means that Prasad is part of a relatively new, relatively small movement to make medical
science a lot more scientific.
You know, if you think about medical science for thousands of years, what was medicine
but something that somebody of esteemed
authority had done for many years and told others that it worked for me, so you better do it.
Even though medical science seemed to be based on evidence, Prasad says,
The reality was that what we were practicing was something called eminence-based medicine.
It was where the preponderance of medical practice was driven by really charismatic
and thoughtful, probably to some degree, leaders in medicine. And, you know, medical practice was
based on bits and scraps of evidence, anecdotes, bias, preconceived notions, and probably a lot
of psychological traps that we fall into. And largely from the time of Hippocrates and the Romans until, you know,
maybe even the late Renaissance, medicine was unchanged. It was the same for a thousand years.
Then something remarkable happened, which was the first use of controlled clinical trials in
medicine. Coming up on Freakonomics Radio, how clinical trials began to change the game.
It really doesn't matter that the smartest people believe something works.
The only thing that really counts is what is the evidence you have that it works.
How some people didn't have much of an appetite for actual evidence.
There was a great deal of hostility to it from, I'd say, the medical establishment.
And in a strange twist, how better science is pushing medicine not always forward,
but sometimes backwards.
It is quite common to see practices that end up getting reversed.
And the best estimates are that happens about 15% of the time.
All right, take a deep breath through your mouth, in and out.
That's good, okay?
One more.
One more.
One more.
Anupam Jena is an MD and a healthcare economist.
All right, I'm going to lift up your shirt and listen to your heart.
In most developed countries, we tend to think of medicine as a rigorous science,
and we think of our doctors as, if not infallible, at least reliable.
I think that the typical patient probably does look to their doctor for answers
and they value very highly what that opinion is. But as we've been hearing, the history of medical
science was often eminence-based rather than evidence-based. When did evidence really start
to take over? Evidence-based medicine has
become hugely important in the last 25 to 30 years. The movement is a result, Jena says,
of at least two factors. Number one, we're doing more randomized controlled trials,
and that tells us more information about what works and doesn't work. And number two,
improvements in computer technology have now allowed us to study data in a way that we couldn't have done 30 years ago.
There's also been a movement to collect and synthesize all that research and all those data.
So our vision is to produce systematic reviews that summarize the best available research evidence to inform decisions about health.
That's Lisa Bero, a pharmacologist by training who studies the integrity of clinical and
research evidence.
And I'm also co-chair of the Cochrane Collaboration.
The Cochrane Collaboration was founded in Britain but is now a global network.
The systematic reviews they produce are really the evidence base for evidence-based medicine.
And we've been a leader
in so many ways in developing systematic reviews. We were the first to regularly update these
reviews. We were one of the first to have post-publication peer review and a very strong
conflict of interest policy. And actually, we were one of the first journals that was published
only online. Which means that whatever realm of medical science you're working on,
you can access nearly all the evidence on all the research ever conducted in that realm,
constantly updated, available on the spot.
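As a rough illustration of what it means to summarize evidence across studies, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis, one common way systematic reviews pool results. The three study estimates are invented for illustration and are not drawn from any actual Cochrane review.

```python
# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# Effect sizes and standard errors below are invented for illustration.
import math

# Each tuple: (study name, estimated effect such as a log risk ratio, standard error)
studies = [
    ("Trial A", -0.20, 0.10),
    ("Trial B", -0.05, 0.15),
    ("Trial C", -0.30, 0.12),
]

# Weight each study by the inverse of its variance, so precise studies count more.
weights = [1 / se ** 2 for _, _, se in studies]
pooled = sum(w * eff for (_, eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

The design choice is simple: instead of trusting any single study, the review combines them, weighting each by how precise it is, which is part of what makes a systematic review more reliable than one article read in isolation.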
Compare that to how things used to work,
looking up some five- or ten-year-old medical journal to find one relevant article
that may well have been funded by the pharmaceutical
company whose drug it happened to celebrate. How is Cochrane funded? We're primarily funded
by governments and nonprofits. What about industry money? We don't take any money from
industry to support any official Cochrane groups. Which means, in theory at least,
that the evidence assembled by the Cochrane Collaboration is pretty reliable evidence, as opposed to a whole variety of things.
Opinion, what the doctor had been taught 30 years previously in medical school.
Tradition, what they had been told to do or advised to do by a drug company representative
that had visited them a week previously.
That is Sir Iain Chalmers, who co-founded the Cochrane Collaboration.
He's a former clinician who specialized in pregnancy, childbirth, and early infancy.
He was a medical student in the early 1960s.
When Chalmers observed his elders in practice,
he was struck by how much variance there was from doctor to doctor.
Okay, so some doctors would, if a woman had a baby presenting by the breech,
would do a cesarean section without any questions asked, as it were.
Or they may take different views about the way that the baby should be monitored during labor,
or the extent to which drugs should be used during pregnancy for one thing or another.
So lots and lots of differences in practices.
It's as long as your arm.
It's madness, isn't it?
When he became a doctor himself, Chalmers worked at a refugee camp in Gaza.
And as he discovered,
Some of the things that I had learned at medical school were lethally wrong.
Like how you were supposed to treat a child with measles.
I'd been taught at medical school never to give antibiotics to a child with a viral infection, which measles is,
because you might induce resistance to the antibiotic.
But these children died really quite fast
after getting pneumonia from bacterial infection,
which comes on top of the viral infection of the measles.
And what was most frustrating was that it wasn't until some years later
that I found that there had been six controlled trials
comparing antibiotic prophylaxis given preventatively with nothing,
done by the time I arrived in Gaza.
And those studies suggested that children with measles should be given antibiotics.
But Chalmers had never seen those studies.
So I feel very sad that in retrospect, I let my patients down.
This led Chalmers to embark on a years-long effort to systematically create
a centralized body of research to help attack the incomplete,
random, subjective way that
too much medicine had been practiced for too long.
He was joined by a number of people from around the world, many of whom, by the way, were
more versed in statistics than in medicine.
So we embarked on these systematic reviews, about 100 of us, and that resulted at the
end of the 1980s in a massive two-volume, one-and-a-half-thousand
page book. At the same time, it started publishing electronically.
And so, the Cochrane Collaboration became the first organization to really systematize,
compile, and evaluate the best evidence for given medical questions. You'd think this would have been met with universal praise.
But as with any guild whose inveterate wisdom is challenged,
as unwise as that wisdom may be, the medical community wasn't thrilled.
There was a great deal of hostility to it from, I'd say, the medical establishment. In fact, I remember a colleague of mine was going off to speak to a local meeting of the British Medical Association,
who had basically summoned him to give an account of evidence-based medicine
and what the hell did people who were statisticians and other non-doctors think they were doing,
messing around in territory which they shouldn't be messing around in?
And he asked me before he drove off, what should I tell them?
I said, when patients start complaining about the objectives of evidence-based medicine,
then one should take the criticism seriously. Up until then,
assume that it's basically vested interests playing their way out.
It took a long while, but the Cochrane model of evidence-based medicine did become the new
standard. I would say it wasn't actually until this century. So one way you can look at it is
where there is death, there is hope. As a cohort of doctors who rubbished it moved into retirement and then death, the opposition disappeared.
Yeah, so that's been the slower evolution.
That, again, is Vinay Prasad from Oregon Health and Science University.
The very first studies with randomization concerned tuberculosis.
This was in the late 1940s.
And then from then until really the 1980s,
the end of the 1980s,
we did use randomized trials,
but they weren't mandatory.
They were sort of optional.
One big benefit of a randomized trial
is that you can plainly measure in the data
the cause and effect of whatever treatment
you're
looking at. This may sound obvious, but it is remarkable how many medical treatments of the
past were conducted without that evidence. Anupam Jena again. I think some of the biggest mistakes
in the last century, let's say the first from 1900 to 1950, things like lobotomies to treat
mental illness, either depression or schizophrenia,
those strike me as being some of the most horrific things that could be done to man
without any really solid evidence base at all.
This is one of the trickiest things about practicing medicine day to day.
Let's say you're a doctor and a patient comes to see you with a persistent headache.
You make a diagnosis and you write a prescription.
What happens next?
In many cases, you have no idea.
The feedback loop in medicine is often very, very sloppy.
Did the patient get better?
Maybe.
They never came back.
But maybe they went to a different doctor.
Maybe they died. If they did get better, was it
because of the medicine you prescribed? Maybe. Or maybe they didn't even fill the script. Or maybe
they did fill the script, but stopped taking it because they got an upset stomach. Or maybe they
did take the medicine and they did get better, but maybe they would have gotten better without the medicine.
Like I said, you have no idea.
But with a well-constructed, randomized, controlled trial,
you can get an idea.
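To illustrate why randomization tightens that feedback loop, here is a minimal sketch using simulated patients rather than real trial data: random assignment lets you estimate the treatment effect as a simple difference in recovery rates between the two arms.

```python
# Minimal sketch of analyzing a randomized controlled trial with simulated data.
# Patients are randomly assigned to treatment or control, then recovery is compared.
import random

random.seed(42)

def simulate_patient(treated: bool) -> bool:
    """Return True if the (simulated) patient recovers."""
    base_rate = 0.50                      # assumed recovery rate without treatment
    effect = 0.15 if treated else 0.0     # assumed boost from the treatment
    return random.random() < base_rate + effect

n = 1000
assignments = [random.random() < 0.5 for _ in range(n)]   # coin-flip randomization
outcomes = [simulate_patient(t) for t in assignments]

treated_n = sum(assignments)
treated_rate = sum(o for o, t in zip(outcomes, assignments) if t) / treated_n
control_rate = sum(o for o, t in zip(outcomes, assignments) if not t) / (n - treated_n)

print(f"Recovery, treated arm:  {treated_rate:.1%}")
print(f"Recovery, control arm:  {control_rate:.1%}")
print(f"Estimated treatment effect: {treated_rate - control_rate:+.1%}")
```

Because the coin flip, not the doctor or the patient, decides who gets treated, the two arms are comparable on average, and the gap between their outcomes can be read as the effect of the treatment itself.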
Vinay Prasad again.
The moment I think in my mind that kind of set us on a different course was a study called CAST.
CAST stands for Cardiac Arrhythmia Suppression Trial. It was
conducted in the late 1980s. CAST was a study that one of the things doctors were doing a lot for
people after they had a heart attack was prescribing them an antiarrhythmic drug that was supposed to
keep those aberrant rhythms, those bad heart rhythms, at bay. That drug actually, you know, in a carefully done randomized trial, turned out
not to improve survival as we all had thought, but to worsen survival. And that was a watershed
moment, I think, where people realize that randomized trials can contradict even the best
of what you believe. It really doesn't matter in medicine that the smartest people believe
something works. The only thing that really counts is what is the evidence you have that it works.
The rise of randomized controlled trials led to a rise in what are called medical reversals. Vinay Prasad wrote the book on medical reversals, literally. It's called Ending Medical Reversal. A reversal happens when a practice is widely adopted and thought to be beneficial. And then one day, a very seminal study,
often better designed, better powered, better controlled than the entirety of the preexisting body of evidence,
it contradicts that practice.
It isn't just that it had side effects we didn't think about.
It was that the benefits that we had postulated
turned out to be not true or not present.
For instance, in the 1990s,
we would recommend to postmenopausal women
to start taking
estrogen supplements because we knew that women before they had menopause had lower rates of heart
disease. And we thought that was because of a favorable effect of estrogen. And then in 2002,
a carefully done randomized control trial found that actually it doesn't decrease heart attacks
and strokes. In fact, if anything, it increases them. I asked Prasad what first got him interested
in studying medical reversal. So I think I started to get interested in this even when I was a student, and I saw that
there were some practices that had been contradicted just in the recent past, but were still being done
day in, day out in the hospital. I mean, the example that comes to mind is the stenting for
stable coronary angina. A stent is a little foldable metal tube that goes in a blocked
coronary artery, and the doctor springs it open, and it opens up the blockage. And stents are incredibly valuable
for certain things. If you have a heart attack and there's a blockage that just happened a few
minutes ago and the doctor goes in and opens that blockage up, we're talking about a tremendous
improvement in mortality, one of the best things we do in medicine. But stenting, like every other
medical procedure, has something called indication drift where, yeah, it works great for a severe
condition, but does it work just as good for a very mild condition? And so over the years,
doctors had used stenting for something called stable angina. And stable angina is just that slow,
incremental narrowing of the arteries that happens to, sadly, all of us as we get older.
But the bulk of stenting was this indication drift, and we thought it worked. It made perfect sense. And then in 2007, a well-done
study showed that it actually didn't improve survival and didn't decrease heart attacks,
which, even to this day, studies show that most patients who undergo this procedure believe
it will do those things. And in fact, it's been disproven for eight years.
And yet, while stenting for stable angina did decline, it didn't disappear. The rate of
inappropriate stenting, Prasad says, is still way too high. Now, this obviously starts getting into
doctors' incentives, financial and otherwise, and we'll get more into that in parts two and three
of this series. As Prasad makes clear, there is a long,
long list of medical treatments that simply don't stand up to empirical scrutiny. Some common knee
surgeries, for instance, where orthopedic surgeons take a tiny camera, make a tiny incision, and go
in there and actually sort of debride and remove those sort of scuffed and scraped knees. And in
fact, people sort of felt a lot better.
They had improved range of motion.
There's no argument there.
But you've studied it against maybe just taking ibuprofen
or maybe just doing some physical therapy.
What if you studied it against making the patient believe
that you were doing the surgery, but you don't actually do it?
And in fact, they've done those studies.
Those are called sham studies.
We give the appearance that we're going to do this procedure.
And the only thing we omit is actually the debridement of the menisci and the cartilage.
And in fact, when you do it that way, you find that the entire procedure is a placebo effect.
There's another example where we use sort of a cement we inject into a broken vertebral bone.
And that, again, was found to be no better than injecting a saline solution in a sham procedure.
And the cement
itself costs $6,000. And I said, you know, at a minimum, you can save yourself the $6,000 and you
don't need to use the cement. What would be the incentives for me to do the study that might
result in a reversal? Because we know how publishing works, whether it's in your field,
in any academic field, or in the media as well. It's the juicy, sexy new findings that get a lot of heat.
And it's the maintenance articles or the reversal articles that nobody wants to hear about.
So I would gather that there are fairly weak incentives to doing the studies that would
result in reversals, which also makes me wonder if there is a woeful undersupply of such studies,
which means that there probably would be even
more reversals than there are. Yeah, so I think that's a fantastic question. One of the things
we did in the course of our research was we took a decade worth of articles in probably one of the
most prestigious medical journals, the New England Journal of Medicine. And there's about maybe 1300
articles that concern something doctors do. About 1000 of those articles were something new,
something that, you know, is coming down the pipeline: the newest anticoagulant, the newest mechanical
heart valve. And if you tested something new, exactly as you'd expect, 77% of those published
manuscripts concluded that what's newer is better. But we also discovered about 360 articles tested
something doctors were already doing. But if you tested something doctors were already doing,
40% of the time we found that it was contradicted or a reversal.
I'd love you to talk about the various consequences of reversals,
including perhaps a loss of faith in the medical system generally.
So if you find out something you were doing for decades is wrong,
you harmed a lot of people.
You subjected many people to something ineffective,
potentially harmful, certainly costly, and it didn't work. The second harm, we say, is this
lag time harm. Doctors, you know, we're like a battleship. We don't turn on a dime. We continue
to do it for a few years after the reversal. And the third harm is loss of trust in the medical
system. And that's the deepest harm. And I think we've seen it in the last decade, particularly
with our shifting recommendations for mammography and for prostate cancer screening, where people
come to the doctor and they say, you guys can't get your story straight. What's going on? It's a
tremendous problem. And I'm afraid that probably what we are doing is we are making people feel
like there is nothing that the doctor does that's really trustworthy. And I'm afraid that that's sort of the deepest problem that, you know, we face, this loss of trust.
Okay, so how do you not throw out the baby with the bathwater?
What are some solutions to a practice of medicine and medical research that results in fewer reversals?
So that is a million-dollar question.
One is medical education.
You know, we have a medical education where for two years,
students are trained in the basic science of the body.
Only in the latter years, the third and fourth year of medical school,
are students trained in the epidemiology of medical science
and evidence-based medicine and thinking not just how does something work,
but what's the data that it does work.
And, you know, I've argued that that needs
to be flipped on its head, that the root, the basic science of medical school is evidence-based
medicine. It's approaching a clinical question, knowing what data to seek and how to answer that
in a very honest way. So that's one. The next category is regulation. And this is where you
get into, you know, what is the FDA's role and what does the FDA do? And I think many people
in the community hope that the products that are approved by the FDA are both safe and efficacious for what they do.
But, you know, we were faced with a problem in the 80s and 90s that we had never faced before, which was the HIV AIDS epidemic.
And advocates rightly said that we need a way to get drugs to patients faster, maybe even accepting a little bit more uncertainty.
And I think that was right. And I think that's still right for many conditions that are very dire, for which few other treatment
options exist, and which sometimes have very low incidence. So it's very hard to do those studies
because very few people have it. But what's happened is that mechanism has been extrapolated
to conditions that are not dire, that have very good survival, that don't have few options,
that actually have many options, and that many people do have. So we've had, again,
sort of a slippery slope for what qualifies for this accelerated approval. So I think there's
ways in which regulation can be adjusted. And then I think the last thing is the ethic of
practicing physicians. We have to have an ethic where when we offer something to someone
and there's uncertainty, we should be very clear about communicating uncertainty. I think it's a
tragedy today that, you know, no matter what you think of stenting for stable coronary artery
disease, it's a tragedy that so many people who are having it done believe something that is
clearly not true, that it lowers the rate of heart attacks and death. That's just factually not true.
And the fact that many people believe that, I think speaks to the fact that as doctors,
we allow them to believe it.
And let me ask you one last question.
I have a pretty good sense of having spoken to you now
for a bit of what has prevented in the past medicine
from being more scientific or more evidence-based,
but what do you believe are the major barriers that are still preventing it from becoming as evidence-based as you'd want it to be?
So we should be honest about what medicine is. And in the United States, medicine
is something that now takes nearly or over 20% of GDP. It's a colossus in our economy. We spend more
on medicine than any other Western nation. We probably don't get as much from it from what
we're spending because it's such a large sector of the economy. The entrenched interests, the people who, the companies and the people who really profit from
the current system are tremendously reluctant to change things. I think we see that with just for
one instance, the pharmaceutical drug pricing problem we're having right now. I think no one
will doubt that the pharmaceutical industry has made some great drugs. They've also made some
less than great drugs. But does every drug, great or worthless, have to cost $100,000 per year?
And I don't invent that number.
That's actually the cost per annum of the average cancer drug being approved in the United States in the last year, well over $100,000 per year of treatment.
I think there's got to be a breaking point and people are recognizing that.
Next week on Freakonomics Radio, in part two of Bad Medicine: how do those great drugs and the less than great ones get made?
And then how do they get to market?
We'll look into the economics of new drug trials and how carefully the research subjects are chosen.
Now, that's very useful for a company
that are trying to make their treatment look like it's effective.
But does the population of people in this randomized trial
really reflect the real-world people out there?
We'll look at who's been left out of most clinical trials.
That suggested that women shouldn't be included in clinical trials because of the potential
adverse events to the fetus.
And how sometimes the only thing worse than being excluded from a medical trial was being
included.
The use of vulnerable populations of African-Americans, people in prison, children
in orphanages, vulnerable populations like these have been used for medical experimentation for
a fairly long time. That's next time on Freakonomics Radio.
Freakonomics Radio is produced by WNYC Studios and Dubner Productions.
This episode was produced by Stephanie Tam.
Our staff also includes Allison Hockenberry, Merritt Jacob, Greg Rosalsky, Eliza Lambert, Emma Morgenstern, Harry Huggins, and Brian Gutierrez.
You can subscribe to Freakonomics Radio on Apple Podcasts, Stitcher, or wherever you get your podcasts.
You should also check out our archive at Freakonomics.com, where you can stream or download all our episodes. You can also read the transcripts
and find links to all the research and books we've mentioned in this episode. You can also
find us on Twitter, Facebook, or via email at radio at Freakonomics.com. Thanks for listening.