American Thought Leaders - Why Many Scientific Findings Don’t Hold Up Under Scrutiny: Emily Kaplan
Episode Date: July 12, 2024

Sponsor special: Up to $2,500 of FREE silver AND a FREE safe on qualifying orders. Call 855-862-3377 or text "AMERICAN" to 65532.

"We don't know who the peer reviewers are. Imagine opening a newspaper and there's no masthead. You don't know who any of the editors are. There's no accountability for why this work got through or not."

In this episode, I sit down with Emily Kaplan. She is an investigative journalist, author, business leader, and passionate advocate for women's health and for a return to what she calls "the roots of true scientific exploration."

"We're one of two countries where you can directly market pharmaceutical products to consumers," says Ms. Kaplan.

Today, she is the co-founder and CEO of the Broken Science Initiative, an alternative approach to health and science that promotes predictive value, access to data, and prioritizing patient welfare over profit.

"My sense is we're not going to fix these big systematic problems, but you can empower the individual to critically think about things and allow them to make better choices for themselves and their family. So, that's really the goal of the Broken Science Initiative," says Ms. Kaplan.

We discuss various forms of scientific misconduct, touching on peer review, statistical manipulation, and the over-medicalization of childbirth.

"The highest predictor of whether you're going to have a C-section or not is what hospital you deliver in. It's not you. It's not your doctor. It's the hospital," says Ms. Kaplan.

Views expressed in this video are opinions of the host and the guest, and do not necessarily reflect the views of The Epoch Times.
Transcript
The highest predictor of whether you're going to have a C-section or not is what
hospital you deliver in. It's not you, it's not your doctor, it's the hospital.
Emily Kaplan has been an investigative journalist, a business leader, and a
passionate advocate for women's health. Today she's calling for a return to the
roots of scientific exploration. We're one of two countries where you can
directly market pharmaceutical products to consumers. She is the co-founder and CEO of the Broken Science Initiative.
We're not going to fix these big systematic problems,
but you can empower the individual to critically think about things
and allow them to make better choices for themselves and their families.
So that's really the goal of the Broken Science Initiative.
This is American Thought Leaders, and I'm Jan Jekielek.
Before we start, I'd like to take a moment to thank the sponsor of our podcast,
American Hartford Gold. As you all know, inflation is getting worse. The Fed raised
rates for the fifth time this year, and Fed Chairman Jerome Powell is telling Americans
to brace themselves for potentially more pain ahead. But there is one way to hedge against inflation. American Hartford Gold makes
it simple and easy to diversify your savings and retirement accounts with physical gold and silver.
With one short phone call, they can have physical gold and silver delivered right to your door
or inside your IRA or 401k. American Hartford Gold is one of the highest rated firms in the country
with an A-plus rating
with the Better Business Bureau and thousands of satisfied clients. If you call them right now,
they'll give you up to $2,500 of free silver and a free safe on qualifying orders. Call 855-862-3377 or text American to 65532. Again, that's 855-862-3377 or text American to
65532. Emily Kaplan, such a pleasure to have you on American Thought Leaders.
Thank you for having me. I'm thrilled to be here.
Well, you have an initiative called the Broken Science Initiative, and this is something I've been thinking about for a long time.
I mean, certainly over the last four or five years.
There are real problems in what's being accomplished in science today,
what's being portrayed as science that isn't really science,
and even this replicability crisis that John Ioannidis pointed out in his huge paper on
the topic: that a lot of science just simply doesn't pan out.
You couldn't demonstrate it again, which is a foundational issue.
Tell me about what's going on.
We think of science as the empirical branch of
knowledge, right? It should be where we look most for truth, knowing we're never going to find
ultimate certainty or ultimate truth, but it's the pursuit of that. And I think when you look
at things like the replication crisis, these are really important symptoms of a larger break.
So we're focusing a lot on medicine, but there are systemic problems because I think our
feeling is that predictability has been replaced by consensus.
And so you have sort of this groupthink model that allows for really easily, you know, statistical
manipulation which we can get into, as well as corruption.
The pharmaceutical industry spent more money lobbying Congress last year than any other
industry. So, I mean, that's huge, more than manufacturing, more than finance. It kind of
blows your mind. And if you look at the spending by the pharmaceutical companies on media, you see
a very similar playbook. We're one of two countries where you can directly market
pharmaceutical products to consumers. All of this falls under this idea of broken
science in the sense that, and this isn't political, right? And so I think people
try to move this into the political sphere because it becomes, people feel
very defensive about it. You want to trust your doctor. Your doctor didn't go
to medical school to mistreat you.
But I think the power structure has been inverted.
And so we're seeing all these symptoms.
And so I think one of the things, Greg Glassman, who's my partner on Broken Science, and I have both been looking at these problems for about 20 years. And so when COVID happened and there was this sort of just, you know, real polarization and disagreement about, you know, what you could trust or what information was valid,
I actually think it was sort of a gift because I think it basically brought science to the dinner
table in ways we hadn't seen before. And a lot of people woke up to this notion of regardless of
which side you're on, that they didn't really
know who to trust. And my sense is we're not going to fix these big systematic problems,
but you can empower the individual to critically think about things and allow them to make better
choices for themselves and their family. So that's really the goal of the Broken Science
Initiative is to expose these problems. We have a tremendous amount of scientific misconduct that's
going on. I mean, we lost Harvard's president and Stanford's president within six months of each
other to charges of scientific misconduct. This is a real issue that I think Americans need to
inform themselves on. We're going to launch an education society in the fall. Kids should
understand the importance of asking good questions and challenging authority in a polite, respectful
way. But the only way that we make progress scientifically or as a society is trying to
think of better ways to accomplish things and solve problems. And I fear we've gotten away from
that. And I think our, you know, sort of the isolation and polarization that we see politically
is, it's terrifying. I think we lose curiosity about things that we may have confidence we understand, but if you're not open to listening
to other people's ideas, we're all in big trouble. And truth is, we all want better for our kids than
we want for ourselves, right? I don't care what political party you're involved in. From my stance,
health is a root of happiness. And so this is a common denominator
for all of us. And the chronic illness epidemic, which Bobby Kennedy is taking on in a very
thoughtful way, and I said I wasn't going to get political, but I do think he deserves a lot more
attention. He has successfully sued many branches of the government. He knows how they work. He
knows where the corruption lives. But the chronic disease epidemic in this country, I think, is our
number one vulnerability. And there's a lot of stuff that doctors and patients can learn and do,
and moms and dads can help their kids in terms of lifestyle changes and getting the sugar out
of the diet. Some sort of rudimentary things that we don't really think of as medicine because they're preventative, but they're predictable.
And so again, like at the base of the Broken Science Initiative is this idea of predictable
outcomes as the demarcation between science and not science. So I think you've created a kind of
a roadmap here for us for what we're going to do today. Because we need to talk about predictability, absolutely.
So you mentioned a few things. You mentioned critical thinking, you
mentioned openness to new ideas, you mentioned this distinction
between science by consensus versus science by predictability. So explain that distinction to me.
It might not be obvious to everybody.
Sure.
So, I mean, I think consensus is just we all agree, right?
Let's take a vote.
There is no voting in science.
And so that should stop us right there.
But what do you mean there's no voting in science?
I mean, I think a lot of people would say, hey, like, if you get the smartest people
in the room and these people vote that it's this way, then it probably is.
Doesn't that make sense?
Well, no.
There's a Supreme Court case.
It's actually, I think, three cases that make up the Daubert standard.
And it determines who can be a scientific expert in a court case.
And two of those tests are you have to have been published in peer review
and you have to be accepted by the scientific community.
Now, let's think about that for a second, right? If you take things like space exploration,
is that not science? A lot of that's top secret. It's not been peer reviewed.
Are those people accepted within their scientific community? Yes. But what about the person who
builds some rocket and blasts it off from their backyard and they hit their target and
then they do it again, they're not a scientist because nobody knows that they're doing that.
So we have these sort of ways of thinking and creating standards or like a litmus test for
how we define science. And I don't think that definition is right. We have this with peer
review profoundly where I think there's a general confusion about
peer review being similar to good journalism. Peer review is consensus, right? It's a group
of people who have basically decided that if you get a p-value that's a statistically significant
result, that that means that you have validated your hypothesis. And the way the p-value works, that's not right, actually.
You're just looking at the data sets of the null hypothesis
and comparing them to the data you gather through the intervention.
And you're saying, is there a relationship between these two, right?
Did we prove or disprove the null hypothesis?
There's no testing the null hypothesis.
That's an assumption.
And you're not saying anything about your hypothesis with a p-value.
You're also not saying that you can replicate the work, which from the Broken Science Initiative,
that's the standard.
You have to be able to replicate your results so that you know that you have a predictable
outcome.
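To make that point concrete, here is a minimal simulation, added as an illustration rather than anything from the interview: when the null hypothesis is true by construction, a conventional t-test still produces "statistically significant" p-values about 5% of the time, which is the sense in which a significant result alone does not validate a hypothesis.

```python
# Minimal sketch: a p-value measures how surprising the data would be
# IF the null hypothesis were true; it says nothing about whether your
# own hypothesis is right, and "significant" results appear by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trials, significant = 1000, 0

for _ in range(trials):
    # Both groups come from the SAME distribution: the null is true.
    control = rng.normal(loc=0.0, scale=1.0, size=30)
    treated = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(control, treated)
    if p < 0.05:
        significant += 1

# Expect roughly 50 of the 1000 runs to cross p < 0.05 anyway.
print(f"{significant}/{trials} runs came out 'statistically significant'")
```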
So, you take that as a standard to get published.
You have to have a significant p-value.
You have peer reviewers who are not paid. They're told when they get a manuscript to assume that there's no scientific misconduct and that all the sort of facts and information are right. I've had peer reviewers tell me that they're also often told, don't comment on the design of the study, because the design of the study has been concluded.
So this is fascinating. Let me jump in here. As someone who was trained in experimental design some decades ago, I haven't been doing
it for a while, but it's actually quite difficult to come up with a very elegant design.
And one thing that I discovered years ago, and this actually made me very concerned,
was that I could really stack things in the favor of getting a result that I kind of hoped
would be the result if I designed things in a particular way.
My point is the design is incredibly important.
It's very easy for a skilled experimental designer to help the person that's funding
them, for example, get the idea they want, even sometimes perhaps unconsciously. Right. Because, you know,
when your funding depends on something, you're a bit conflicted, right?
Right. Part of the scientific misconduct that we're seeing, so we had this big
scandal that happened at Dana-Farber, which is the Harvard cancer research institute, a preeminent hospital in the United States, where they were copying and pasting
images. So they were taking an image that was day one of the control group, and they were pasting
it into later in the intervention group, as if to say the tumors didn't grow, we've suppressed
tumors. I mean, that's obviously not a mistake, right? And that's published. And so it gets
through these gatekeepers. Now, is that a design failure? Well, it's an execution
failure. And, you know, I think there's so much of this kind of image manipulation that's going on.
And it doesn't seem like anyone's actually looking at the images except for these sort of
image sleuths like Elisabeth Bik, who's a big hero of mine. And they're all doing this on PubPeer and it's a hobby for them. But my point is that the statistical significance is so easy
to manipulate that you can come up with whatever outcome you want. And we're not going a step
farther to say, okay, you're claiming this is a statistically significant result, but let's look
at the images. What do the images say? And some of the stuff, you don't need sophisticated technology.
You can literally see that it's been copied and pasted. Why is this not something
that's causing just complete unrest in our society? You have these major, major institutions
committing real fraud and the outcome for the patients isn't really being considered.
So you're a doctor. I mean, this is another part of this consensus. And you hear something's coming from Harvard Medical School.
Of course, I'm going to trust it.
I'm not going to challenge that.
I mean, there's even statistical things that are really interesting, like intention to
treat analysis is one that I think, you know, we do this thing called Journal Club, where
you can go on an online Zoom call and we take apart studies.
So we're looking at real hallmark studies and then finding how the information is being portrayed
in the article versus what you really know.
And intention to treat is very common
and it basically means like you take a snapshot
of day one of a trial or some part
and then you have people who drop out of the study
or maybe they die.
So you take their data and you basically supplant it for the time that they're not in the trial,
assuming that it was the same result consistently across that time.
So we just did this.
We were looking at an exercise and aging study that was out of Norway.
And they did this.
So they have people who drop out of the study and they just assumed they kept exercising.
Well, we don't know.
Maybe they did.
Maybe they didn't.
They weren't a part of the study anymore. These are the kinds of things where I feel like we've taken
something that in its purest sense is about drilling down on uncertainty, right? So you want
to be less wrong. And then we've developed all of these sort of statistical mathematical tools
that have made this far more complex. I mean, I think, you know,
there's this whole idea of like person years, which is another statistical thing that will
just make your brain mad if you try to figure out what it is. And it's a way of estimating,
but it doesn't end up being very accurate. And we're not going back and rethinking how do we do
these, you know, what statistical tests are actually meaningful.
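The fill-in she describes for dropouts is usually implemented as "last observation carried forward" (LOCF) imputation within an intention-to-treat analysis. A minimal sketch, with entirely invented data, of how that fill-in works:

```python
# Toy sketch of "last observation carried forward" (LOCF), the kind of
# fill-in described above for intention-to-treat analyses: once a
# participant drops out, their last recorded value is reused for every
# remaining time point, assuming (without evidence) nothing changed.
import pandas as pd

# Weekly exercise hours; NaN marks weeks after a participant dropped out.
data = pd.DataFrame(
    {
        "week1": [3.0, 2.5, 4.0],
        "week2": [3.5, 2.0, None],   # participant C has dropped out
        "week3": [3.0, None, None],  # participant B has dropped out
    },
    index=["A", "B", "C"],
)

# Forward-fill along each row: B's week2 value stands in for week3,
# and C's week1 value stands in for weeks 2 and 3.
imputed = data.ffill(axis=1)
print(imputed)
```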
And I think part of that is because they've become, as Gerd Gigerenzer, who's a friend of
ours and has spoken at some of our events, he's at Max Planck and he calls it a ritual.
And I love that because it basically, it only has the meaning that we're putting into it.
It's not actually a stopgap. It's not actually a safeguard against any of this stuff.
And I think when you're talking about peer review,
you also have this notion of like,
we don't know who the peer reviewers are.
Imagine opening a newspaper and there's no masthead.
You don't know who any of the editors are?
There's no accountability
for why this work got through or not.
Another great example of all of this was
some research that Begley and Ellis did with Amgen, where they realized that cancer and
hematology drugs weren't as effective as they should be. I think they tested or they tried
to replicate 53 trials and they went to great lengths to do this. So they worked with the
original researchers. They tried to recreate the environment so that all the variables were
as close and as stable to the original work. And they could only replicate 11. So 11 out of 53
hallmark cancer and hematology studies, that's all they could replicate, going to extreme cost
and length to get this done. Now, the other thing they did was that they promised the researchers
anonymity because they needed their buy-in, right?
They needed them to help.
And so the researchers know that their work couldn't be replicated, but none of those
have been retracted.
So those studies still live in high-impact journals as though they're sound research,
even though the people involved know their work couldn't be replicated.
Amgen knows, but they haven't shared it.
I mean, that's fascinating. And of course, I referenced John Ioannidis's work here. I mean,
his sort of giant analysis, right? Meta-analysis, which is just kind of shocking. I can't remember
the exact statistics right now, but it was so low. Yeah. I mean, it's called "Why Most Published Research Findings Are False." Right. And I
think, you know, that's shocking to people. But I also think if you look at people like Marcia Angell, who was the editor-in-chief of the New England Journal of Medicine, Richard Smith, who was the editor at the BMJ, and Richard Horton, who is the head of the Lancet. These are top, top, top medical journals. They all three independently have come out and basically said, like, we cannot trust anything in these publications. And I mean, Marcia Angell wrote a book about it. It hasn't changed anything. So I mean, when I say I have no hope for the sort
of systematic way that we're handling these things, it's because of that. You have people
who are calling foul on their own industry at the highest position and nothing changes. Why is that really?
You know, Greg and I, he sold CrossFit in 2020 and we've spent about four years trying to really
find the root cause rather than looking at these symptoms, right? So there's a lot of organizations
who are doing great work looking at, you know, research that won't replicate or looking at COVID. What would you call, so lay out the symptoms to
me, because I think that for a lot of people, what you call a symptom, they might see it as a cause
even. So, you know, there's a lot of people who are really focused on like what happened with COVID
and, you know, this shows that science is broken or that medicine has been corrupted or captured.
And I think that's a
symptom. I think the amount of money going to the government and to the media from pharmaceutical
companies is a symptom. I think the fact that, I mean, for me, I was interested during COVID that
Fauci and Collins control all the money. They should not be involved in policy. I mean,
that's such a conflict of interest, right? So if somebody disagrees with you,
they're potentially not going to be funded. That should be church and state.
That's a symptom. I think peer review is a symptom. I think this Daubert case, which I would love to see overturned, is a symptom. From our perspective, really looking at the philosophy
of science, this goes back to Karl Popper and his denial of induction.
Induction in a very clear or simple way is being able to take information from the past or that
you know and apply it to a future sort of prediction. And so there was concern. I mean,
Hume actually is the first one who sort of calls into question induction. And it's reasonable to
do that. There's a bias that's inherent if you take
your past information and you apply it to something in the future. However, you're not going to ever
go in and get the best outcome if you don't take into account, let's say, someone's medical history.
So you're looking at an image or an MRI. You want to know, well, what led this person to get the MRI?
That's hugely important. That's all inductive reasoning.
But what's happening is that this has sort of led to this frequentist approach where you're really like ones and zeros, yes and no, very binary. And our goal is to, you know,
sort of return predictive value to science because it isn't a one or a zero. It's a scale.
And you want to know how close you are to certainty.
Let me see if I'm getting you right.
So let's say someone is getting an MRI, right?
And so basically, the way these MRIs are looked at
don't involve the patient's history, the way
they're being compared or assessed.
They do.
Right.
No, so that's an example of how we absolutely need to take
the patient history into account moving forward. But I think what has happened more along the lines
of this denial of induction is this notion of we have to be unbiased about how we process
information. So this is where a lot of the statistical tests come from. So null hypothesis significance testing is really a product of this frequentist mindset. A frequentist, that's a word that won't make sense to a lot of people. There's a frequentist approach, which is basically like,
we're going to have certainty about things. We're going to be, again, it's like in computer
language, it would be like one or zero, right? And then there's a Bayesian approach and we're more in the Bayesian camp,
which is about predictive value. And it's actually that if you use Bayes' theorem,
you can test the hypothesis outcome. So let me think of an example. I actually just did
something on Instagram looking at mammograms. So people often talk about mammogram
sensitivity and specificity, which is just sort of the rate of false positives or true positives
or false negatives, right? And that's not telling you anything about you. It's telling you about the
test. What do I as a patient want to know? I want to know, if I have a positive mammogram, what is the likelihood that I have breast cancer. So I need to have the prevalence
rate, which is prior information, right? That's larger information than just what I'm looking at.
So sensitivity and specificity, all you need to know is the outcomes of the test.
But if you want to know something for the patient, you really need to know what the prevalence rate
is. And then you can come up with positive predictive value. Suppose you get a mammogram
and receive a positive result.
What is the probability that you actually have breast cancer?
Let's calculate it.
Positive predictive value is a measure of how often someone who tests positive for a disease
actually has the disease.
Positive predictive value can also be expressed as a conditional probability.
The probability you have breast cancer given you have a positive mammogram. We can use Bayes' theorem to calculate the positive
predictive value. First, we need to know how well mammograms correctly diagnose cancer. This is
known as sensitivity. It is the percentage of true positives out of all the mammograms done.
Sensitivity for a mammogram is around 84%. Next, we need to know
how common breast cancer is in the population. This is the disease prevalence represented in
our formula with the letter P. For breast cancer, 1.25% of women have it. Next, we need to know the
percentage of the time a mammogram correctly identifies an individual who does not have a disease. A test with a high specificity means there are few false positives.
For mammograms, around 91% of the time the test is negative when the person in fact
does not have cancer. So the specificity of mammograms is 91%. Let's put it all together.
So what does this mean?
If you get a mammogram and receive a positive result, the probability that you actually have breast cancer is only 10.56%.
At first, this may seem shocking, but let's think about it.
If 1.25% of women have breast cancer and a mammogram gives a false positive 8.88% of
the time, for an uncommon disease, most of the positive test results will be wrong.
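The narrated calculation is a direct application of Bayes' theorem. Here is a short sketch using the figures quoted above; the 10% high-risk prevalence at the end is a hypothetical number, added only to show how a stronger prior changes the answer, which is the refinement she goes on to describe.

```python
# Positive predictive value via Bayes' theorem, using the numbers
# quoted above: P(cancer | positive mammogram).
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_pos = sensitivity * prevalence                # P(+ and cancer)
    false_pos = (1 - specificity) * (1 - prevalence)   # P(+ and no cancer)
    return true_pos / (true_pos + false_pos)

# Figures from the explainer: sensitivity 84%, specificity 91%,
# prevalence 1.25% -> about 10.6%, in line with the ~10.56% quoted.
print(f"General population: {ppv(0.84, 0.91, 0.0125):.2%}")

# Hypothetical higher-risk cohort (e.g., genetic predisposition):
# a 10% prior prevalence is an assumed figure, for illustration only.
print(f"High-risk cohort:   {ppv(0.84, 0.91, 0.10):.2%}")
```

With the assumed 10% prior, the same positive result implies roughly a 51% chance of cancer instead of about 11%, which is why the prior, not just the test's accuracy, drives what a result means for a given patient.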
Now, that's hugely important.
Nobody tells patients that.
Right.
But the amount of stress it causes, I mean, say, like, you could do this for anything, right?
Like, you get an AIDS test, and you're like, am I dying?
I'd like to know that.
And it might be two weeks before I can go back and get another test.
Right.
So wouldn't it be nice to know the likelihood that you actually are dying?
Five, ten percent.
I don't care about the sensitivity and specificity of the test other than how it relates to me.
That requires predictive power, for which you need to have other information.
And with Bayes' theorem, truly, with something like the mammogram or the AIDS test, you can
refine it.
So you could say, like, I have a genetic predisposition.
So we don't want the general prevalence rate.
We actually want my cohort rate, right?
So I'm more likely than the general
population to have breast cancer. So then you factor that in and you'll still be able to figure
out how likely this test is to predict that I have it. There's things like that that we're not doing
that we could be doing. The other thing for me is that I don't want to take, I'm not trying to like
take down medicine, right? I think doctor morale is at an all-time low and I understand why. I
think they got into this profession to heal and to treat and they have no time with patients.
So, I mean, I think, you know, I have a friend who's a doctor who says that it used to be when
the doctors walked down the hallway, the administrators would like run and hide out of
fear, right? And now it's the other way around. And it's because the doctors are being yelled at for not filing the form right or not coding something right. They're not
accountants. That's not why they got into this. They got into this to hold the hand of the patient
and help the person heal or prevent illness. And we've taken that power away from them. So,
I mean, I like to remind doctors they're the only one with any moral authority.
They take the Hippocratic Oath and that really means something. And we,
as patients, are absolutely dependent on them because the pharmaceutical companies are beholden
to their investors. They have a fiduciary responsibility to deliver returns. So do
hospital systems, even academia. So the only safeguard I see in the system is the doctor who needs to be able to stand up and say,
you know what? If I put you on this drug, you're going to get these side effects. And then we're
going to have to put you on that drug. And then you're going to have these side effects. And then
we're going to put you on that drug. So let's just try something different. Let's see if we can get
50 pounds off of you and how that goes. Then maybe you won't need any of these. That's a conversation I think we
would all value and respect. And it's not happening to near the degree that it should be happening.
And I think you see this in federal funding. I don't remember exactly what the statistic is,
but let's say it's more than 80% of medical research, and this is our taxpayer dollars, goes to treatment,
not prevention. I think like, let the pharmaceutical companies pay for the treatment.
Why doesn't the government pay for the prevention? You're working for me, right?
I don't want the treatment. I'd like to prevent the disease.
You know, what you're describing made me think of, you know, like holistic approaches to health or
functional approaches to health.
Before we go there, tell me a little bit about yourself because how is it that you got into all
this? Actually, you have a journalism background as well. Yep. So I got a master's at Northwestern
in journalism and I've written for newspapers, magazines. I worked at 20/20 and Primetime,
mostly covering murder. And I think I've always been just very curious about problems, corruption, how things work
or fall apart, the interplay between different groups, and thinking about how to take complex
information and make it accessible to people.
And I think telling truth to power was really important.
And you weren't trying to be friendly with the power set
when I was coming up as a journalist.
I had a lot of really great mentors that were really rigorous with me.
And then I've launched a couple of startup companies and helped with that,
which I love, and I think it's actually quite like journalism
in that you have to be kind of scrappy and resourceful,
compile lots of different data and figure out what's going to work
and what's not going to work. Those companies all had a heavy tech focus. And then Greg Glassman
was somebody who I had been working on a long form story about. He'd sued the CDC. He'd taken
on the NSCA, which was the rival, you know, sort of personal training certification company. And
they had used peer review to publish a journal article in this sort of preeminent exercise
physiology journal that said that CrossFit caused injuries. And Greg had recognized that they'd
falsified all their data. So he sued, he won, a federal judge called it the biggest case of
scientific misconduct and fraud she'd ever seen in all her years on the bench. And I was in the process of writing up that story and he was canceled. So he was called a
racist because he put out a tweet after George Floyd was murdered. Now this whole tweeting situation
happens because the IHME was the modeling body for COVID. And Greg is a math guy, was raised by a
rocket scientist. All of CrossFit, his methodology
that he designed and developed was based on Newtonian physics. And so he recognized during
COVID that these models were wrong. We didn't have a death rate. We didn't have a denominator.
How are we making these projections? And so again, predictive value. So in the United States,
it was the IHME that was doing that work. And he had been tweeting at them for a while, being like, guys, your math is wrong.
You're leading us to financial despair in this country with these policies that aren't
based on solid math.
So when they came out, they said that they were going to start modeling racism as a public
health issue.
And I think Greg lost his mind.
He thought these guys have led us into quarantine, right? Lockdown is going to
disproportionately impact minority groups. Why in the world would we trust you all to model racism?
It's too important of an issue. And so he wrote, what is this Floyd 19? And then had this quote
from this medical journal that nobody bothered to put into Google because they would have found the
article, but they didn't. And I think because he's a 60-something-year-old white man,
it was easy to just call him a racist.
And so I was working on the story
about the case that he had against the NSCA,
and he called me and he was like, I need you to help me.
And I was like, I can't.
I'm not a PR person, and if I do that,
I won't be able to ever be a journalist again
The allegations against him escalated, turned into toxic workplace and then sexual harassment. And I knew him very well, and he had more female executives than he had male, which is basically unheard of for a male-run company. CrossFit as a sport was the only company or sport to ever pay women the same prize money as men. And I'd asked Greg years before, like, why did you do that? And he was like, what do you mean, why did I do that? It's like the right thing to do. It's weird that that's not standard. So none of these stories matched with the man that I knew. And so as the allegations escalated, I felt a moral obligation to jump in and help him.
And I had done some negotiation training at Harvard Law School and I ran a business that was in the Middle East and in the United States.
I was comfortable in high stakes environments.
And I felt like I couldn't stand on the sidelines and watch him be destroyed for stuff he hadn't done.
So I kind of jumped in, got on the phone with the New York Times who was running a story the next day
and explained like I've written for the New York Times
and I am not a PR person,
but I knew they had the story wrong.
And so the reporter there, to her credit, she had to run the piece that she ran the next day, but then we were able to work together.
And to Greg's credit, I said to him,
I'm gonna do a deep investigation into this, who's behind it, what, you know, because this has now escalated to the point that it seems very much like a smear campaign. And he gave me access to everything I asked for. So all his credit cards, all his photographs, all that, you know, anything that I could use as verifiable information, you can forensically rip, you know, date, time, location. And I was able to prove the allegations were all false.
And that launched a sort of crisis management strategic communications firm that I have.
So I do the broken science stuff and then I help people who have been wrongly accused
in the media or businesses that are trying to launch products into markets that they
know are going to be tricky.
And it's a lot like being an investigative reporter.
I mean, I sort of go in and I try to figure out who was behind it, what's true, what's
not.
It's very hard to prove a negative, right?
It's really easy for somebody to throw an allegation at you.
It's very hard to prove that didn't happen.
And especially with things like sexual harassment, these things have been weaponized to the point
where I had a client who was accused of sexual harassment, had never met the woman, but it was a big corporate board
and somebody wanted him off the board. So they spread this rumor around and the fear that it
was going to be leaked to the media was enough to threaten his job. We all need to take a minute
and think a little bit about why somebody's saying something,
what their motivations are, how they might just be wrong by accident,
or it might be something much more nefarious.
And not just jump on these sort of like social bandwagons,
because my feeling is like that does the biggest disservice to the real victims.
There are women who are sexually harassed.
So I think it's really important to be thoughtful, ask good questions, and be conscious of the impact that you have when you decide to cancel or decry somebody, or in the medical sphere,
you decide to take a treatment. Do you really understand what that treatment is? We just had
this recently, and I think we're writing it up for Broken Science. So we have original pieces that we put out. And then we also have ones that are curated news
pieces, and that's a part of our newsletter. But there were headlines around the world saying that
this new cancer therapy called CAR-T was amazing. It had stopped tumor growth in the brain.
This would be remarkable. We really do need some sort of
cancer breakthrough. But when we looked at the study, all-cause mortality was no different.
So people weren't living longer. They were stopping the tumors from growing,
but they were dying at the same rate. That's fascinating. What does that mean? I mean,
I don't have an answer, but I would think maybe the treatment is killing them. They're not dying from the tumor. They're dying from something else.
And nobody in the media covered that. Everybody covered, like, cancer breakthrough. Look,
these tumors have stopped. Well, that's great. You stopped the tumor, but the goal is to live
longer. I keep thinking back to how your approach seems to be making sure you see the context of every scenario, making sure you factor in all these elements that often are missing. Yeah. And being challenging, both to yourself and to
whatever it is that you're considering. You know, asking good questions, really good questions come from really good listening.
And I think we're not doing enough listening.
And so whether that be medical research
or, you know, what you hear about somebody
in your neighborhood,
there is a real detriment
to not kind of having a life of learning
and wanting to be curious.
I mean, I love people who don't agree with me
because I always learn from them.
But I think that that's less and less common.
And I think sometimes even for me, I can put people off because I challenge them.
And it's not because it's a disrespectful thing.
It's the opposite.
I respect you enough to want to know how you formed that idea.
Because it's different than mine.
So please explain it to me. It doesn't mean I'm going to agree or disagree. It means I want to know how you formed that idea because it's different than mine. So please explain it to me.
It doesn't mean I'm going to agree or disagree. It means I want to know how you got there.
I just remember there's one study that you looked at. Black women in America disproportionately die from childbirth, even compared to many other industrialized countries.
Is that really a thing?
And why is that?
Well, so again, I think this is one of these things where we are looking at these sort of like statistical things and we're not peeling back to get to the root cause.
And so I did a lot of reporting on maternal mortality, you know, in my sort of prior life
when I was a journalist still.
And I became really interested in C-sections because the maternal mortality thing is usually downstream from a C-section. And C-sections are
really difficult to study. So there's this great researcher in Boston named Neel Shah who has really looked at this more critically than anybody else, in part because he was delivering babies in two hospitals in Boston, both Harvard medical teaching hospitals, where the populations were very similar,
and his C-section rate was much different in one hospital
than it was in the other.
Now he's the same doctor, right?
There's all these variables that are constant.
And so he became really obsessive
about trying to understand why so many more people were being given C-sections.
Right, and by him.
And he said to me, he's the same doctor. He was making the choice to do it.
Right. Like what is the environment? What's going on in this one hospital where I'm delivering
more by C-section than the other hospital? And because he's one guy and the patient populations
are very similar, the hospitals are funded similarly. He was able to
really sort of unpack some of this. And he said to me, it's really hard to study because you never
deliver a baby via C-section and think like, eh, we didn't really need to do that, right? The
hindsight bias is so strong that you're like, thank God the baby's okay, right? Every time.
And so with the maternal mortality crisis, we're looking, you know, they'll say like hemorrhaging, right? Or they'll
name things. The CDC codifies this stuff. They're not calling it a C-section death.
And I think the race component of this comes in because it's probably a socioeconomic thing more
than a race thing, if you were to really dial it down. If you have a C-section, you're not supposed to lift anything like over 10 pounds for weeks.
If you have an hourly paying job that you don't get maternity leave for and you have to go back to work two weeks, maybe a week after you have a baby, you are at way higher risk for some sort of complication.
If you're a mom and you're home with other kids who you're responsible for and you've had a C-section,
you are at high risk.
So I think those populations are not cared for properly
in the way that we, you know, sort of don't care for women
in the health system very well.
I mean, we've talked about this before, but, you know,
I really became very interested in women's health
because we know that women's brains are different, our hearts are different, our lungs are different.
Everything in your health is dictated by your endocrine system, which is hormonal, and women's
bodies are not studied.
So in 1977, the government made it illegal for women to be in clinical trials if you
were of childbearing age.
What's childbearing age?
The majority of your life, like 15 to 55 or something if you
want to be safe. I don't know. So we weren't studying women's bodies. It was illegal.
And then I think it was 1993, they said, okay, women can be involved in clinical trials again.
But it wasn't that long ago that med schools started using female cadavers.
And it was just this assumptive practice that our bodies are the same, except that
because of our hormonal cycle, we're complicated to study. And I guess maybe people don't want
complex problems, sort of hard to understand. But women, you know, the diseases we get are
different. The prevalence rates are different. The treatments affect us differently. There's a
huge amount of research that needs to be done in that realm.
And I think, you know, birth is a huge inflection point for a woman for all kinds of reasons.
But it's usually the first interaction that she has in a serious way with the medical system.
And so I think what we've done is we've over-medicalized birth.
It used to be you'd have a midwife, right?
Unless you were at high risk, and then you'd go to a hospital. And, you know, I have interviewed a lot of nurses who say, like, I used to sit by the woman's bedside and tell her that, like, everything was normal, and, you know, this is how it goes, and count for her and do all kinds of things. And now I'm sitting with a bay of 20 monitors watching heart rates. And you have a heart rate monitor in the room with you, and it goes up, and the mom's, you know, it's monitoring the baby. The mom's heart rate is going to go through the roof. She's worried about the baby.
There was actually a really cool clinic that was a pilot program. I think it was out of Duke
where they realized even for follow-up appointments, moms will skip their appointments,
but they will not skip the babies.
So if you can book mom and baby at the same time,
you have the pediatrician and the OB in the same office,
and they both have their appointment,
the mom shows up.
Simple fix.
And that's the deal.
It's like the maternal mortality stuff is happening because women aren't going to the hospital
or they're going to a hospital that's not taking them seriously
and they're not making their follow-up appointments
where you might be able to monitor that there's a real problem.
I mean, in lots of countries, you give birth and somebody from the medical establishment,
whether it be a doula or a midwife or a doctor, comes to visit you at your house.
You get a lot of data when you visit somebody in their
house, not just about the patient, but what's the environment? Is she being cared for? Is she safe?
Is she healing properly? We don't do that. And I do think that from my lens, that feels again,
like a sort of frequentist approach, because what we're doing is we're checking a box. Okay.
I mean, it's like people who are the doctors
that help you get pregnant, the IVF doctors and whatnot,
their scores of success are based on
whether you get pregnant or not,
not if you have a healthy baby.
This is a misalignment, right?
So lots of women who go through IVF
try to pick the best doctor they can find.
Well, the doctor will tell you what their rate is
of getting people pregnant,
not the miscarriage rate.
That's really important.
By the way, what happened to the doctor who had the two hospitals that were very similar
but had these different rates of c-section?
What was happening there?
Well, so his premise is that it has to do with the mom being involved.
And so he actually created...
I haven't talked to him in years,
so I'm sure he's moved this along, but he was developing a dashboard
that would basically allow the mom to know all of these different things
and the medical team would have to go in and talk to her about certain things.
And I think most of his premise was this idea of if the mom is really involved
and there's open communication, the C-section rate will plummet.
And that in one hospital where there was all this sort of like technological innovation, it was leading to more
C-sections. And that the mom wasn't at the table. That basically the doctor would come in and say
like, hey, you know what? You've been doing this for long enough. We got to take you into the OR.
There's a fallacy about people thinking that women are scheduling C-sections
because they want to. I didn't find that in the data at all. Most of it is this sort of real time,
things aren't going quite right. We know we have this other way of delivering. Why not just make
that a preferred option when things don't seem to go right? And I mean, I think the maternal
mortality is a big risk. There also was a lot of data I found that said that women are far more likely to have to have
a hysterectomy later in life if they've had a C-section. It also prohibits your ability to
have lots of kids, right? So it's the only time a surgeon will cut on the same scar over and over
again. Surgeons are taught never to do that. And so it's very damaging. You can find out,
I actually did a story, I think it was for Cosmo
or Boston Magazine, about how you can find out what your hospital's C-section rate is. So the highest
predictor of whether you're going to have a C-section or not is what hospital you deliver in.
It's not you, it's not your doctor, it's the hospital.
You know, it's funny because you know what that reminds me of? I remember discovering years ago,
this was again, you know, probably decades ago, that there was a study that was done on which type of psychotherapy works best. And they were trying to isolate, you know, which one. But it turned out that actually the method of psychotherapy
doesn't matter. The only variable that really kind of jumps out is the identity of the psychotherapist,
irrespective of which method they use. Some people are successful and some people aren't.
Wait, tell me more about that. I'm curious.
Well, I don't remember a ton more.
I remember the outcome was that there's something about the person themselves
which made them successful in dealing with people.
Presumably, they had some kind of very good, let's call it bedside manner,
or they were real listeners, or they somehow knew how to cue in on body language. I don't know. It was the person
that mattered. So if you go to a good one, they could use any method they wanted to, and they
would be successful. And if you don't go to a good one, it doesn't matter what method they've got,
you're not, it's not going to work very well. I think that listening is an incredibly powerful
tool for healing. I think that that study also spoke to the types of things I would have believed back then too.
I think that's why I remember it.
But it was obviously very interesting.
We'd have to dig it up.
I can't for the life of me remember anything about this.
I want to look at that.
I feel like there's something really that stands out in terms of this notion of research and how we think we identify these things, like this protocol works
better than this other one. And it turns out it's no, it's the practitioner. And I think that also
goes to, you know, sort of, you talk about the doctor morale problem that we're seeing, which
really, I hope the Broken Science Medical Society becomes a huge network for doctors to feel empowered again.
I think there's this also rejection of wisdom in our culture. Medicine is an art and a science.
It's both. The more experience you have, you have some intuition about something. Somebody comes in
presenting certain symptoms. And if you entered it into an AI, it might not put it all together.
But you remember this person also mentioned last time you saw them that they had headaches, right? Or that they had some other symptom that becomes very important.
Again, it's taking that prior knowledge and applying it. And I think that's for the individual,
you know, the patient you're treating. But I also think it has to do with just experience
treating patients and spending a lot of time listening to patients that we're not valuing
because we're not allowing doctors to spend time with patients. You know, the average doctor's
appointment, the doctor's spending like 12 minutes with a patient. It's not enough time.
And there is, there's some interesting work that looks at communication breakdown between doctors
and patients that I've spent a lot of time thinking about that basically indicates that,
you know, when you're scared or you're nervous, you are most likely to put forward your least significant symptoms. And the doctor is only
listening to your first few symptoms because they're busy and they're thinking about how to
code whatever it is that you just mentioned. So there is a true disconnect in a sort of
communication breakdown sense where you're listing your least significant symptoms and then I'm not hearing the rest. So there's little things like that that I think we're addressing at Broken Science. We have a class that's coming out this summer that's for patients to navigate their own
healthcare. And this is one of these sort of tricks where I'm, you know, as a journalist
thinking like, what questions can you ask that can help you be a driver, just like with the maternal mortality. You have
to have a seat at the table. These are the most important decisions you're ever going to make in
your life. You can't be passive. And yes, you're vulnerable. And yes, you want to get along with
people. But there's a respectful way of doing this. You don't have to be combative. And I
definitely think having an advocate or somebody go with you to the appointments if you're reluctant is hugely helpful. So I really appreciate what you're doing here
because you're not necessarily saying that you can solve this huge giant problem that you see.
You're pointing out some of the problems but you're empowering people to help themselves in this, let's call it difficult or
fraught environment. So how do people access Broken Science and some of these projects that
you have? So Brokenscience.org is the mothership and we just redesigned the site. So the back end
is actually a sort of AI operating system. So it'll go in and it'll learn what your interests
are and it'll start recommending things to you. We also realized a lot of the material was intimidating to people, so we've created summaries and versions of most of the really dense material at different grade levels, so that you can really come in, you know, knowing nothing, or you can go and read the original work. And all of that's free. And then we're starting these cohorts that are
called societies. So the medical society is the first one we're launching. And that's going to be
a networking opportunity. There's a huge amount of sort of like social media type capability on
the back end of the site so that people can follow each other. They can share research. They can
invite each other to events. We're doing this thing called Journal Club, which is the taking apart of the medical journal studies that will be part of the resource library for the doctors
and patients. I mean, I'm not a doctor. I'm interested in all this stuff. So I don't want
to preclude that, but there will be individual groups that will be more specifically focused
on profession. And I mean, I think the approach is very similar to what Greg did when he started
CrossFit. He knew he wasn't going to be able to save everybody, but he knew anybody who wanted to work really hard and do this, you know, his methodology, would benefit from it. And so it was a real grassroots way of building. And Harvard Business School called it the fastest growing company in world history. And I think we're going to replicate that. So we have our personal health society
that's launching at the end of the summer
and then an education society.
And those cohorts will feed into each other.
So I imagine our personal health society,
there'll be a lot of people
who do sort of self-experimentation, right?
So I'm going to do the keto diet for six months.
Well, we're going to have doctors in the doctor cohort
who can go study them.
And then hopefully publish that work
in a journal that will launch in a year or so.
So I think there's a lot of exciting stuff.
Our YouTube channel has got a lot...I'm trying my best to do explainer videos where I really
break down some of these concepts.
So I have one on induction, I have one on statistical significance, and I'm happy to
do more of those.
I love it when people say, like, you know, I've been doing this forever, but I never, I can't quite remember, I don't quite understand, I can't hold this. And then I can try and help think of ways that we can, you know, animate it or whatever. Our Instagram is very active and popular, and that's just at Broken Science Initiative. This new
show that I'm doing that's looking at people who have changed a paradigm in art or science
is called Emily Unleashed,
and there's an Instagram page for that.
And when we launch, it'll be, I think, on YouTube.
And that really is supposed to just inspire critical thinking.
So it's talking to people about how they stood up
to the status quo,
and either it made them tons of money
or it got them canceled,
but they felt they couldn't not do it.
And I want to inspire some more of that sort of American spirit rebellion. You know, don't defer to
authority, not on matters that are really important and learn to listen and be respectful to each
other. The ultimate certainty is the theme here, right? There is no ultimate certainty. I am not
a hundred percent confident in anything that I just said to you. I'm pretty confident, but somebody could come in here and know more about something,
and I'd be open to listening to them.
I want people to be engaged in that kind of debate and thought process.
Well, Emily Kaplan, it's such a pleasure to have had you on.
Thank you so much.
I really enjoyed this.
Thank you all for joining Emily Kaplan and me on this episode of American Thought Leaders.
I'm your host, Jan Jekielek.