The Checkup with Doctor Mike - How To Talk To Your Anti-vax Uncle | Dr. Steven Novella
Episode Date: March 4, 2026

I'll teach you how to become the media's go-to expert in your field. Enroll in The Professional's Media Academy now: https://www.professionalsmediaacademy.com/

Buy Dr. Steven Novella's books The Skeptics Guide To The Universe and The Skeptics Guide To The Future here: https://www.hachettebookgroup.com/contributor/dr-steven-novella/

Listen to Dr. Novella's podcast here: https://open.spotify.com/show/4RVc0stLre1yB5Ac3hiAPl

00:00 Intro
01:27 What Is A Skeptic?
14:00 RFK Jr. & Vaccines
28:16 AI Skepticism
37:36 Cherrypicking Experts
46:29 Debating Flat-Earthers
54:20 My Worst Podcast Guest
57:49 Jubilee's Surrounded
1:09:30 Using Anti-vaxxers' Tricks
1:20:50 Responding With Anger
1:24:13 Institutional Failures
1:31:06 Social Media / Insurance
1:37:40 Science Based Medicine
1:45:50 Acupuncture & KT Tape
2:07:00 Alternative/Homeopathic Medicine
2:23:30 Funding Research / Fraud

Help us continue the fight against medical misinformation and change the world through charity by becoming a Doctor Mike Resident on Patreon, where every month I donate 100% of the proceeds to the charity, organization, or cause of your choice! Residents get access to bonus content and many other perks for just $10 a month. Become a Resident today: https://www.patreon.com/doctormike

Let's connect:
IG: https://go.doctormikemedia.com/instagram/DMinstagram
Twitter: https://go.doctormikemedia.com/twitter/DMTwitter
FB: https://go.doctormikemedia.com/facebook/DMFacebook
TikTok: https://go.doctormikemedia.com/tiktok/DMTikTok
Reddit: https://go.doctormikemedia.com/reddit/DMReddit

Contact Email: DoctorMikeMedia@Gmail.com

Executive Producer: Doctor Mike
Production Director and Editor: Dan Owens
Managing Editor and Producer: Sam Bowers
Editor and Designer: Caroline Weigum
Editor: Juan Carlos Zuniga

* Select photos/videos provided by Getty Images *

** The information in this video is not intended nor implied to be a substitute for professional medical advice, diagnosis or treatment. All content, including text, graphics, images, and information, contained in this video is for general information purposes only and does not replace a consultation with your own doctor/health professional **
Transcript
So with anti-vaxxers, it's not a knowledge deficit problem.
The problem is that they're full of active misinformation
and a very compelling narrative.
They're trying to kill us with the vaccine.
This system is not for us.
It's not built for people like me.
You can't just give them correct information.
You have to provide for them an alternate narrative for understanding the world.
We've all felt that awkward tension around the dinner table
when your weird uncle starts spouting off about a
conspiracy theory they dug out of the dark internet rabbit hole where they find themselves.
How do you see eye to eye or have an open conversation with a person whose worldview
seems to be the polar opposite of yours? Dr. Steven Novella has the answer. He's a retired
clinical neurologist from Yale who specializes in skepticism. Seriously, he's the brain behind
the podcast, book series, and the brand, The Skeptics Guide to the Universe, where he analyzes
conspiracy theories, misinformed trends, and the fascinating reasons people believe in them.
Given that it feels like we're drowning in misinformation, I want to ask Dr. Novella,
how can we keep our heads above water? So we discussed anti-vaccine activists, flat earthers,
acupuncture, homeopathic medicine, alternative cancer treatments, and other controversies
that have skyrocketed in popularity over the last decade. So without further ado,
please join me in welcoming Dr. Steven Novella to the Checkup podcast. We need
more skeptics in this world, agree or disagree? I completely agree with that. That's like my primary
mission is to make more skeptics in this world. That's interesting. So in creating more skeptics,
what benefit does society get? So first I have to define skepticism a little bit because the term
has been co-opted and it's used in different ways. All it means is thinking about how you come
to conclusions about what to believe in and what's probably true, what isn't true, you know,
what philosophers call metacognition, right? You're thinking about your thinking process.
You have some way of evaluating evidence. You don't just believe things or go with the flow.
And this is a lifelong journey.
Is it the opposite of being gullible? It's the opposite of being gullible. Absolutely. That's the best
way to put it. So people think of it as cynical or denying the mainstream, whatever that
happens to be. But no, it's just the opposite of being gullible. You kick the tires,
you know, you look under the hood. You don't come to a conclusion unless you think it's justified
by the evidence, right? You apportion your belief to logic and evidence. That's being a skeptic.
And are most people born a skeptic? Is it something they develop in childhood? Is that a muscle that can be
worked on through adulthood? So generally speaking, people are born curious, very interested in
things, but analytical thinking is a learned skill. So no, you're not born a skeptic. That's like
saying, are you born a scientist? That's something you have to learn. It is a muscle. You have to
exercise. And again, you never get there. Like, there's no point where you say, I'm now a skeptic, so I only believe
true things. Like, it's an endless process of refining and refining and more and more nuance and
just trying to get better at doing it. And it feels, I'm sure, to most people listening and
watching, that being a skeptic is fatiguing. In this world where you're bombarded with information
all day long, sometimes it's easier to just go with the flow. So how do we
manage that? Yeah, so, first of all, being a skeptic is extremely empowering. You know, having
done this now for 30-plus years, interacted with many people, we continue to get
emails from listeners who say they feel so freed: oh, I don't have to believe this thing
anymore. I don't have to carry water for my tribe, my identity, whatever. I could now just think
what I want to think and believe what I want to believe, whatever is supported by logic and evidence.
So that, I think, overwhelms anything else. It really, really
is incredibly freeing to just say, I'm just going to call it like I see it and not feel like I have
to defend, you know, some ideology or whatever. But it absolutely can be exhausting as well. And you have
to pick your battles. You have to decide what's important. You know, we talk about like, oh, well,
let's do a deep dive on this topic. You can't do a deep dive on every possible thing in the world.
There are not enough hours in the day. So, you know, generally speaking, it's helpful if you have
good resources to go to.
There's a lot of people who will do
the heavy lifting for you.
And once you establish
that they're a reliable resource,
you can say,
okay, I can trust what this guy says.
And then I can put that through a basic filter,
but I don't have to be the person
checking every reference and doing it.
Because somebody else, you know,
we basically will team it.
You know, we'll divide and conquer, as it were.
You have to do that.
Like, nobody could possibly recheck
every single claim, even as a physician.
Like, if we're trusting other experts
to tell us what the bottom line is,
and we can't re-review
every single one of the hundreds of thousands of studies that come out. It's impossible. Well, that's the
goal of creating these professional organizations. It's not just to pay monthly dues to them all year long;
it's so that they provide us the summaries of these research articles, the systematic reviews, to guide us
to making better decisions for our patients. And I feel those who are hyper-skeptical, perhaps
cynical, of the medical industry dislike that fact, the fact that we're taking that shortcut by
trusting these larger agencies.
What would you say to those cynics?
Well, there's always like a trust-but-verify approach.
It's like, yes, we do.
Again, you can't go and do the research yourself on everything.
You have to trust that somebody who published research and it went through an editorial
process and peer review.
You can't replicate all of that for every single study.
So there's a couple of ways we deal with it.
One is, and this is both the strength and a weakness of modern medicine, that you are
kind of forced to specialize.
And because, you know, you can really only keep up with the literature,
like really, really, in a very narrow area.
And then you sort of moderately keep up with it in a broader area.
And then you have sort of a working knowledge of a broader area.
And you sort of design your practice based upon that.
I know we need general practitioners as well and family doctors, all that.
And that's really hard, in my opinion, because you have to sort of be competent at everything.
You're not going to be a master at anything.
You're not going to be a world expert in everything.
But you could certainly be competent at everything.
That's a lot of freaking work.
You have to rely on the people who are.
are hyper-specializing, right, who are specialists in this super narrow area, who can say, I've
read every single one of the 200 studies on this topic. You can't do that for the thousands of
topics that there are. So it's a pyramid and that's by design and it has to function that way.
Yeah, I just, as a really concrete example of this, I just had a patient that I sent out to a
gastroenterologist because I was worried that they're having an upper GI issue, which I thought
that they would benefit from getting an endoscopy for it, which we don't do as family medicine
doctors. And one of the tests that I would expect them to do is to check for H. pylori, which is a
bacterium that can colonize the stomach, can be problematic, and can actually lead to cancers, et cetera.
So I wanted them to test for it. And usually when I'm sending a patient for an endoscopy,
they do that with the endoscopy during the process. They take a biopsy, and they're able to
send that sample out. But for some reason, the doctor made the decision to order
an H. pylori breath test, which is something that I could do, but usually I wouldn't do it if they're
already getting an endoscopy, because the gold standard to me is the endoscopy. So, family medicine
doctor that needs to be competent, needs to learn from specialists, I send a message through the
electronic health record asking a question. Why did you choose to do it this way so that for the future,
I can better manage my patient's expectations and treatments, but I have to trust the expert on that
with that person being the sub-sub-specialist.
And that hierarchy of evidence and decision-making
is what keeps family medicine practitioners in practice.
Yeah, it's funny, because I'm at the other end
of the expert spectrum.
You're a general practitioner.
I was a sub-sub-specialist, right?
Neurologist and then a headache specialist.
But when we do a consult, like we're at the receiving end of a consult,
we're supposed to be educating the person who referred the patient to us.
We explain...
That's fading, though.
Well, I spent my career in an academic institution, and that absolutely
colors my perspective, certainly within the academic institution, you know, where I work.
That was the standard, and that's what I was taught. That's what I taught other people. That's
what we did. You don't just say, I'm doing X, Y, Z. You say, this is why I'm doing that.
Right. And this is what I'm not doing and why I'm not doing that, which is a critical thing that
most people do forget to do, because they're going to ask you that, why didn't you do this?
Right.
So you might as well get ahead of the
curve and say, this test was not indicated for this reason, whatever, prior to doing whatever.
You just give the reasons.
Give the references.
The good thing about EMRs now, electronic medical records, is you can have macros where you
just say, you don't have to reinvent the wheel every time.
You just say, all right, I got a standard migraine referral.
Boom, here is the standard thing that I want general practitioners to know so that they don't
send us people who just need to stop drinking caffeine, right?
Because otherwise you get these referrals that
really should have been dealt with at an earlier stage.
And we see teaching the community as part of our mandate.
Yeah, I think what led to this disconnect happening more often
is the fact that a lot of these electronic health records,
it's getting better now, but they don't talk with one another.
And as a result, when I send a patient to see a specialist
and they can't see that one because their insurance didn't allow them,
they see someone else, and that person doesn't use my EHR.
And it used to be, and the reason I'm even aware of this is because my father went to
medical school 14 years before me when we came to the United States.
So I watched his practice get built.
And specialists would send a fax or a paper that said, thank you for your referral, here's
what I did, here's the reasons why I did it.
Now I just lose that patient, and I don't know what happened.
What I have to do is bring the patient back, which is unfortunate for them that they have to
spend the time, come in, tell me what happened.
I have to call the office, verify, make sure I understand correctly, in order to make sure they're
making the best decision. Because I want to practice family medicine to the top of its specialty.
Meaning I want to quarterback their care in a way where they got this answer from a specialist.
But now let's see if that works within the entire holistic view of that person's life.
They don't send you a referral now? Because, technically, you can't even bill
until you've done that. So I think maybe their EHR does something
that creates a checkbox that auto-sends, but then if it bounces back, they say,
oh, we tried to send it. I don't know. When it's within our healthcare system, it's obviously
easy and it happens. Right, right, right. But again, from the specialist point of view, it's like,
in order to bill as a specialist, you have to have gotten a referral and you have to communicate
back to the referrer. If you don't close that loop, the billing is fraudulent. You have not
justified the billing. That's unfortunate. I want to look into
that more, because it happens just so often that I lose patients to follow-up.
We're kind of getting off topic, but interesting going in this direction, when we're talking
about skepticism, and we're saying that it's freeing in the sense where you get to take charge
and not have to fall in line, essentially, with a thought process. That's good on one end,
but to play devil's advocate, isn't it also freeing to believe that there's a higher power,
that there's someone above you making these decisions
where some people say religion, to them,
eases the burden of life, of having so much information come at them.
Aren't we losing a little bit of that
when we have so much skepticism in our lives?
So, I mean, the research shows, right,
that there are some social benefits
to being part of a community.
And so if you ask the question,
are people who are part of a religious community
happier, do they live longer, whatever?
the answer is usually yes.
But my interpretation of literature is that that's because they're part of a community.
Like that's the key element, not that the community has some magical belief in religion
or God or whatever, whatever your belief system is.
So the other component, though, of skepticism, I think this will address the question is,
and this is like, again, if you have to boil it down to one word, is humility.
You can't be a skeptic without being massively humble, intellectually.
You have to be intellectually humble.
Why?
Well, because most of the time when you're dealing with a pseudoscientist, a crank or charlatan,
or somebody just promoting nonsense, man, I tell you, I've been doing this for 30 years.
The one feature they all have in common is an utter lack of humility.
And humility makes you ask the question, yeah, but is this really true?
How do I really know this?
Am I believing this because I want to believe it?
because it makes me feel better about myself or about the world
or because it's really true,
because it's supported objectively by science and logic.
And that also applies to religion, right?
You're always trying to get outside of yourself,
outside of your culture, if you can,
outside of your belief system,
outside of your tribe, everything,
and say, yeah, but is it really, really true?
And that's the beginning of the process, right?
So that's part of the freeing aspect, too,
because you're getting out of all of that,
and some people don't even realize how oppressive it is.
It's like, oh, I didn't realize how oppressed I was, having to always defend my party or whatever it is.
Now I could just, you know, think for myself and, and again, the risk is always, especially with newly minted skeptics, like people who are just starting to like really practice.
Like, remember to be humble because the real risk is you start to think you know stuff, you know?
And then, like, once you think the journey is over, it ends.
So the journey has to never be over.
I can hear a voice in the back of my head, perhaps because I've done a few debates as of late
with people who are anti-vaccine or perhaps believe in some pseudoscience.
They would say, oh, we have this scientist here saying that we don't have humility.
Doctors don't have humility.
They have God complexes.
Are doctors that are perhaps scientific in nature and are not pseudoscientific,
are they also lacking humility?
No, they have the humility. I'll
tell you, the same thing is true of a good scientist. Again, I'm coming from an academic
culture, so I'm biased by that. It's harder for me to speak to like your average person
in the field in private practice. But certainly within academia, humility is critical.
And I'll tell you, you know, you can't fake it to other academics. That's, you know what I mean?
Sure. You know, with experts, you can't fake expertise to an actual expert, because they'll see you
coming a mile away. And when somebody gets, you know, high on their own stuff, like they
get too full of themselves, everyone else knows it.
You know what I mean? The other people in their very small community, you can usually sniff that out. It's
like if you're not being humble. And what being humble just means is you, you realize the burden is on
you to prove everything you're claiming. And people are going to come at you and say, yeah, but did you
consider this? Did you look at what about this other study that this other guy published? Did you
account for that? And if you didn't do that yourself, if you weren't your own most vehement skeptic,
other people will be it for you. And then they'll be critical of you
for not being your own
worst skeptic. That is
the standard within science, academia,
and skepticism. And if you're
deviating from that, man, you can fool the
public, whatever, you can't fool other experts,
man, they'll see it coming. Yeah. Do you feel
that this lack of humility
is tied to some sort of narcissism
that's exhibited by individuals?
Absolutely. I mean, look at, let's,
the poster child for this phenomenon right now
within medical circles is RFK Jr.
Right? Talk about a lack of humility.
Right? That explains so
much. And I do think he probably is a narcissist. I hate to diagnose people remotely, but just
casually speaking. You see tendencies. I see tendencies of narcissism. And I've been following this guy for
30 years, not just the last year since he became HHS secretary, because this guy's been a menace for a long
time. He is constantly substituting his own opinion for the opinion of the expert community.
Right? In science, what's rule number one of science communication? Expert opinion.
Expert opinion.
Yeah.
Well, but more than this, you don't substitute your own opinion for that of true experts.
Oh, well, even more so.
Yeah, yeah.
You don't shoot from the hip.
It's below expert opinion.
Yeah, right.
Yeah, it's even more.
It's right, right.
And you have to understand where the expert opinion is coming from.
If this is a transparent process of reviewing evidence, et cetera, and saying, this is what the evidence shows, you know, and we all agree that this is what the evidence shows, that's actually pretty solid, right?
if there's a good consensus like that.
If you say, yeah, I reject that,
I want to believe this other thing
because I think I know better
than all those people
who have dedicated their lives studying this.
That takes a massive amount of arrogance.
And that's where you go astray, right?
When you don't, again, you don't ask the question.
You know, and one thing
I always try to do as a science communicator
is when I, you know,
first of all I don't speak about a topic
until I feel I've wrapped my head around it
and I've checked with experts, right?
In some way, either in person or published opinion
or if that's easily available or whatever.
At some point, I make sure that what I'm thinking is aligning with the experts.
And if it isn't, I better find out why.
And I assume, this is the other thing.
See, when an arrogant person like RFK Jr., when his opinion is different from expert opinion,
he concludes, I'm right and they're part of a conspiracy.
Whereas I conclude, I'm missing something.
Right?
The humility comes into, I must be missing something.
I don't understand this.
And when you do that, you find out, I am missing something.
Always, you are missing something because you're not an expert.
You know, expertise is a thing for a reason, because they know things.
Another aspect of that: you know, at some point I realized, like, oh, all this stuff outside of my specialty, right?
Outside of medicine, all the things I think I know, like about astrophysics or whatever, is really just a metaphor that the experts are telling us to help us wrap our non-expert
brains around it. But unless you know the 12-dimensional, whatever, you know, whatever the math and all
that stuff, unless you are speaking that language, you don't know. You don't know what a black hole is.
Sure. You have a metaphor that somebody told you. And you think you're going to outthink the
people who came up with the metaphor, when you can't even understand what's really going on. That's like
super arrogance when you think like, you're going to overturn science because you're just such a
freaking genius. But then that arrogance, or maybe confidence as some people will view it, is so seductive
to the general public. Why is that?
Well, that's another good question.
There's good neuroscience behind that as well.
We're basically evolved to surrender our critical thinking to charismatic leaders within our own tribe.
Right?
And there's an fMRI study, now probably five to ten years ago, whatever. A functional MRI scan looks at the brain functioning in real time, right?
And they did fMRI people's brains while they were listening to a public speaker from within their belief system and from outside their belief system.
And they were different.
When it was from within their belief system, the frontal lobe circuitry that engages in reality testing was not functioning.
But it was when they were listening to somebody outside of their beliefs.
And so literally we surrender our critical thinking.
We are literally mesmerized by a charismatic speaker.
But that's just part of the overall phenomenon where, you know, again, we tend to believe things within our own tribe.
And we don't always engage our critical thinking.
And there's a lot of emotional reasons why we want to do that.
What is the tribe that RFK represents?
Because to me, it's not even clear.
I agree with you.
It's not even clear.
But, you know, so initially...
Because he was a Democrat.
Now he's part of a...
That's superficial.
Yeah.
I think ultimately that's superficial for him.
So, you know, 30 years ago, the answer was he's an environmentalist, right?
And then, you know, he came to his anti-vax position, from my perspective, through the
environmentalist path,
because the anti-vaccine subculture, you know, is a multi-headed monster, right?
There are, you know, liberal anti-vaxxers, conservative anti-vaxxers, and there are
environmentalist anti-vaxxers.
They all have kind of a different path to that, different take on it, and then there are the
parents.
There's different reasons why you might come to that conclusion.
And I think, you know, his job was suing corporations for poisoning the public, right?
And so this was just right along with that narrative.
Oh, the corporations are poisoning the public with mercury.
He'd already been an anti-mercury activist for environmental reasons,
so it sort of fit with his narrative.
And his narrative just continued to evolve over the decades
to become more and more conspiratorial, anti-government, anti-corporate.
Everything's a poison.
And, you know, now he thinks that he's a scientist,
you know, that he can interpret the science.
Meanwhile, he's so incompetent at it.
He doesn't even know the difference between risk
and hazard, for example, or absolute and relative risk, like basic things that you should know.
Tell the audience.
Yeah, so risk and hazard is, this is a classic thing that comes up with the public misunderstanding,
this kind of stuff.
So I didn't come up with this, but I love it, so I borrow it because it really shines
a light on it very well.
Hazard is like a shark, right?
A shark is clearly hazardous.
They can kill you, like a great white shark.
It's a very hazardous being.
I would say so.
But if you're at an aquarium and the shark
is in a tank and you're not in the tank, the risk is quite low. So big hazard, very tiny risk.
You get in the tank with the shark, that's a very high risk. Right. So you could say, well,
this chemical is hazardous. Therefore, the public should not be exposed to it. It's like, yeah,
but at the doses that it's getting in the environment, in the food or whatever, the risk is insignificant.
There's no measurable risk. So do we care more about the hazard or do we care more about the risk?
I think we should care about risk
because that tells you what the actual probability is.
The practicality of harm, yeah.
And the United States, you know,
the regulatory agencies regulate things based upon risk,
except for California, which regulates by hazard.
The European Union also regulates by hazard.
RFK loves the hazard approach because everything's a hazard, right?
But then when you have alarm bells on all the time,
you inevitably start ignoring the alarm bells,
which we see with Prop 65 in California.
What's interesting to me is the idea of what RFK does so well.
And this was a shift that happened in the early 2000s.
I studied this superficially, but I'm curious your take on it.
He initially worked on this World Mercury Project.
Yeah.
And then it shifted to being a children's issue, with Children's Health Defense.
Why did that shift?
Because it's the same organization.
It was just renamed.
So how did it go from, I've even seen earlier interviews of him saying, no, I'm an anti-mercury advocate
and vaccines are amazing.
They saved hundreds of millions of lives.
That's his statements on camera.
Where was that shift?
What happened that drove him here?
I don't know specifically because that I think happened behind the scenes.
So we're trying to infer this from his shift in public statements.
I think it's that pathway where once thimerosal, which is the mercury-containing preservative
that was more commonly used in vaccines, mainly just multi-dose vaccines.
But in 2002, they were removed from the standard vaccine schedule in the United States.
But at the time, the 1990s was sort of the big anti-thimerosal push within the anti-vax movement.
And that's when he, and they were saying this is mercury toxicity.
Vaccine poisoning is mercury toxicity.
I think that...
And his blaming of the autism spike was, he was saying, due to mercury.
Yes.
But now that it's gone.
And the mercury is in vaccines.
So he just shifted from mercury in general to mercury in vaccines.
And then now there's no mercury
in vaccines.
That's the thing.
This is one of the great stories.
Again, the advantage you have of doing this for so long is that in 2000 we were
talking with the anti-vaxxers, including people like Kirby, who wrote a book about,
you know, mercury in vaccines.
And they all were predicting that autism rates were going to crater.
They were going to plummet once we took thimerosal out of vaccines.
Because, of course, the thimerosal is causing the autism.
If you remove the cause, the effect should go away.
And of course, the scientists were saying, no, it's not.
It's not what's causing it.
It's not going to, and there wasn't a blip.
Of course.
Now it's 24 years later.
But they, man, they dragged that out for as long as they could.
Oh, the mercury from coal-fired plants is replacing the mercury from vaccines.
These are ridiculous arguments from a public health perspective.
So that basically proved, from their own rhetoric, that mercury in vaccines was not causing autism.
I wasn't even in high school when this decision was made, but I'm curious if you think the decision to remove thimerosal from vaccines, as a way to just be safe and placate this group of people, was the right choice by our agencies.
I think it was, and we had this debate at the time: is this playing into it, or in the long run, is this a good
thing? I think it was a good thing in the long run. First of all, it wasn't absolutely necessary,
because, you know, we were shifting a lot to single-dose vaccines anyway. And the only
argument you could make against it, from a practical point of view, is that,
especially in other countries, they were relying a lot on the multi-dose vaccines, which are cheaper
per dose, et cetera. And it was kind of an elitist thing to do, to say, we're going to
get rid of these cheaper vaccines to help the worried well feel better. But, you know,
vaccines aren't a product, right? They're a program. And they
only work if you get public buy-in.
So first of all, there is just this stated goal, as you probably know, to absolutely minimize
human exposure to mercury.
That's like the EPA, the FDA.
Everyone agrees, yes, there's no safe exposure limit for mercury. That's not literally true,
but, you know, it's never going to be low enough.
So let's just make it as low as possible.
So their justification was out of an abundance of caution.
That was the literal term they used: out of an abundance
of caution. We'll just take it out of the routine vaccine schedule. And then, they thought, it would
also sort of put the debate
to bed. That's where it was naive. We knew it wouldn't. But it's like, yeah, but do it. And then we'll have great ammunition
for public, you know, communication on this topic, which we do. You know, again, it's a great
thing to refer to. But it doesn't sway the opinions of people who are on that side.
So remember, there are three groups of people, though, we're talking about. Four, really. So,
obviously, there's people like experts.
You usually don't have to convince other experts.
There are people who don't know, don't care.
Then there's sort of the people on the fence,
maybe the hesitant, or they heard concerning things,
but they don't really have a lot of information.
Then there are the dedicated anti-vaxxers.
We're not really trying to convert the dedicated anti-vaxxers
because it does nothing, you know,
even in your debate, that woman who said,
nothing you say can convince me.
That's like the classic phrase, really?
Nothing.
You're just admitting to being immune to evidence and logic.
Like, okay, whatever.
I mean, not that no one ever comes over from that group,
but that's banging your head against the wall.
We're always talking to that middle group, right?
The people who are swayable and maybe just have some misinformation
before they go down the drain into being dedicated anti-vaxxers, you know.
So like the vaccine hesitant.
The vaccine hesitant, we can call them that.
Those are really the prime target.
And also just the general public to make them more resistant to anti-vaccine, you know,
misinformation.
And they're absolutely persuadable.
Yeah.
Pulling back to the original question, if we have more skeptics that are thinking in this logical
way, that are not just following a tribalistic form of thinking, how does the world benefit?
So it's like the old saying: in order for a forest to be green,
every tree has to be green, right?
You can't have a green forest of red trees.
We live in a democracy, ostensibly, right?
For now, anyway.
TBD.
TBD.
And to whatever extent, you know,
our societies are democratic. It's also increasingly driven by science and technology. If we want to
have a rational society, people need to be rational. And I think we're seeing that. I mean,
this is certainly an eye-opener for me the last 10 years, let's say, in the United States.
Like, yeah, we perhaps a little naively thought, oh, the system is resilient. And, you know,
the institutions will save us. And I'm not saying that institutions aren't a
critical component of society. But they are way more fragile than I naively thought 10 years ago.
And it really, like, no institution is going to save you from everybody being irrational,
right? Because the institutions are made of people. Like, if every doctor sucked, it wouldn't
matter how good the institution of medicine was; it would suck because all the doctors suck. It's the
same thing. If people are en masse irrational, then society's going to be irrational. And that's kind of
what we're living through right now. So we need
a critical mass of people who at least have some way of dealing with misinformation,
who at least have some filter in place, some level of humility, some trust in expertise,
the idea of expertise, some way of evaluating information, of holding people accountable.
If we don't have that to a minimal critical amount, democracies can't function and, you know,
medicine can't function and everything, you know, this increasingly complicated civilization
that we're building for ourselves can't function
until we just turn it all over to AI, right?
But until then, until the AI overlords take over everything,
we're in control, you know?
The funny thing is if you ask AI,
should you get a vaccine,
is it overall more beneficial than dangerous?
It says, of course you should.
So I don't know why people aren't even listening to AI.
Yeah, yeah.
They're kind of selective.
The people who are anti-vaccine,
even in the celebrity sphere,
will oftentimes chat with AI and believe in AI.
Some of them have even started AI companies.
So it's interesting how they choose to believe it in one area, but not in another.
Yeah, but I've been using AI a lot, mainly to evaluate it, right?
I do use it for some personal stuff.
But yes, as a science communicator, I want to be able to say, I've used it.
And this is my experience with it.
It's a mixed bag.
It's an absolutely mixed bag.
So specifically, it all depends on how you prompt it, man, how you ask the question.
If you ask it, should I take a vaccine?
You'll get one answer.
If you say, should I worry about vaccines, you'll get a very different answer.
So if you're prompting it in the negative, you'll get fed back what you want.
So I think the anti-vaxxers might be asking ChatGPT or whatever
more emotional questions.
But they're getting the information they want.
And I've encountered that a lot as well.
There's also personalization that happens with these AI companies where they try and make it
so that based on your past queries, it answers more in line with that.
So if you're constantly asking questions with a conspiratorial mindset, perhaps it's
going to give you a conspiratorial answer.
Absolutely. That's the sycophancy problem. It's always trying to make you feel good, "that's a great question," and then lean into that to try to enhance the relationship, rather than reflecting reality. How do you think AI plays out with skepticism? What role does it play?
So, I mean, it's a tool, and like all tools, it can be used for good or for evil, right? I think my biggest problem with AI is the oracle problem. People see it as this magical oracle. I'm
getting emails from people saying, hey, AI said this, this is the truth. It's like, that's your
reference? AI, that's your reference? It said it, so it must be true. Right. And again,
it was a prompt problem. The way you prompted it kind of provoked that answer. Here, I prompted
it this way, I got a completely different answer. But let's dig into the references,
right? And in this particular case, I think all the references
were hallucinations. They were all fake. Right. Just like RFK's reports, right? And they didn't
even know about that. What do you mean it's fake? They had no idea that
AI could spit out fake scientific references,
and they didn't check them.
So there's always a teaching moment there.
It's like, you've got to check your sources,
you've got to always go back to primary sources,
check your references, blah, blah, blah.
But of course, somebody like that is not in the habit of doing that.
But whatever, we do what we can.
Maybe it broke through,
found the chink in their armor a little bit,
made them think about stuff.
Do you think with AI becoming more prevalent,
we're going to be creating more skeptics or less skeptics?
I don't know. I really don't.
There's the AI slop problem.
I think, so again, it's a tool that can be used for good or for evil.
I think on the good side when I'm being optimistic.
And I do think that societies do get more savvy over time.
And we see this when we can compare different countries.
You know, like when the Soviet Union fell, people went from an authoritarian world
to a non-authoritarian world for a while,
and pseudoscience blossomed everywhere
because nobody had any resistance to it.
They were like somebody with no immune system, right?
Because they never had to think for themselves.
They just were told what the authoritarian government told them.
Whereas, you know, in countries that are more open,
free speech and commercialism, whatever,
it's like we get savvy about, like we know commercials
are trying to deceive us and, you know,
we get more savvy that way.
When people get access to the internet,
and now AI is just an extension of that,
people are actually getting smarter, I put that in air quotes, because it depends on how you measure it, right?
Intelligence isn't just one thing. But if you do standard IQ tests, scores are actually increasing over time.
So people have way more access. You're probably too young to remember the before times, when you would get into a debate with somebody about some fact.
It could be, who was the star in this movie, whatever? And you would just never find out what the right answer was.
Because how would you find out? If it was the kind of thing you could look up in an encyclopedia, and you had
one, you would do it. But now it's like, instantly, you have access to the world's store of
information and you find the answer and that's the answer, right? If it's something, you know, factual
like that. Controversial claims, like do vaccines cause autism, the internet's terrible for that,
because then it's, how do you use it? What skills do you have? What's your Google-fu? Like,
now, what's your AI-fu? Do you know how to use AI to get reliable information,
to use it to search information more quickly,
but then still go through the process of digging down to actual reliable sources
and finding out what both sides have to say, and get to a point where you're like,
okay, I kind of have an idea what this issue is now?
Or are you being intellectually lazy? And I think laziness is the baseline of the human condition.
Not really as a criticism.
It is a criticism.
But, I mean, we evolved that way, right?
We evolved to be efficient and lazy is efficient.
And so we're always trying to get to the result as quickly as possible
while expending as little energy as possible.
But in today's world, you just can't do that
because it's too complicated.
There are people who are trying to deceive you.
There are people who have already been deceived.
They're passing it forward.
There are so many things,
you just can't survive in the world today being intellectually lazy.
But that's the baseline still.
And so AI is just a really big excuse to be lazy, man.
Just have AI do the thinking for you,
have it do whatever for you.
And so that's my worry about it, is that it will make people even more intellectually lazy, which is death.
Yeah.
And speaking of the forests being green, how do you think that plays out for the medical students of tomorrow?
Will they become more lazy where they're not actually doing the fact-checking themselves or getting the answers themselves?
And they're just hoping that the AI solves it for them?
That certainly is a risk.
And that's, again, where the answer to that is institutional, you know, and that means,
having the physicians at academic institutions
who are teaching the next generation of doctors,
teaching them how to think.
I do, again, my bias coming from Yale
is that we do a good job of that.
And the entire really medical school curriculum
over the last 30 years has evolved
away from here are some facts for you to know
and to memorize.
Not that it was ever just that,
but that was a lot of it.
A lot of lecturing.
Yeah.
To, this is how you think like a clinician,
right? Because we just assume now that they have access to all the data they need,
and we don't really need to be a conduit of facts. Like, just look it up. I'll tell you how to look up facts,
maybe how to evaluate them, how to put it all together. But what I'm really teaching you is how to think.
That's like the number one thing I need to teach you: how to think like a clinician. And
I lecture specifically about that, specifically about what science-based medicine is,
and how you look at the literature and come to a conclusion
about what works and what doesn't work, and what's true and what's not true.
What are five tangible ways that the average person can learn how to look things up correctly?
You can't.
I mean, depending on the level that you're talking about, right?
So for a non-expert, the answer is: find out what the experts are saying.
That's the answer.
So you want to find reliable sources, you know, essentially
institutions that are dedicated to science, that are not dedicated to a product or a conclusion,
right? You don't want to look at the Society for Anti-Vaccine Whatever. You want to look
at just a general scientific organization that pores through the evidence and puts out a position
statement or whatever. Like, not an advocacy group. Not an advocacy group, but scientific;
their advocacy is science and critical thinking or whatever, not a conclusion, right?
Like, as we say, if your only tool is a hammer, everything looks like a nail, right? So you don't
want to ask that person. As they say, don't go to a surgeon to find out if you need surgery.
You want to go to the non-surgeon first. Yeah. And then they'll refer you to the surgeon
if they think you need a surgical opinion. But anyway, so that's for a non-expert,
which is everybody for almost everything, right? Maybe you're an expert in one thing. Maybe you're
not an expert in anything. Most people, even if you're an expert in one thing, you're a non-expert
for everything else. So for most things, you're finding out what the experts say. If you have enough
knowledge that you can go to the primary literature, right? Let's just say that level, because
expertise is a gradient as well. It's not a black-or-white thing. If you think you can read and
understand the primary literature, then that's where you have to go. You know,
you shouldn't rely on secondary sources, like what people are saying about it. You want to go back
to the primary source. And then, you know, it's about finding reliable sources,
evaluating, what is it actually saying? How reliable is it? And
what does everyone else say? Like, what are both sides saying, and how do they respond, until you've
sort of found your way to the end of the argument? And then, when you come to a conclusion,
you still ask, what do the experts say about it? Because then you're going to be
checking yourself against other experts, or just experts, you know, depending on where you are.
There's no quick way to do that. There's just no quick way to do that. It's a lot of work.
And depending on how complicated the question is, that process could be very complicated. I mean,
I've sometimes spent months, you know, doing a deep dive on a topic that was new to me before I felt like I should even talk about it at all.
What was the last subject you did that on?
So, I mean, for that long, I've done that recently on fusion, on battery technology.
The first one I did it on was the thimerosal issue, right?
Like, the first time I heard that, it's like, okay, I kind of know what I think the answer is here,
but I'm going to not make any assumptions.
I'm just going to...
I'm going to disprove my hypothesis.
Yeah, I'm going to read a few books.
And, you know, at first you're like,
oh, that might be, there might be something to this.
There's a lot of information.
But let me keep digging, keep digging, keep digging.
And then after a few months of, like, really poring through it,
it's like, okay, now I get it.
Now I get what's going on.
So that's constantly the case where, you know,
you have to do it.
If you're going to talk about it in the public
as like a communicator, you have to know it to that level.
If you're just an interested
person, you know, there are so many sources out there. That's a positive aspect of social media
and the internet, right? There are lots of people who are experts giving the world the benefit
of their expertise. Well, that's the tricky part, because even in my debates, people would say,
why do we have to listen to your experts? I want to listen to my experts. Yeah, the dueling experts.
There's like a new expert for me every day. So that's, you know, where the real skill of being a
critical thinker, or being a scientific skeptic, or, you know, practicing science-based medicine
comes in. One of the things you realize is that there's an expert who will say
anything, right? And that authority does not rest on any individual. Because any individual,
no matter what their resume, can be wrong. And then people say, well, why should we listen to you?
And I say, don't. You shouldn't listen to me. You shouldn't believe anything I'm saying.
I'm just giving you stuff to think about. Find out for yourself. Verify it for yourself.
find out what other people are saying.
And if I'm wrong, tell me and I'll correct it.
I'm not telling you anything out of authority.
I'm just trying to help you think about things a little bit more deeply.
So it's very easy to do.
And RFK does this as well.
He'll cherry-pick his expert, right?
So if you have the conclusion first, and then you pick an expert to match
the conclusion, you aren't checking your beliefs against the expert.
You're choosing the expert to support the opinion you already had.
And of course, everyone's going to do that.
That's the one downside of the internet: you can say, find me evidence that I'm correct,
and you accept whatever comes up as evidence that you're correct.
Look at that, I found it.
But just the ability to find an expert who agrees with you, or the ability to find a paper
that seems to show what you think, is not the end of the process.
It's not even really a good beginning to the process.
Your first question should be, tell me why I'm wrong.
Show me who disagrees with this position or what evidence.
That's when you get confidence in your opinion: when you're
actively looking for the things that prove it wrong, and it survives a dedicated attempt at
proving it wrong. Then, tentatively, you say, maybe there's some value here. But people will,
this is a cognitive bias, right? People want to confirm their existing beliefs and they will
look for evidence to support their beliefs. That's just another way of doing that. So always it's like,
okay, but that's what that one expert says. But what does the totality of experts say?
Are 98% of experts saying the same thing? Are they saying something else? And does this guy
have a track record that makes you think you should listen to him over the other 98% of
scientists, or whatever? You keep going. But, you know, they have to buy into that. Which is why,
if you're playing the long game with that person, you'll teach them that about an issue
that you agree with them on, rather than about the thing that they're very defensive of.
Yeah. And then hope that they can apply it. And they may or they may not. You can't really
control that. Yeah, I had a guest on the podcast not too long ago, a psychiatrist.
Dr. Amen, whom I asked that exact question you just presented: don't you want to
try and disprove what you believe in, in order to make it stronger, because then it can withstand
an argument?
Because if it's tested more thoroughly, it becomes more valuable, and then maybe we will
utilize the technology he was talking about.
And he openly said no.
To me, that is mind-blowing.
Yeah.
How does one get to a place where they say, no, I'm not interested in disproving myself?
Yeah, right. I asked the same question of somebody who was promoting alternative medicine nonsense. And they said, well, how can you prove anything wrong? Well, you can prove things wrong. You can prove that things don't work. I mean, that's a possibility. Although it's usually a lot harder to prove something wrong when they haven't even proved that it does work, right? And there's also the question of, and this is where you get many, many layers, who has the burden of proof? Right? You're the one making the claim, especially if it's a new claim or it goes against the grain. The burden of proof is on you. I don't have to prove
that you're wrong. You have to prove that you're correct. And again, you're right:
your primary goal should be to prove that you're wrong. And good scientists do that. Part of the thing
that we try to do, because it's easy, especially as a skeptic, to get into the pattern
of constantly talking about the people who are doing it wrong, and trying to correct them and correct
the misinformation. Yes, that's a huge part of what we do. But we do discipline ourselves
and remind ourselves, we also have to make sure we're showcasing people who are doing it right.
And even though it's unremarkable, and it's good that it's unremarkable, it's, you know,
it's like when everything goes right, nobody says anything. You only complain when things go wrong.
But, for example, there were the scientists, this is now going back 10 years or so, who said,
we think we found neutrinos going faster than the speed of light. But actually, we think we're
wrong; we just can't prove ourselves wrong. We've tried everything to prove ourselves wrong,
and we couldn't do it.
Help us prove ourselves wrong to the scientific community.
They didn't say, give us a Nobel Prize.
We proved faster-than-light neutrinos.
They're like, God, there's got to be something wrong here.
Here are the 20 things we did.
And eventually the community at large did find what was wrong.
There was a bad cable.
It was something really stupid.
But it was like, okay, all right, we didn't break Einstein.
All right, all right, that's what we figured.
There was something wrong with our setup.
That's a good scientist.
They did exactly what they were supposed to do,
really, really, really trying to prove themselves wrong, and then not arrogantly coming out
asking for a Nobel Prize, but saying, help us figure out what we did wrong, you know.
So when you speak to an individual who is of that mindset, where they're not interested in
proving themselves wrong, and you're perhaps entering a disagreement, a debate with them,
how do you approach that?
What's the context, right?
So is this a public debate where I'm actually talking to the public?
And I don't really care what this person believes.
I'm talking to the audience vis-a-vis this person, right?
So then I'm just modeling good critical thinking.
I'm modeling good, you know, looking for the evidence and, you know, for the third party looking on.
That's mostly what I'm doing.
I think it's mostly we're both doing, right?
Whether you're in the comment section of a blog,
I don't really care what the person I'm responding to believes.
I'm setting the record straight for everyone else who's reading it.
And I've been in, you know, very similar debates as well.
I don't ever, ever expect to change the other person's mind.
I'm just modeling what is logic and evidence for the audience.
What if you have a family member, an uncle that is anti-vaccine, and you are discussing
it with them?
And ultimately, you're doing it because you want good for them.
What do you do in those scenarios?
So I'm going to shamelessly plug my book right now, which we haven't done yet.
So The Skeptics Guide to the Universe book, and same names are a podcast, right?
We have a chapter on that, literally on how do you convince your family member to be more critical
thinking.
because it's like the most common question that we get.
So here's a completely different context.
Now you're invested long term, I'm assuming,
in your relationship with this person, in this person.
You actually care about the person.
And so when you're playing the long game,
what we recommend is,
firstly, you don't go right for the jugular, right?
You don't say, you're wrong.
I want to prove you wrong on this thing
that you care very much about, right?
Because people tend to dig in their heels.
The best thing to do is to say,
first of all, let's see if we can find where our common ground is.
First of all, let's just find some common ground.
What do we both agree on?
And then once you find that common ground,
and you may not be able to, but let's say,
and you have to go really basic.
Like, do you agree that facts exist?
Sometimes you have to get really basic.
Do you agree that science, you know,
the scientific methodology works?
What if they start right there and they go,
ah, facts are tainted by capitalism?
You don't think there's any fact, you know,
with any amount of transparency or replicability? What if it's replicated in other
countries, in non-capitalist countries? Is there any level of evidence where you'd say,
okay, this is almost certainly true? Again, you just keep going more and more basic.
If you can't even get that, then you know not to waste your time, right? If they don't agree
that reality exists, that facts exist or whatever, then you know you're never going to
convince them. You're 10 levels, you know, above talking about whatever the specific
issue is, anti-vax. You're never going to convince them about that if they don't agree that facts exist.
You have to get really basic.
But most people do think that the world exists, that reality exists, that facts exist,
and that there is a certain amount of evidence where you could say, okay, this is probably true.
And you just keep going. Do you think the world is a sphere?
Do you think it's round, or do you think it's flat?
Not everyone would agree with that.
Sure.
Just find something.
Just find something you agree on.
But that's because the earth is flat.
Yeah.
But you say...
I saw the fear come across.
I know, I know.
But some people...
I have people in my life who believe that.
Trust me.
But, you know, you find...
Well, tell me, how do you talk to that person?
What do you...
What's your...
You find the common ground,
and then how do you start introducing facts
about the earth being flat?
So, yeah, so you find the common ground
and you build slowly
on top of that.
And if you...
In terms of addressing a specific issue,
and this is where I think
there's often a difference between skeptics
and just general scientists,
you have to know the subject
cold. You kind of have to know what they're going to say to you. Right. So there was,
you know, Duane Gish, you ever hear that name? So Duane Gish was a creationist; he died,
whatever, five, ten years ago. He made a career debating evolutionary biologists. Now, you might
think an evolutionary biologist is going to kick a creationist's ass, right? Because they have
all the facts and logic on their side. Well, not necessarily. Not necessarily. He basically,
you know, owned a lot of evolutionary biologists because of a couple of things. One, there's the
term Gish gallop, you may or may not have heard that term; it comes from Duane Gish.
A Gish gallop means, and I've seen videos where people have done this to you, where they
just throw out so many different misconceptions and factual errors that you cannot possibly correct
them all. So you have to try not to let that happen, right?
So, but part of the problem was, the evolutionary biologists knew evolution cold,
but they didn't know creationism. They didn't know what the arguments were that the
creationists were going to use, and how they were going to twist what they said,
and the bogus references that they were going to use, et cetera, et cetera.
But if you do know that, if you know the creationist arguments
and their misinformation specifically,
you can counter it much more effectively, much more efficiently.
So at that level, you just have to know what they're going to say.
So you have to know what the flat earthers are going to say.
What do they say?
For example.
So, like, for example, you'll say, well, how come, you know,
when we're looking at the sunset, you can see it literally moving below the horizon?
They'll say, that's an optical illusion.
Say, okay, well, explain that to me.
What's the optical illusion?
Explain the optical illusion to me.
They can't.
So it's very easy to get to the point where they go, I don't know, but this is the way it is.
It's like, okay, but you don't know because it's not true.
Let's see, how could we figure out if it is true or not?
Is there an experiment where you could do, or is there a piece of information we could look up?
And again, it's like, how much time do you have?
How much are you invested, et cetera, et cetera.
And again, I never plan on winning in the moment.
They're never going to say, God, you're right.
I was wrong.
They're never going to say that.
But you're just planting a seed, right?
You find the common ground, you plant a seed, and then you give them space. And then you come back,
you know, or they may come back to you. And I've seen that work over time. They have to
convince themselves. They have to, at some point, they have to convince themselves. You're not
going to convince them. But you've got to give them the tools to do so. If I'm really playing the
long game with somebody, then again, I engage them on topics we agree about and say, oh, yeah,
you don't believe in, whatever, you don't believe in aliens? I don't believe in aliens either.
Why do you think people believe in aliens?
And how do we know that they're not visiting the earth or whatever?
And then you engage their skepticism on topics where you agree
and you develop the general lesson.
You always generalize a lesson.
That's right.
People will sometimes, whatever, they will engage in confirmation bias or whatever.
And then you give them the tools that you hope eventually they'll apply to whatever their true belief is.
And again, that's why it's the long game.
It's a process.
There's no quick way to do it.
When I've tried to do this with friends and loved ones,
they feel they're being manipulated.
And they don't even want to play the game.
And I'm like, it's not a game.
You're learning techniques that you like.
But they feel like it's manipulative.
I don't know why it feels that way.
Yeah.
I mean, it's, yeah.
Because they're challenging themselves
and it probably feels uncomfortable.
I think that's correct.
And again, any psychologist will tell you
that people find all kinds of ways
to resist doing the hard work on themselves, right?
And so that's one of the ways
to just flee the hard work.
And again, there's nothing you can do;
at some point you're engaging in therapy with them.
I mean, you kind of are.
For sure.
And so...
I mean, cognitive behavioral therapy is challenging irrational beliefs.
Yes.
At its core.
That's what it is.
Exactly.
So a lot of it is cognitive behavioral therapy.
But with, you know, here are the tools.
Again, it's just, if you just persistently model critical thinking and give them the tools
over and over again, eventually they may come around.
Again, I don't have a formula.
There's nothing that always works.
or there's nothing you could do in every situation.
There's no secret sauce.
It depends on the relationship.
There's no secret sauce.
It depends on the person.
Depends on how much work they're willing to do.
It's like, you can lead the horse to water.
All you can do is, you know,
be the best communicator you can be and hope for the best.
Right.
For this Gish gallop situation,
it's something I ran into on this podcast with a guest,
and we're going to bleep the name, because I don't want people to know,
because we didn't publish the episode.
It was with...
Okay.
Oh, yeah, he's great at that.
Oh.
Amazing
at it. And it was my first combative interview, or debate-style interview, where I was going to
challenge my guest. And I could not keep up. Yeah. There were 30 things being said, and when I would try and
backtrack, there would be 30 new ones introduced. And I was basically disappointed at my
inability to hold him to any statement. And I'm curious. He's a world expert. I mean, at
nonsense. He's a world expert at the Gish gallop.
At the Gish gallop, absolutely.
And so you've got to really...
I've never met anyone else like him.
Yeah, yeah, yeah.
So then we've had this debate in the community.
Okay.
Ever since I've been in the community.
Do you debate people like that?
Because who is it going to help?
I've always, you know, thought that we should debate people like that.
But so one answer is, don't debate them, right?
That's one...
We're going to come back to that because I'm very excited to talk about that.
It's like, well, it's a no-win scenario.
Don't debate them.
Okay.
The other answer is, you agree
to debate them, but with a format that sets limits on the Gish gallop, right? So it's not a free-for-all:
you're going to have a question, and you're going to address that one question. You go
back and forth on that one question, and that's it. And you get them to agree to that format,
and you have a moderator who will hold them to that format. If they still bust out of that and Gish gallop,
then you've got to call them on it over and over again, and it could be tiring, and,
again, it may not succeed. Somebody like that, who is such an expert at doing
it, and charismatically so, I think, you know, that's his stock in trade.
It's just this complete stream of nonsense that is endless.
And you can never hold them to one thing because they don't, that's not their process.
It's all vibes and feelings and just, you know... that's why, you've probably
seen the website where it spits out, like, phrases.
Oh, yeah.
It's all AI stuff.
It's like, yeah, it's indistinguishable from the nonsense.
Could not keep up.
It was just the amount of claims being made.
And all of them were wrong, even maybe differing in the amount that they were wrong, but they were all wrong for some reason or another.
And I just couldn't.
And it just made the conversation impossible because I never wanted it to be an angry debate.
Even with folks who I've brought on to have a very strong disagreement with, it was a friendly conversation.
Always. It has to be.
If you get angry, you lose.
Yeah, exactly.
So I could not keep up.
I almost felt like I would have to be more assertive and angry in order to hold them to something,
but then it would just fall apart.
So I don't know.
You need a moderated format.
That's like a Godzilla level.
Like, I don't know.
He's my kryptonite or something.
Yeah, yeah.
And, you know, I certainly wouldn't just agree, without conditions, to debate anybody.
Because there are people who are just looking for a venue.
They're going to use you.
They're not fair players;
they're not going to follow the rules or whatever.
Why would I give them a venue when I think the outcome is inevitable, like a negative
outcome?
So sometimes you can just choose not to do it.
But if there's somebody who I think is, they may be wrong, but at least to some extent,
they're a fair player and they'll follow the rules.
Absolutely, I will debate them.
So I did these roundtables against people who were anti-vaccine, and in the second round,
I did it against MAHA supporters or RFK Jr. supporters.
I got a lot of criticism from my colleagues saying I shouldn't have done the first one.
I did a terrible job in the first one because I was overly empathetic to their illogical or
inaccurate statements.
The second one, they said that I did a slightly better job, but still this puts misinformation
and accurate information on the same table and I'm doing a disservice to society.
I disagree, but I'm curious what you think.
So I did watch those.
And my reaction to the anti-vaxxer debate was, I see what you're trying to do, but I do think you have to think about, again, if you're thinking about the people watching it, you know, I don't like it when misinformation statements go unchallenged. And I don't like providing a venue for that to happen. So like on my own blog, for example, if people state misinformation in my comments, I feel obligated to correct it. And if they become just a font
of misinformation that I can't keep up with, I delete them from my comments, because I'm not going to be
a venue for misinformation, right? So I think it's kind of a no-win scenario because everybody
wants you to be their warrior and to definitively vanquish the other side and to win and to do
everything that it's impossible to do, to be a good guy, but also win, and to always have...
it's just impossible, right? So you're kind of setting yourself up. Again, it's partly welcome to the
internet. You're going to get that. No matter what you do, you're going to get criticized from one side
or the other. But again, my constructive criticism would be more to control the conversation so that
they're not Gish Galloping you with misinformation. And, like, what I probably would have wanted to do anyway,
what I tell myself I'd want to do, is, as soon as they give the first piece of
misinformation, say, all right, let's talk about that before you go on, and then talk about that.
And to that end, I do think there's something to be said for focusing on that one piece of
misinformation. And then this is where you have to have the knowledge ready to go. Or if the
format allows, like, let's look it up, you know. And you have to take it to the end. Like,
definitively, this is not true, right? There weren't whatever a million people dying from the COVID vaccine.
That is definitively not true. Let's talk about that until we can convince you whether or not that's
true. And then, because that is the truth, those are the facts, they either get to the
point where they're like, well, I just don't believe it. Okay. So now you're just going to deepen the
conspiracy thing about it. But from a factual point of view, we've established it's not true.
Or they go, okay, maybe that one thing isn't true, but what about these other things? Then you say, well,
if you were wrong about that thing, maybe your source of information is systematically wrong.
Now you have to question everything that you believe about the COVID vaccines, because
you're relying on bad sources, clearly, because this thing that you said definitively as a fact
was completely not true.
So again, depending on the vibe and the venue,
that kind of approach is effective.
Not for the person, they're never going to say,
oh, you're right, I'm wrong.
But for the audience watching,
they'll at least say, okay,
the misinformation was at least corrected
for people who are watching,
and then you sort of derive a deeper skeptical lesson
from the misinformation.
It's really hard to do.
You know, again, I've had patients,
and I'm sure you do as well,
though that's a different context.
Patients do that. I'll say, well, listen, this is what I think the information is, but I never
confront their beliefs. But if you're having a public debate for the purpose of educating the
public, you have to think about what's the net information getting to the public? What's the net
effect of the interaction you're having? It's really challenging. But just take it as a learning
experience. Yeah, well, this is... I'm curious to give you some backstory on it, to see how you feel
about the backstory of it all. When I saw that these platforms were having a lot of success with
that format, I knew it wasn't a perfect format
to have the discussion that I wanted to.
But I felt like there needs to be in the social media sphere,
something that goes viral that is mostly good.
And I felt like that would be a potentially good opportunity.
So I actually reached out to them to volunteer for this.
They weren't going to host that.
And they were receptive and they allowed it.
I also requested that they include people who were truly vaccine hesitant
as parents, as medical professionals, as police officers,
so that they wouldn't just bring in advocacy people.
And I wanted to talk to them as if they were patients.
And then when I did it, because I had no idea what to expect, this is not my format.
I have no control over the edit.
It was three hours.
So a lot of pressure and fear.
I tried to prepare as much as I could, but who knows?
I remember they even edited this out, so this is cool backstory.
I sit down, and the first person that sits across from me says,
was Hitler a murderer because he killed himself?
And I'm like, what the hell?
I'm here to talk about vaccines.
I'm like, I don't know much about World War II history
and what happened after the fact to speak on it confidently.
No matter how much you prep, someone's going to throw you a curve.
There was no opening statement.
Even on topic, like no matter how much you think you've prepped,
and I think a lot of times this is deliberate,
they pull out some new thing, right?
That you can't possibly have prepped for.
It's like, what about this new thing?
I just came across, like, come on.
I was like, well, we'll take a look at that,
but let's talk about stuff that we've had.
And what's interesting, they were supposed to be responding to my claim, which was that
anti-vaccine lies cost people their lives. And that was somehow their claim. So it went off
the rails multiple times. So after I filmed it, I was so nervous, for the reason that you say, that I
didn't fact-check every single piece of misinformation they said, because I approached it like a patient encounter.
I asked, where is the biggest benefit I can get from a piece of misinformation they presented,
where I could present a cohesive argument
to model a good thought process.
And I'll finish it just to complete the point.
I then saw the edit from the first time that they sent to me
and I go, oh my God, there's so much misinformation in this.
But wait, you didn't have control over the final edit?
No, no, this is their thing.
And then, so they just sent me a rough cut.
And I said, this is awful.
I don't think they should publish this.
I'm almost going to ask them not to publish it.
And when I told them my concern,
They said, oh, we have a fact-checking service that pops up on screen whenever they make these claims that you didn't discuss.
And we even fact-checked some of yours.
And if you see on the sidebar, they have this agency that pops up factual sources, which will mitigate your concerns about the fact that there's more misinformation being spread.
So the fact that you didn't correct everything shouldn't be a problem.
And actually they did.
They popped up quite a few times when they were making claims about smallpox blankets and things that I just was not an expert in.
And I said, okay, that mitigates it.
And then the ultimate deciding factor was I called Paul Offit.
I'm sure you're familiar with his work, probably know him.
And I showed it to him, the rough draft.
And I said, do you think the world should see this?
He goes, if you don't publish this, I will be mad at you.
So I said, well, if he thinks it's valuable, I think I should publish it.
And I've gotten obviously a tremendous amount of feedback.
And in doing round two, I realize that we have these sources that are going to be there.
So there's no need to fact check every statement.
I more so want to get them to come to their own conclusions later down the line that, oh, man,
I am being tricked by this big industry, et cetera.
And, okay, so that's the backstory on it.
Yeah, yeah.
I mean, so the, like, showing the real information on the screen, even if you didn't say it in
real time, is fine.
As long as the information is there, another technique that sometimes helps is to, you know,
I know you want to focus on things moving forward, but you could say, well, none of that is true,
but let's talk about this, right?
Or at least say, put it on record, none of that's true.
And so what works, right?
So that's a question that researchers have been asking for the last 30 years at least.
And we have some information about what works in terms of countering misinformation, right?
The 30-year-old model, if you go back to Carl Sagan in the 1990s, right, those science communicators,
back in the before time, before social media, the model was what we call the knowledge-deficit model, right?
People believe pseudoscience because they don't understand science.
They lack information.
And if we just give them the information, they'll believe what's true.
That is almost entirely wrong.
Like we've learned through research and experience that that's not 100% wrong.
Again, nothing's 100%.
Some topics... it's topic by topic.
And with some topics, like anti-GMO, for example, which is very interesting, people who
are very unscientifically anti-GMO are full of misinformation,
and if you give them correct information,
they will moderate their position, right?
So they will actually respond to factual information.
With people who deny global warming,
if you give them factual information,
it has zero effect.
It has no effect.
In fact,
if anything,
they'll dig their heels in.
Is it based on the emotional
or cultural lens or something?
Yeah,
it's partly is like how emotionally invested
are you in the answer?
How much is this part of your worldview?
And also,
how did you get to this position?
And also, probably, what is this stopping?
What fun stuff is this stopping you from doing if you do believe in it?
Secondary gain, ulterior motive.
So with anti-vaxxers, it's not a knowledge deficit problem, right?
Because it's not that they lack information.
They do lack information, but that's not the problem; the problem is that they're full of active misinformation
and a very compelling narrative.
And that's always the hardest thing to confront.
And what the evidence shows is that you can't just take away their misinformation,
you can't just give them correct information.
You have to provide for them an alternate narrative for understanding the world
because they are using this as a way of feeling the illusion of control over a scary,
chaotic world, right?
And specifically when kids are involved, forget about it.
Parents are so easy to scare and very difficult to reassure, right?
Like, the risk aversion of parents is massive.
And if somebody says, I heard something scary about
vaccines, that's enough. That's enough to make them vaccine hesitant at least, and
reassuring them with dry facts and charts and graphs and whatever is not really going to
fly. So you have to give them an ulterior, an alternative narrative. And that's where I think
the skeptical worldview is really important because we're not just giving people tools. We're
giving them a way of understanding the world, through logic and evidence and reason, right?
and a feeling of control, not the illusion of control, but actual control over their own beliefs at least, and humility towards reality.
It's like, listen, you know, you're never going to know the definitive answer to anything.
Just deal with it.
Get comfortable with uncertainty.
You get comfortable with the complexity.
But with these processes and these tools, you can understand how things work.
And, you know, you can think about at least meaningfully what to believe.
and understanding conspiracy theories,
understanding how misinformation gets marketed,
gives them a way of reframing all the anti-vaxxer misinformation.
You can't just play dueling facts.
You've got to give them the skeptical, scientific narrative
to make sense of it all.
And again, you have to build that.
That's not a kind of thing you can just hand to somebody.
You've got to give them the tools to see the world that way.
Are there any things we can borrow
or use that the leaders in the anti-vax community weaponized quite well in being persuasive
that we're not doing in the traditional medical world?
I can't tell you how old this debate is, too.
Do we use the tools of the enemy to sort of fight back?
And people have flirted with that, and it never really works out well in my experience.
I think...
What example sticks out on your head for that?
So some skeptics have perpetrated hoaxes to then expose the hoax and, in the process, the gullibility of the media.
And I think the net effect of that is, well, they're hoaxers, you know, or...
Yeah, didn't they... People got fired from a position. I remember there was a big case about it.
Yeah, it doesn't happen that much. And I don't think anyone's doing it anymore because of these cases that have happened.
Or they think the hoax is real
and the subsequent reveal is the lie.
It's kind of a no-win scenario.
Wow, okay.
That's a very aggressive.
Yeah, that's a very aggressive one.
But then the milder ones of,
can't we just be, you know, more, you know, manipulative
and in the same way that they are, like,
emotionally appealing or whatever?
It's like, okay, I mean,
I kind of see where you're coming from,
but I just think ultimately it's a temporary win at best, right?
You have to give people the critical thinking tools.
You've got to get it.
You've got to play the long game, whether it's the public education or whatever.
That's the only thing that really has lasting effects.
Let me give you an example from the anti-vaccine world.
Because we've long predicted that, you know, the public's opinions on vaccines will rise and fall
with how risky the infectious diseases that we're trying to protect against with vaccines are, right?
So if parents are more scared of the vaccine than they are of the diseases that they prevent,
they'll be anti-vaccine.
So maybe we just make them more scared of the diseases.
And then there's a certain amount that you can do,
which is just giving them accurate information,
which is fine.
Showing history, photos, et cetera.
But yeah, but then do you show them, like,
here's a horribly disfigured person
from out-of-control measles or whatever?
You know, the thing is,
if you overstep the evidence or overstep what's reasonable,
and it's easy to do that.
You might have a good motive
and you think,
I'm just going to scare people into doing vaccines.
Then it comes around to bite you in the ass.
Because then again, you are sort of using the tools of the enemy,
and it just sort of saps your authority,
the degree to which people will trust and believe you.
So like when the measles outbreak happened in Disneyland,
this was 10 years ago or so, I forget exactly when.
Following that, there was a bunch of pro-vaccine laws passed
because people were like, oh, this is real.
There was, you know, a big measles outbreak, and it was caused by anti-vaxxers, people who were not vaccinated,
and we're like, yeah, we predicted that would happen. Did that create lasting effects? No, it was a temporary
blip, and it just went back to where it was, if not even worse, you know, afterwards. So, you know, just
historically, I don't know that that works. And, you know, I've personally chosen just to be
totally straight and not try to be manipulative in any way.
And then I think the debate even got to the point of framing.
Again, you probably haven't been in the skeptical movement long enough to remember
the big framing debate that happened, right?
I didn't know that there was a big framing debate.
So this is, again, going back.
Maybe I just don't know the title of it.
Maybe there is.
Yeah, well, I'm sure it keeps these kind of things keep coming up over and over again,
but this was specifically the framing debate where some skeptical science communicators said
we have to be careful to frame what we're saying in a way to have a maximal effect on public opinion, right?
And which I think is totally legitimate.
But even to that very milquetoast, mild recommendation to think about the framing of your message,
there was a massive backlash of that's deceptive and we shouldn't, we should just be as straight as possible and not try to frame it.
So I think it's just misinterpreting what framing really means.
Yeah.
Because framing, like, we know this as physicians. You could say,
this surgery has a 2% fatality rate.
Or you could say this surgery has a 98% survival rate.
That's the exact same information framed two different ways.
And we know that people are way more likely to go for it if you say it has a 98% survival rate than a 2% death rate.
That's framing.
The information's accurate.
It is complete and it is completely fair.
It is just the way that you're presenting it.
And so the thing is, you are framing whether you know it or not.
Either you're framing by default, by not thinking about it, and framing it in whatever way
happens by chance, or you thought about it and you're presenting it in a way that's crafted
to have the best, you know, public result.
That's all we meant.
Or you're trying your best to not frame.
Yes.
And therefore becoming inevitably a worse communicator.
Right.
Right.
Because that's harder to listen to.
Right.
When, if you say it's 98% chance of success, 2% chance of... stop throwing
numbers at me.
Yeah.
Is it a 98% chance of success or 2% chance of failure?
Right, right.
So anyway, that's how, you know, persnickety the community is online about things like that.
Very sensitive to any suggestion that we play the game, you know, rather than just the facts.
I think it's partly the people who are part of the community.
You know, a lot of scientists, et cetera, which is fine.
That's the strength of the community as well.
But as the people in the trenches trying to communicate to the public, we absolutely have to think about the framing of our message.
and how it's presented.
Especially in this day and age with social media,
I mean, 20 years ago,
if RFK came out and said something inaccurate,
a 60 minutes piece maybe would happen
and everyone at home at that time,
always had their TV on
and they'd watch the hit and they'd say,
oh, well, what he said was nonsense.
Yeah.
But now, where are people all uniformly hanging out? Online.
Right.
And online, it's so difficult
in the era of the attention economy
to actually get people to watch something.
And I feel the more you can frame something in a way that is valuable to the public, still honest, still transparent, the more valuable of a communicator you're going to be.
Totally.
There's also the echo chamber problem.
Everyone's going online and not getting the same information.
They're getting curated information just for them, to reinforce their tribe, their belief system, whatever will keep them engaged the most.
So if you already are predisposed to believing MAHA and RFK Jr., you will see a very RFK Jr.-friendly treatment of what he said.
And if you already think that he's, you know, the spawn of Satan, that's the information that
you'll get.
Which is, again, why I agree with you completely that we have to go into the other spaces.
It may not be ideal.
It may not come out pristine.
It may not be hard hitting in the way that we want it to be.
But even just, hey, I'm not a two-headed monster.
I'm a reasonable person.
I have good intentions.
I have reasons behind what I'm thinking.
Here they are, and make of it what you will.
That is so useful.
I can't tell you how many times I've been in,
like there was this one time I was debating a homeopath
in front of a society of homeopaths.
Like the entire audience was homeopaths.
Okay.
And I was debating a homeopath.
And this is something I feel very comfortable about.
So that's partly why I was willing to do it,
because I know it inside and out.
But also I knew all I had to do was not be a jerk.
That's all I had to do was just be,
hey, this is why I think what I'm thinking.
And then afterwards, they were like, you were so nice.
They were like, you're not like any other doctor.
No, doctors are mostly nice.
Yeah, they have this three-headed monster that represents doctors,
pharma, and whoever.
Absolutely.
So it's the boogeyman.
Yeah.
So I feel like just presenting the correct model sometimes is valuable.
As long as there are no missteps.
Yeah.
Because I feel like the misstep is the worst,
which is what I was afraid of most in these scenarios:
that I would present some data that was inaccurate,
or that I would fight against a piece of data
that was accurate and I didn't know it.
So you have to be very knowledgeable
about what you know and at the same time
your limits of what you don't know.
Right, and humble.
Yeah, I said I don't know a bunch.
Exactly, exactly.
Don't be afraid to say that's not something
or I haven't looked at that recently
so I would want to check it
before I say anything specifically or whatever.
Yeah, you always have to be hedging.
The person who you mentioned, the one that said,
I don't care, whatever you say today,
I'm never changing my mind.
She said something about diseases having patents.
And my initial thought in my head, the first neuron that fired, was to say, well, that's
ridiculous.
Rhinovirus does not have a patent.
But then I said, oh my God, what if there is some subtype of a virus that one lab patented
and now is doing research on it?
And they'll use that as a way to discredit everything I'm saying.
Better say, I don't know.
Because I don't know the truth about some subsector of this.
And it wasn't valuable.
It wasn't a point that was interesting to argue,
or for the audience to see argued, anyway.
I know, it's tricky.
Again, there's the art of the science communication.
It's like, how much of my time and capital
am I going to spend correcting this nuanced misunderstanding
of what's going on?
If it's the cornerstone of their position.
I don't know; the instinct is,
that's wrong, that's wrong, that's wrong.
You have to shoot down everything that's wrong.
Like someone's wrong on the internet kind of thing.
But you've got to pick your battles sometimes.
You also want to be likable.
Yeah.
But you could say, like, that's not exactly true.
It's more complicated.
That's a good answer.
That's always a great answer.
It's actually a lot more complicated;
they're not really patenting diseases.
But... and then whatever the point
you were supporting, you go on to the point you want to make.
So yeah, you learn to,
you have to say things very carefully.
Yeah.
Because, again, there are a lot of, like, science communicators
who are critical thinkers, like skeptical communicators.
Like, there are astronomers who are doing this,
and physicists and doctors and psychiatrists, whatever,
but we kind of all talk about everything.
And so sometimes, like, my astronomy friends
talk about vaccines, and I'm like, you can't say it quite that way. And this is why there are, like, nuances
to it. We have to help each other. It's probably the same flaws that you see in AI. It's not wrong,
but it's interpretation or application. But it opens them up, because I know, this is where,
this is like knowing the creationists, knowing the anti-vaxxers. I know they're going to make hay
over the fact that you said this when it's not literally true, even though it's mostly true,
like saying that there's zero risk to vaccines.
You just can't say that.
You know, it's not the way we frame it.
And, of course, if you frame it in just a dry scientific way,
then they make hay out of that as well.
You've got to find a way of communicating that there's no significant risk.
The benefit far outweighs the risk.
That's a thing you could say without fear of contradiction.
There's overwhelming evidence that the benefit far, yes,
there's risks to everything, but the benefits far outweigh the risk.
Saying it as if there's no risk, you know, then you're not being genuine.
You've got to be careful.
You've got to be careful in how you say things.
What's your take on the fact checkers or skeptics online and perhaps maybe in other forms of media
that like to get angry?
And they use that as their tool to be effective communicators.
Do you like that approach?
Do you dislike it?
But, you know, my approach is let a thousand flowers bloom, right?
So I try not to be judgmental about my fellow communicators in terms of how they're doing things.
It's not as if I know for sure what the best way to do things is.
And usually there's tradeoffs.
Right?
Everything's a tradeoff.
Nothing's like right or wrong.
And so, like, I get it.
You know, using anger to make people like emotionally invested in what you're talking about
and to make it engaging.
Because again, if no one's watching you, it doesn't matter what you're doing.
So I get all that.
I choose not to do it myself because it doesn't fit my,
my vibe, doesn't fit my brand, doesn't fit my personality. So you got to do what works for you.
And I try not to be judgmental of other people. The other question that comes up often is, like,
you don't go against religion hard enough. It's like, well, that's just not my thing. You know,
I try to give people critical thinking. And if they apply it to their faith, that's fine.
But you don't tell me what to do, and I won't tell you what to do. It's all good. We're all
playing for the same team here. You know, and then academically sure, we could debate about
tactics and what works better, but I'm not going to flame anybody because they're choosing to do
something differently than I'm choosing to do it. I don't know what the ultimate answer is.
I don't know what works the best. Maybe they're going to make the world better by doing that.
Who knows? Yeah, because when you're making content, at least the way I think about online,
is you have a chance to win people over from the other side. You can help educate people who
are undecided or confused. And then there's rallying the base. And I don't think we necessarily
have to do, communicate to all three parties all at once. Yeah, not everyone is going to be all
things to all people. Exactly. Stick to your lane, you know; lean into your strengths is what I
would say. Do what feels genuine for you. If you're an angry skeptic, fine, be an angry
skeptic. We'll see how it works out. I don't know. It's not me, but that's fine. You do you.
You know, that sort of thing. Yeah. And I think also with the way the algorithm serves content,
certain pieces of content can serve those different parties. Perhaps in some videos I do get more
angry than others and that content hits the base and gets them excited. And perhaps I'm way more
empathetic during a debate with a parent who saw someone suffer and they're having an emotional time.
So I think you have to kind of apply it as you go, too. Yeah. And again, the audience has different
people too, sure. Right. So no matter what you do, there are people who are going to love it and there
are people who are going to hate it. So maybe this time I was talking to people who really like the very
gentle approach and other times I get a little more ranty and then people who like that like, oh, I wish
you did that more, you know, whatever. It's like, yeah, we just, whatever. It's, it can't be all
things to all people all the time. So after a conversation with a family member, what do you
judge as a win? So with, like, one of my own family members, if later, and this is
the win, and this has happened, and this is great, if later I hear them repeating one of my
talking points to someone else as if it's their own, now I know, ah, it's sunk in and they've
internalized it. And now that's their reality. They're not crediting me
with it and I don't care. They've internalized that point. To me, that's the biggest win.
Got it. Okay. You mentioned a statement earlier that if no one's watching it, it's useless
or something along those lines. I've been quite critical of the major organizations,
AMA as an example, of essentially falling victim to that, where I don't feel they've invested
in the tools that we have today to reach audiences, to advocate. It's amazing, right? And as a result,
when RFK Jr. tweets something and gets 10 million people looking at it, they put out a tweet
or some box with 50 words on it that gets 100 likes. I tell them, you might as well have not done
this. This is a box check. This is you saying you did something, but you didn't actually do
anything. Why are the agencies so bad at this? I wish I knew, but I agree with you 100%.
They are horrible at it. And it's just the old-school brick-and-mortar, whatever, the institutions; they just don't have this skill.
I'm hoping this will change over time.
It hasn't really improved in my career.
Even my own institution, as much as I've tried to help them with it, they just don't have the infrastructure for it.
It just doesn't feel good to them, I think.
The AMA is a particular problem.
I don't know if you're aware of this, but in the 80s, the AMA was sued by the chiropractic association over restraint of trade.
Interesting.
And the chiropractors won that lawsuit,
because they said that the AMA was telling their members
not to specifically refer to chiropractors.
Not that they were countering chiropractic misinformation, that's fine.
But they said, don't refer to a chiropractor, that's restraint of trade.
So they had this very narrow decision.
It was very narrow.
But the AMA got burnt by it.
And they decided since then we're just not going to fight pseudoscience in medicine,
which is scandalous.
But that's the way it's been since that time.
They don't want to get sued again.
And so they just have really been gun-shy about doing it.
So I think that's still, you know, institutionally a problem.
And otherwise, I think it's just an academia problem.
They think it's just all dirty, that social media is dirty.
And it's an afterthought.
And it's just their efforts are so lame and frustrating.
Yeah.
You know, it took me 30 years to sort of convince them that science communication is a good thing.
Yeah, wasn't there a policy at the CDC, during peak RFK, early 2000s, environmental World Mercury Project times, to not fact-check misinformation because they felt like it would give air to it?
Yeah, I mean, again, there's always tradeoffs.
Everything's kind of a bit of a no-win situation,
but I think the coming to the conclusion
that we should therefore do nothing
is not the right answer.
But again, partly, I think the best thing about social media is that people like you and me just said, well, screw them.
We're going to do it ourselves.
You know what I mean?
And in every field, there are so many great scientists who also happen to be great science communicators, and they just rose to the top, and there's this awesome information out there. If you want it, it's there, more so than there's ever been.
So, you know, that is the good side
of, I think, democratizing information
that the bar for entry has come down, you know, because otherwise scientists are too busy being good scientists; they can't also have a career as a science communicator. But now you can do it.
And so it's great.
Of course, it also opened the door for all the pseudoscientists and all the cranks.
And now it's like, oh, now we have to do it just to sort of, just to mitigate that problem.
To mitigate that, just to have some kind of parity.
And I do think we punch way above our weight.
So I know it gets discouraging to think, oh, there's probably 100 pseudoscientists for every real scientist out there on the internet.
But their impact, though. And how do you measure their impact?
You can measure their impact by their Google rankings.
You can measure their impact by media exposure, whatever.
I think that one scientist does counteract 100 pseudoscientists.
Oh, really?
I do.
I think if you really...
Because I do think they have an outsized impact.
And especially since like the mainstream media will, again, they do that false equivalency
thing, which works against us a lot, but this is where it works for us.
Like, they'll find the scientists to say the right thing.
And even though there's a hundred bad guys for every one good guy, they'll both get, you know, equal time.
And that's only the ratio for people listening online.
Yeah.
In real life, it's the other way around. Thousands of scientists are doing the right thing.
Exactly, which is why the public
has a distorted view of what scientists say,
because they're seeing all of the cranks.
Yeah.
And there's the few scientists who are trying to communicate,
which is why we need to communicate in much greater numbers.
It needs to be part of academia, you know? It's like: how many papers have you published? What teaching are you doing? What academic responsibilities do you have? And how are you communicating to the public? That needs to be a fourth thing that is just automatic: how are you teaching science, and what are you doing specifically for the public? They're sort of doing that now. Again, it's kind of like an afterthought still,
rather than this is the most critical thing that you're doing because nothing we do matters
if we're losing the public debate. I mean, look at what's happening to the world. Look what's
happening to our government. I mean, we're losing, institutionally. And historically, it has happened
that the barbarians come to the gates
and rip down the institutions of science and learning
if we are too stuck in our ivory tower.
I used to think the ivory tower syndrome
was a myth, but it's freaking real.
It really is real.
I mean, in that they are toiling away
in their little community,
their bubble of academia,
and not really caring what's going on in the outside.
Well, that's all nonsense, we don't have to really waste our time with that. It's like, no, that's freaking the world we're living in. Public opinion wins elections.
And yes, I mean, I don't know how you could deny that now after what's happened in this country.
Well, interestingly, it's a feature, not a bug of democracy.
The fact that the majority will have the control.
And if you believe that you're so smart that you're above the majority, they'll still
subject you to their rule at some point.
Yeah.
Interestingly, I started up a political podcast with a friend of mine who's a political scientist.
And we talk about this very thing.
Like, what are the features and bugs of democracy?
That's why, you know, we live in a democratic republic, because we don't have pure majoritarian rule.
For that reason, the framers of the Constitution knew
that pure populist majoritarianism would devolve
into the lowest common denominator.
It would be horrible.
And so you need some kind of institutional fixes
that keep that from happening.
Again, unfortunately, they're very fragile.
As it turns out, and they're easily broken by
a demagogue. I mean, I just don't get it. I would say a charismatic demagogue, but that's a very
subjective eye of the beholder kind of thing, I guess. Do you think social media is a net harm or
a net positive for science? I think it's a net harm, unfortunately. But I do think there's a lot of good, you know, on there. And maybe in the long run, that good will
dominate over the harm, but I think we've been living through the harm for the last 20 years. And
it's undeniable. Again, I just have to hope that there's a learning
curve there and that we adjust and that a lot depends on the next generation and how they're
going to deal with it. And I get, I see mixed signals from the, you know, if you agree with me,
that sometimes it's like, yeah, they get it. They're pretty savvy. And other times it's like,
wow, they just live in a post-truth world and they don't really, everything is cool. Like,
they don't really know that, no, it's okay to say something's wrong. That's okay. Some things are
wrong. The fear I have, and I'm curious if you share in this, is because in the medical industry
specifically, finances drive a lot of decisions of students. You could just take a look at the most
competitive residency spots are largely driven by reimbursements and lifestyle choices.
So with family medicine, pediatric, psychiatry being at the bottom. Whereas if you think about what
makes a health system function well, it's those specialties. And maybe I'm wrong because I'm biased
and that I'm a family medicine doctor. The hardest fields are the ones where you have to be broad
and learning competency in all these fields, as you said earlier.
So my fear is these younger med students, or maybe even pre-med students, are learning from the charlatans that do so well in mass communication selling their brain chili, their goop BS, that they need to focus way more on marketing themselves than on actually getting reimbursed for being a leader in their field, researching, doing all this stuff.
Do you feel that fear?
I do think that's happening to some extent.
And again, it's amazing how long we've been having this conversation,
you know, since I've been in medicine,
in the 90s we were saying,
hey, shouldn't we be like reimbursing primary care, you know, practitioners more
and, you know, procedure-driven specialists less,
because we're really distorting the system.
And, you know, we need the quarterbacks to keep the whole system running.
We talked about it a lot, but I guess nothing ever really happened, you know. We're still having the same conversation 30 years later,
and I don't see how anything is really going to change.
And again, I don't think there's any fix, really, short of doing that within the current system. Obviously, we could scrap the whole thing and have socialized medicine or whatever.
Come up with a completely different reimbursement model.
But again, it's just a trade-off of one set of headaches for another.
It doesn't really fix the problems.
It just trades it for a different set of problems.
But I do think there are some things, like you were talking about the EMRs not talking to each other, that work better from the top down, right?
So the VA is fascinating. I don't know if you have any experience working in the VA system, or rotated through the VA in residency.
I have a broad understanding of how their system works.
Yeah, so the VA is kind of a socialized medicine,
you know, system, independent system,
you know, run from the top down by the federal government.
It's like the closest example we have in this country
of what a one-payer system would look like,
and it's really fascinating.
There are some things which it does terribly,
because it's bureaucracy and it's mind-numbingly frustrating bureaucracy.
And other things where it's like, well, that works really well.
So one thing that works really well is they have one EMR system.
That's just like, this is it.
We're using the system.
They impose it from the top down.
It works really well.
And everything communicates with everything else.
So my question is always like, we need to figure out,
and I don't think there's any simple answer to this.
How do we leverage the best of both worlds, right?
How do we decide what things work well top down?
And let's just do that.
Like, let's just say, all right, this is the EMR we're going with.
And we're going to have one national, you know, database. Or at least a standard.
You have to be compatible with this.
There is, I know, a healthcare data standard for databases. But obviously, it's not enough.
It's got to be, everything's got to be integrated.
Everything's got to work together to the point where it's one seamless system,
even if it's not literally one company, you know, doing the whole thing.
And other things, you need choices where it's not just some centralized bureaucratic decision
because those decisions are often just frustratingly blind, you know, to reality.
How do we sort of leverage the wisdom of the marketplace and the efficiency of centralized organization?
And I don't know.
Well, I think the hope was our current hybrid system was going to be that.
And it's so far from it.
It's the worst of all worlds.
You get the best of both worlds, you get the worst of both worlds, and the worst is dominating right now. We might be suffering more from the worst of both worlds.
Yeah.
But yeah, that concern, I see it playing out even with physicians who started off quite well in the evidence-based medicine world on social media. I'm starting to see them burn out on doing the right thing and then realizing,
man, if I just do some marketing, I don't even have to lie. I can just over-hype or over-sell
my services for cash. I'm smart enough to do this competitively. Why bother being a martyr?
And it's like we're losing reinforcements from people who are great at their jobs.
And I don't know how to change the incentives in a way where it's more valuable to tell the truth outside of maybe the feel-good aura of it.
Yeah, I mean, you have to have, I think, a dedication to it.
You know, I think if you're mercenary, it's never going to lead you to the right answer.
And it's kind of hard to imagine a system that will lead mercenaries consistently to, I think, the best behavior or the right decision.
So to some extent, you have to want to do the right thing for the sake of doing the right thing.
And that's got to be drilled into, I think, medical students from day one.
To some extent it is.
There's a lot of Hippocratic oath from day one, like, you know, kind of high ideals.
I think we need to really keep an eye on that and make sure that we are definitely, you know, inculcating that.
Again, forced to be green.
If you want medicine to function well and to be ethical, every individual practitioner needs to be, or at least the majority of them need to be.
I also think the other problem is,
you mentioned evidence-based medicine.
It's always a little bit triggering for me
because it's like, yeah, the problem,
I don't know how many people realize
that evidence-based medicine is basically broken,
which is why I promote science-based medicine.
We could talk about very,
I'll give you the quick...
That was literally where I was going to pivot for the next topic.
The quick version of what's the difference between evidence?
Yeah, what is evidence-based medicine?
So evidence-based medicine was developed in the 1980s,
and it has two main missions.
One is to have clinical decisions explicitly based on the best clinical evidence
and two, to get that information in the hands of practitioners at the point of patient contact, right?
So that you have the information you need when you need it,
and you're not doing things because they make sense or they feel good or in your experience
or this is how things are done, but explicitly there is clinical evidence to support that specific decision.
Sounds great.
Trying to limit bias.
It is great as far as it goes. But there was a massive unintended consequence, and part of it was just historical,
really bad timing, because evidence-based medicine was hitting at the same time that alternative
medicine was hitting. Now, the problem with evidence-based medicine, this can get statistical and
wonky, but basically the short story is that they mostly consider clinical evidence
without considering scientific plausibility.
And then they say, oh, of course we consider that, but they really don't.
I mean, it's not built into the formula.
They do not do it.
And there's countless examples of it from, like, the Cochrane Collaboration, which is,
you know, that's the epicenter of evidence-based medicine,
where they specifically did not consider prior plausibility.
Why is that important?
So as a physician, I could explain to you in a way that you'll completely understand it.
I say, I have a 40-year-old woman who has a positive mammogram,
and let's just make up numbers for simple math.
If I say that mammogram has a 90% sensitivity, 90% specificity, so that's the chance of picking up real cancer, and the chance that someone without cancer correctly tests negative rather than getting a false positive. 90% sensitive, 90% specific.
She has a positive test.
What's the probability that she has breast cancer?
90%.
No.
That's the wrong answer that 80% of physicians give.
But it's an intuitive answer.
It's an intuitive wrong answer.
But people think that that's correct.
That's the evidence-based medicine kind of answer,
where you're just looking at the clinical evidence
without looking at prior plausibility.
The answer is you don't know.
You don't know, because you need to know the baseline probability. If the baseline is 1%, you're going to have way more false positives than true positives.
So you always need to know the prior probability.
That's just a statistical statement.
So when we say this study had a 0.05 p-value, people take that to mean a 95% probability that the evidence was not due to chance alone, which is not really what it means, but that's how people interpret it. People think that means there's a 95% chance that this phenomenon is real or that this treatment works.
But that's not true, in the same exact statistical way that saying that woman has a 90% chance of having cancer is not true.
The answer is you don't know
until you look at the prior probability.
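The mammogram arithmetic above can be sketched in a few lines. The 90%/90% figures are the made-up numbers from the example; the 1% baseline is an assumed prevalence for illustration:

```python
# Bayes' theorem for a diagnostic test. The 90%/90% figures are the
# made-up numbers from the example; the 1% prevalence is an assumed
# baseline rate for illustration.
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability of disease given a positive test result."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# With a 1% baseline, a positive test means only about an 8.3% chance of cancer.
print(f"{positive_predictive_value(0.90, 0.90, 0.01):.1%}")

# The intuitive "90%" answer is only right if the prior happens to be 50/50.
print(f"{positive_predictive_value(0.90, 0.90, 0.50):.1%}")
```

With a 1% prior, the false positives from the 99% of healthy women swamp the true positives, which is exactly the point being made.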
And that's a Bayesian analysis.
Is that a term that you're familiar with?
Sure.
So most...
Which now they're excited about it.
Yeah, now it's kind of rising in prominence, which is great.
Bayesian analysis says, well, let's look at the prior probability.
Then we'll look at the new evidence, and then we'll see what the post-evidence probability is.
And, you know, the prior probability includes lots of things, including scientific plausibility.
So, alternative medicine loves EBM, because they could cherry-pick their clinical trials with no consideration of prior probability and say, see, we...
P hacked our way to a 95% chance of being real.
This is real.
This is as real as anything else.
Whereas we would say, you know, from science-based medicine: well, your prior probability was pretty close to zero, and even after this study, it's still pretty close to zero.
And so you haven't really moved the needle much with this data.
But so that's a big difference.
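The "haven't moved the needle" point can be sketched as a toy Bayesian update. The Bayes factor of 3 here is a hypothetical number of my choosing, a rough stand-in for one marginally significant positive trial; the priors are likewise illustrative:

```python
# Toy Bayesian update: convert a prior probability to odds, multiply by
# the Bayes factor (likelihood ratio) of the new evidence, convert back.
# A Bayes factor of ~3 is a hypothetical stand-in for one marginally
# significant positive trial.
def posterior(prior, bayes_factor):
    """Posterior probability after updating a prior with a Bayes factor."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

# The same single positive study, applied to very different priors:
for p in (0.50, 0.01, 0.001):
    print(f"prior {p:.1%} -> posterior {posterior(p, 3):.1%}")
```

A plausible treatment moves from 50% to 75%, but a homeopathy-like prior of 0.1% only reaches about 0.3%: still effectively zero, which is the sense in which the needle hasn't moved.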
Is this like the difference between absolute and relative risk, in a different form?
Somewhat, yeah.
It's a similar kind of thing, where you're using statistics wrong, basically. It's not exactly that; it is exactly what I said with the mammogram, an exact analogy. But it's very similar.
And you have to use statistics the correct way
and not over-interpret them or misinterpret them.
But SBM includes a lot of other things as well, not just that you have to do a Bayesian analysis.
You can't just look at P-values.
It's also things you have to look at things like the decline effect,
which means that when you look at early research,
there's a much greater false positive rate than more mature research.
So you have to see the arc of the clinical research over time to see where it settles into,
what effect size ultimately settles into.
Because initial positive results are almost worthless.
But how many times do you hear, there's encouraging preliminary evidence?
How predictive, that's another thing, predictive value.
Don't tell me statistical significance.
I want to know predictive value: how well does this evidence predict whether it's ultimately going to work or not?
And oftentimes, even with impressive-looking P-values, you know, just saying this is
statistically significant evidence.
It's not clinically significant.
Well, there are two things: the effect size may be clinically insignificant, and the predictive value may be negligible.
This does not predict that the treatment works at all.
And then when you look at things like publication bias, you could do, like there are tests
you could do to see how much publication bias there is.
And you put it in the context of prior plausibility.
Then you begin to get a sense of, does this probably work or not?
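The decline effect and publication bias described above can be illustrated with a quick simulation. Everything here is hypothetical: a treatment with zero true effect, small two-arm trials, and a "literature" that only publishes significant positive results:

```python
import random
import statistics

random.seed(0)

N_PER_ARM = 20
Z_CRIT = 1.96  # two-sided p < 0.05 under a normal approximation

def run_trial():
    """One small two-arm trial of a treatment whose true effect is zero."""
    treated = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
    control = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
    diff = statistics.mean(treated) - statistics.mean(control)
    z = diff / (2 / N_PER_ARM) ** 0.5  # known unit variance in both arms
    return diff, z

published = []
for _ in range(5000):
    diff, z = run_trial()
    if z > Z_CRIT:  # only significant positive results get "published"
        published.append(diff)

# The published literature shows a solidly positive average effect,
# even though the true effect is exactly zero.
print(f"published {len(published)} of 5000 trials")
print(f"mean published effect size: {statistics.mean(published):.2f}")
```

Because only the lucky positive trials survive the filter, the early published effect is inflated; as bigger, better-controlled studies arrive, the apparent effect declines toward the truth.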
And so with homeopathy or things like that, we could say, well, if you look at all of these things together, the probability that it works is effectively zero.
But if you look at isolation at just the clinical research like EBM does, you could say, well, there's some positive evidence here.
This is encouraging and requires more research.
And there are Cochrane Reviews that literally do that about magic, right, about something which cannot possibly work. It's literally magic.
So we need to take this more holistic view of the scientific evidence, of the statistical
evidence, of the patterns in the research. So how does science-based medicine get you better than that?
Like, how does it take those factors into consideration? So you, yeah, so, I mean, I teach this
course. Like, you have to explicitly consider all of these things. If you're doing a literature
review, you need to say, okay, do a best evidence review, for example. What does the best evidence
show? What is the relationship between the quality of the evidence and the effect size? Does the
effect size hold up over time? Is there evidence of publication bias? Is there evidence of citation
bias? There's now another layer, which is great, because a lot of studies now are pre-registered: what happens if we only look at the pre-registered studies, where they didn't change the methodology midstream? Sometimes the effect goes away when you do that. Huh. Why is that? If you eliminate the
possibility of cheating, basically, of P-hacking, the effect goes away. To me, that means the effect
isn't real. But if you don't consider that, you can do a meta-analysis using a lot of crappy
data and you get a crappy answer out of the back end. So it's just, you have to go through
this more thorough process and realize that it takes a lot more data than you think to
really get to a confident answer. And prior probability is very predictive, you know, of whether
or not, if something is breaking the laws of physics, you know, I would bet on the laws of physics
every time. That tells you way more than the P value of one isolated study. Why doesn't the
Cochrane system take this into consideration? That's a good question. I think because their
initial formulation was naive, quite honestly. It's not that it wasn't an improvement on what came before. It's great as far as it goes, but it kind of assumes that we're only looking at plausible treatments.
Right, no one's going to study magic.
So it wasn't even on their radar.
You know what I mean?
And so they said, well, yeah, we just look at the clinical evidence, assuming that everything was plausible.
And then the promoters of implausible treatments are like, great, we'll do that.
We're not going to consider prior plausibility?
Awesome.
I'll buy that any day.
And so they basically exploited it to their advantage.
Give me an example of that.
So acupuncture is probably my favorite example, right? If you look at the,
I don't know what you feel about acupuncture, but it's a really complicated story, super complicated.
And it's the one, I think, that has ridden evidence-based medicine the farthest into the medical system, right?
Where there's a lot of people who think, yeah, there's something to it. There's something real going on.
There's an effect. The World Health Organization is like, yeah, it works for all these things.
But when you look at the evidence from an SBM, a science-based medicine point of view,
it's pretty clear that there is no effect from acupuncture.
It is pure pseudoscience.
People are usually shocked when I say that.
It's like, yeah, but here's what you have to do to look at the evidence.
First of all, the best studies, the ones that control for any kind of bias, are all negative.
There is a really clear inverse relationship between the quality of the evidence and the effect size, and the best studies are all negative.
Meta-analyses mix in a lot of really crappy studies,
or they mix in things like electroacupuncture.
What's electroacupuncture?
It's transcutaneous electrical nerve stimulation.
It's not acupuncture.
It's doing some other, it's like saying,
I injected morphine into an acupuncture point.
That's acupuncture, right? I mean, that's pharmacopuncture. No, it isn't; you're injecting morphine. It's the same thing. So they're mixing variables,
there's also no consistency, right?
It's very heterogeneous.
So in other words, not too long ago I updated myself on what the acupuncture-for-migraine evidence shows.
Like, I dare you to find me two studies of acupuncture on migraine
where they use the same acupuncture points across the board.
There's like random overlap, but they're all different.
So what does that mean then?
How can you say acupuncture works
when it doesn't seem to matter what acupuncture points you use?
Is there a world, though? Because you're saying that there's this need for plausibility. In specific indications within healthcare, I've found that there are instances where we don't have a clear pathway for the plausibility, where actually what we've learned ends up being clinically useful.
Absolutely.
So what happens then?
That's a very common response that I get, because I think you're confusing "we don't know what the mechanism is" with "we know that it is implausible," right? And people conflate those almost all the time.
So how do you get to that?
Because knowing it's implausible is difficult, right?
Well, it depends on what the claim is.
Sure.
So with homeopathy, it's easy.
There's literally no active ingredient; it's magic water, right?
There's zero plausibility.
With acupuncture, so first you have to be a good scientist
and define your terms and define your variable.
What do you mean by acupuncture?
What's your definition?
Now, when you ask acupuncturists that or look up the definition of acupuncture
or you read the abstract of whatever study, that's, this is what acupuncture is: it's sticking needles into specific acupuncture points
designed to treat a specific condition, right?
So you have to insert the needle into the skin.
Sometimes they will specifically mention
to elicit the de qi sensation,
but they don't always uniformly say that.
But you have to insert the needle
into specific acupuncture points, right?
Is that a fair definition?
Everyone agrees that's the definition.
Yeah, right?
Seems logical.
It turns out that acupuncturists don't know where the acupuncture points are.
They can't agree on where they are.
There is no agreement on where they are or how big they are or what they do.
After however much time, no one knows what they are. There's been no basic science to back up their mere existence. So it's hard to base a system of intervention on something that is probably not real.
But isn't that the exact point that you made earlier, that we don't yet know what those points are?
But how long has that been the answer? Well, we don't know, we don't know. How long is that the answer?
There have been thousands of studies of acupuncture.
They've had decades and thousands of studies to answer the basic question of what are the
acupuncture points, where are they, and how do we know what they do?
And it's like astrology.
There's different answers with different traditions.
It's not following a scientific course of building on basic science, building on firm evidence.
It's just different cultural answers with different traditions.
and even down to the individual, they all believe different things.
It's, again, it's not following a pattern of a real science, of a real phenomenon.
It's following the pattern of a cultural belief system.
So from a scientific question, if your scientific question is, do acupuncture points exist?
What are they?
And what do they do?
The answer is, from a scientific point of view, is there is no evidence to support that they exist.
That's pretty clear.
Then you ask the question, when you do acupuncture studies, does it matter if you insert
the needle or not, it clearly doesn't matter. Because if you control that variable, right,
if you poke people in the skin in a blinded way versus inserting the needle.
Sham acupuncture. Yeah, sham acupuncture or placebo acupuncture. It doesn't matter.
So sham acupuncture is like you just poke the skin. With placebo acupuncture, you have like an actual opaque sheath, and you depress the button. The acupuncturist doesn't know if the needle's going in and the patient doesn't know if the needle is going in, but they get poked so it feels like a needle is going in. And if you do those studies, you can't tell the difference. There doesn't appear to be
any difference. And so it doesn't matter if you stick the needles. It doesn't matter where you stick
the needles. And nobody knows where the acupuncture points are anyway. So if that's acupuncture,
I think the clear answer is that it doesn't work, right? And so then you say, okay, what about
just if you just look at the clinical evidence? If you do an EBM approach to acupuncture,
what does that show? And that doesn't even really give you a very encouraging result either.
Low quality studies show some potential thing.
Yeah, again, you get this mixed stuff at the low quality end.
You get negative results at the high quality end.
There's a lot of heterogeneity, no real consistency.
And then you find that over time, rather than locking in an answer, they just stop doing the research.
They just start doing pragmatic studies where they're saying, you know, people feel better
when we do acupuncture.
They basically forget trying to establish efficacy.
You know, forget doing the kind of studies that are capable of proving it doesn't work, and they're just doing it to promote acupuncture.
And so that's kind of the phase that we're getting into now, where they're just promoting it.
They're not even really trying to figure out if it works or not.
They sort of gave up on that because the evidence is pretty negative.
So SBM gives you a pretty clear answer when you look at the totality of evidence.
Again, I say to people, if this were a drug, it would not get FDA approval.
No way, not even close.
Like this is, if this were anything else, anything that didn't have a massive
cultural background behind it, there's no way this would be considered legitimate science.
So in your mind, is acupuncture basically not a real science because of the lack of reproducibility? Because they can't all point to the same spot.
It's many things. Yeah, partly the lack of basic science supporting it. It's one thing if you hit upon a new treatment and it works and you know nothing about the mechanism. That happens all the time. That's fine.
Although you'd better have pretty solid and consistent clinical evidence; if you can prove that it works clinically, then we can backfill the mechanism. But eventually we're going to do that, right? Here, we're not getting any progress on a potential mechanism, the clinical evidence is basically negative, you have no mechanism and we're not making any progress on finding one, and it doesn't make sense to begin with.
Like not even on an anatomical level does it make any sense?
Why would my ear make my liver better?
I don't know.
It makes no sense.
It's hard to say that this is legitimate science.
In the cases where it does work, is it placebo effect?
It's demonstrably placebo effect.
So I have a question, because this is something I've actually talked about on the channel.
As someone who doesn't see the evidence for acupuncture, and I've said that openly,
before I was in medicine, I went for an acupuncture treatment for a shoulder injury that I had.
I went for it, not knowing what I was going to get.
I actually saw a pain medicine doctor that practiced acupuncture.
And I didn't believe in it at the time.
And my shoulder had a demonstrable tear.
It had functional limitations for at least a year, probably even longer.
And then after one treatment that I didn't like that I found uncomfortable,
I've never had a repeat issue with the shoulder.
What is going on there?
Yeah. So what do you think is going on?
I have no idea.
Do you think sticking the needle in your shoulder repaired the tear in your labrum or whatever?
I have no idea.
Yeah, it's implausible that that happened.
So, of course, it's anecdotal.
So you're asking me to explain anecdotal evidence.
Yeah, and I'll tell you. But I hear you.
The second episode of when this happened: I was training for a professional boxing match. I had medial epicondylitis.
It wasn't resolving.
I didn't want to put a steroid in because my boxing match was coming up.
And I said, let me try that because it worked so well.
Maybe I'm a unique case where this works well.
even though I had so much doubt, given my EBM background, I said, let me try it. And I went for the first session, and again, this was a problem I'd been dealing with for a long time. 80% of the pain resolved, and the person said, come back for a second session, we'll get it to 100. Which was my mistake, right? He hits a nerve, and I end up with peripheral neuropathy that was burning every time I extended my arm. Yeah, this terrible side effect. But why am I having these outcomes? And again, anecdotal, I know, you can't speak to it.
People talk about this all the time.
So part of it is selection bias, right?
When people go to an acupuncturist and it doesn't work, they don't tell anybody about it. They don't talk about that. It's not a story; it's a non-issue.
So there's the inherent selective...
Publication bias.
It's a massive confirmation bias
or selection bias of the data.
Like, tell me about the 100 people
who had it where it didn't work, right?
So you don't know how much...
It's always hard to imagine
that we're the coincidence
or that we're the outlier.
When something happens to us,
it's real, right?
It's hard to convince us that that's not representative of reality.
It's even weirder.
You have to impose an analytical, skeptical thinking on it to say, well, that's what I experienced,
but that doesn't mean that it was doing anything physiological.
So acupuncture is often referred to as a theatrical placebo.
I mean, there's a lot of stuff that happens around it.
And a lot of things that could be having nonspecific effects.
Again, I don't know exactly what they did to you, but they often will put heat on it,
or you're lying in a comfortable position for an hour or whatever.
There's other things around the actual insertion of the needle.
What I know is that when you do the whole thing,
whether or not you ultimately stick the needles in
doesn't matter to the effect.
So you might have had the same effect if they never even stuck the needles into you.
It could have just been a non-specific effect of going through this.
And I'm just spitballing, I don't know.
Again, I don't know, because it's one anecdotal thing
you're just telling me now.
But just possible answers are, you know, I mean,
I've had a rotator cuff, you know, tear at one point too, and it took about a year to get better,
and it just stopped bothering me. So maybe it was just that time where it healed, you know,
like they do spontaneously heal. And maybe, you know, you had a little bit of a temporary
placebo effect at a time when it was healing to the point where it wasn't that much of a problem.
Anyway, and then there's your memory, right? So now you're telling me this story.
It was also documented at the time because we made a video. Yeah, that's great. That changes it a little bit.
For most people, though, they don't document a video about it.
But even still, because your memory reinforces the narrative, right?
Your memory literally changes.
So maybe, you know, you exaggerate how bad it was before, how good it was after, how well the timing worked.
There's also an effect where if patients have a chronic problem, they get seven different treatments and then it gets better with the seventh one.
They credit the seventh one.
But it was going to get better at some point.
Whatever they did most recently was the thing they credit with it.
So that could be regression to the mean.
There's all these things that could be going on.
Which I was searching for in my situation.
Yeah, but you can't know.
You can't answer the question individually because it could just be as a coincidence.
You know, you just can't answer it individually.
You have to answer it statistically.
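The statistical illusion being described here, regression to the mean, can be sketched in a few lines of Python. This is just an editor's illustration, not anything from the conversation; the pain scale, threshold, and sample sizes are invented for the demo:

```python
import random

random.seed(42)

def pain_today(baseline):
    # Daily pain fluctuates randomly around a stable baseline;
    # no treatment is ever applied anywhere in this model.
    return baseline + random.gauss(0, 2)

baseline = 6.0  # chronic pain on a 0-10 scale
at_visit, day_after = [], []

for _ in range(10_000):
    today = pain_today(baseline)
    if today > 8:                  # people seek help on their worst days
        at_visit.append(today)
        day_after.append(pain_today(baseline))  # pure chance, no therapy

avg_visit = sum(at_visit) / len(at_visit)
avg_after = sum(day_after) / len(day_after)
print(f"pain when seeking treatment: {avg_visit:.1f}")
print(f"pain at follow-up:           {avg_after:.1f}")
# Pain "improves" after whatever was done on a bad day -- regression
# to the mean, with zero physiological effect from any intervention.
```

Because people tend to try a new treatment when symptoms are at their worst, the next measurement is almost guaranteed to look better regardless of what was done, which is exactly why single cases can't be answered individually, only statistically.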
It's just the timing is so suspicious.
Yeah.
And it's happened in other controversial, poorly proven therapies as well.
Are you familiar with KT tape?
Yeah.
So there's evidence
that it doesn't work well
in a lot of cases.
And especially plausibility-wise,
not much is happening,
maybe some fascia thing.
Maybe you feel better,
better proprioception.
It could be bracing it.
I mean,
so there's probably some non-specific effect.
There's always, like,
when we say placebo effects,
people think the placebo effect
is all mind over matter.
No, it isn't.
It's actually mostly not that.
It's mostly statistical illusion,
right, regression to the mean,
et cetera.
It's also just non-specific effects
of the interaction,
of the therapeutic interaction.
And sometimes there are elements of the treatment where there are non-specific benefits to those elements.
Like if you're the KT tape, it could be that there is some bracing going on or some support or whatever.
The magical stuff is all nonsense, but it doesn't mean, yeah, it's a little bit easier to function if you have some kind of some sort of tape in there.
An acupuncturist actually told me once, without realizing what he was admitting to.
He said, oh yeah, when I do an acupuncture session with a patient, they get 90% of their benefit
before I even stick the needle in.
It's like, yeah, I totally believe that.
I think they got 100% of the benefit, you know, before you stick the needle in.
But, you know, he perceived that, yeah, like, it's all the other stuff happening around it
that's providing some kind of benefit.
Well, I mean, we could say that for a lot of health care.
With a good therapeutic alliance with a mental health specialist, before they even institute
CBT or anything, even if they're not trained in CBT, they get good outcomes.
If you look at the, so my wife's a counselor, actually, but if you look at the evidence,
my understanding of it is that the therapeutic alliance with the therapist is literally everything
and it does not matter what they're doing.
It doesn't matter if they're using CBT or existentialism or whatever.
It's the skill, the therapeutic alliance.
Yeah, that's what does it.
So like with acupuncture, there's a lot of ways of getting at this question of,
does what they're doing matter?
So there have been studies where they looked at acupuncturists who were given a 10-minute seminar
on how to do acupuncture, who had been doing it for less than a year, and who had been doing it for 20 years.
Right.
So if you thought there's a real skill here, right, where knowing where and how to do it matters,
there was no difference between any of those groups.
The skill of the acupuncturist doesn't seem to matter at all.
But they also varied something else.
some of the acupuncturists were friendly and encouraging to the patient and others were clinical and
detached. That mattered massively. So every way you look at it, it follows a pattern of it's a
theatrical placebo, not a real physiological effect. And again, that's the holistic evidence approach
you have to take to a complicated question of, does acupuncture work? What is acupuncture? What do you mean by work?
And if you're trying to control for variables and you're talking about efficacy of a specific intervention, we could say there's no efficacy to sticking needles into acupuncture points.
And that has been established now with thousands of studies in the totality. It's very clear.
Now, thinking practically as a primary care physician, when my patients are reaching their wits end with chronic pain, we've tried a lot of different interventions.
They're now at the point where they're either going to be taking opioid medications,
potentially going in for dangerous surgeries, and they're considering acupuncture.
I run into the dilemma where I know the evidence that you're describing or the lack thereof,
and I think, man, maybe it's worth them trying this because maybe they'll get something from the theatrical placebo performance.
Or am I doing epistemological harm by teaching them that you don't
need science? Yeah, I love the way you frame that. You are causing epistemological harm,
in my opinion, and it's not worth it. And I've had a lot of patients who have chronic pain,
chronic neuropathic pain, often go to acupuncture, and it's mixed results, man, it's all over the
place. It's not like it's a home run every time. More often than not,
it doesn't work. Of course, if it worked, they probably wouldn't be coming to see me. But so
that's why it's not scientific data. But my point is, if you're talking about as a practitioner,
or as a clinician, it's not like everyone's going to get a positive effect, even placebo effect,
and it's often expensive, and they convince them of a lot of shit. Sometimes they're even
anti-vaxxers, et cetera. You're putting them in the hands of a pseudoscientist. You have to be aware of that.
And the side effects are more than they admit to, but if you look at studies, there are significant
side effects, more often than you're thinking. You know, look at a picture of any acupuncturist,
man, they're not wearing gloves. That's the joke we say. What's the difference between
dry needling and acupuncture?
In dry needling, they use gloves.
That's really the only difference.
So I just think, and I just tell patients, you know,
I looked at the literature.
This is why it's important as a physician,
I think, to look at the literature for outlier therapies.
I've thoroughly looked at the literature,
and I'm not convinced that there's any real effect there.
I don't tell them, I don't say don't do it, or it's pseudoscience.
I've looked at the evidence, and as your physician,
I do not recommend it, and I think that there's no evidence to support
that there's any benefit there.
And that's what I believe.
It's a completely fair statement.
Now, does the evidence show that there is a significant danger to going for acupuncture?
Because on the off chance that it could work, if the risk is quite low,
maybe in comparison to the risk of the opioids.
Well, that's how they're trying to sell it.
They're saying it's better than opioids, which is kind of a low bar.
It's a really low bar.
To be fair, it's a reasonable thought.
It's a reasonable thought.
But I don't think it's a fair way to promote.
Because you can promote any pseudoscience that way.
Yeah, I'm not saying that you can promote anything that way.
Better than acupuncture.
You can support anything with that argument.
Better than opiates.
So here's the thing.
I treated a lot of chronic pain as a neurologist,
neuropathic pain specifically,
which is even harder than other forms of pain.
And I will try anything that has evidence of efficacy.
And if there are treatments where,
even if the evidence is preliminary,
the treatment itself has been shown
to work, it's doing something, it's doing what it's doing. Like it may be a drug
that is approved for other things and has a mechanism where it should work and the preliminary
evidence shows it actually does work. I would say, all right, that's enough where I think
it's reasonable to use while we're waiting for more definitive evidence. If that evidence is
negative, I'll stop using it. But at least I know the risk versus benefit. I have some way of assessing
its net therapeutic effect. Will I use something that's completely untried,
especially if there's no plausibility behind it? I think the risk of harm there is always
just generically worse. But the other thing is, you know, chronic pain's a complicated
issue. I think the answer is to get better at treating chronic pain and taking a more thorough
approach to it. Sometimes these patients need psychotherapy.
Not sometimes. Yeah, you know that's critical.
Being a DO, I get that referred to me quite a bit. Yeah, yeah. They need to address all the aspects
of their pain. You know, you know, tell me everything you're taking. Tell me how it's affecting
your life. You know, I'm going to refer you to a pain center that has a psychologist or just a
psychologist who treats chronic pain. We're going to try the neuropathic agents. We're going to,
you know, I try, obviously, to avoid opiates at all costs. You can go to some of the
Ultram-like, sort of halfway measures. You know, even that has addictive potential.
So you've got to be careful there as well. And you've got to talk them through it sometimes and
hold their hand and let them know. This is, we're taking the long approach here because
that's what you want, not the quick fix, because the quick fix is going to lead you down a very
dark place that's not going to help you. And so you've got to be a little bit patient.
But people do get out of this. And we didn't even get to the lifestyle things. You've got to do
everything. And, you know, you've just got to stay motivated. Because sometimes it's as simple as, oh, you're not
giving me a quicker fix, bad review, you're a bad person, bad physician. I don't care about
that, though. I didn't care. That's one of the advantages of being a salaried academic.
I didn't care. I had no reason to care about any of that. It wasn't in my livelihood, but,
which is, I think, a point in favor of that kind of setup, a salaried physician
whose livelihood is not dependent on anything but just being a good physician.
Right, right.
But it's hard.
Not like I have the magic solution to chronic pain.
It is hard.
It is very hard to treat.
And at a tertiary referral center, I got the worst of the worst.
I got the people who'd already failed primary and secondary care.
Well, now that you bring up alternative medicine, legitimate institutions are now bringing
in complementary medicine as part of their treatment protocols.
Cleveland Clinic has a whole center for this.
Why?
It's a business.
There's very firm answers to the why.
One is that it's a business.
So oftentimes, I've looked at this very many times,
my own institution, other institutions.
Sometimes it's because the decision was made by business people,
like working at the hospital, not people who are scientists or clinicians or whatever.
It's free money.
It's a cash cow.
Eventually people are going to succumb to that temptation.
They're going to do it because it's free money.
As soon as they feel like they have enough cover,
they could say that,
oh, this is acceptable enough that we can do it,
they will freaking do it.
The second is there's often one true believer behind it.
When you find out what's going on,
it's like, yeah, this one guy is making all this happen.
And then everyone else is just what we call shruggies.
So they don't know, don't care.
It's not worth, again, that's the ivory tower problem of whatever.
It's touchy-feely.
It's benign.
Who cares? And again, a lot of clinicians end up where you were. And I totally
get this. I'm very sympathetic to this because I've worked with the chronic pain patients and I know
how desperate you are to have something to offer them. And it's the hardest thing to say to a
patient is, I have nothing to offer you. Now, if you're a procedural specialist,
that's easy. You could say, well, this is not something my procedure can fix. Here, I'm going to send
you back to the medical doctor, whether that's, you know,
a GP or a medical specialist like a neurologist.
But like for me, it's like, I'm it.
You know, I could try to get them into a pain clinic,
which is not easy to do, as I'm sure you know.
It's way more demand than supply there.
I could try to help, but I'm still going to be in charge of their pain management
long term one way or the other.
So I can't punt, right?
And so I always want something to be able to offer them that I think could move the ball
in a positive direction.
But sometimes I'm like really struggling, you know,
patients where I've tried everything or they have multiple allergies or whatever. There's just
all sorts of barriers in the way. And I totally see the temptation to say, why don't you just
try the acupuncture or whatever? Because, you know what? It gets them out of your office for a cycle.
It gives them something to do. Maybe you'll hope they'll have a placebo effect. They won't be
calling you as often. I get it. I get it. But I just could not in good faith do it, knowing what I
know about the evidence and the literature as deeply as I do. And I've been on record reviewing.
I published articles and peer-reviewed journals about acupuncture reviewing the literature.
I couldn't do that and then say, sure, try it.
It doesn't really fit.
But I'm always doing what's best for my patients, right?
And I didn't ever think it was best for them to have them go to see a pseudoscientific practitioner
who's going to tell them unscientific things and do a procedure that has no proven efficacy
just to get them out of my office for a cycle.
It's not worth it.
So how do you react or how do you feel when,
Cleveland Clinic launches the complementary center for well-being.
Yeah, it's terrible.
It's very counterproductive.
So we complain about it on science-based medicine.
We try to write about it and say this is why this is happening.
This is why it's counterproductive.
It confuses patients.
It diverts funding.
It diverts attention.
I got to tell you, man, the harm of alternative.
Alternative medicine is obviously a broad umbrella term.
But however you slice it, there's demonstrable harm there.
There was a study published four or five years ago.
You could look it up on science-based medicine.
where they looked at, again, it's hard to thoroughly control for this, but they did as well as they could.
They just looked at patients going through a cancer center and they, you know, surveyed them,
and they split them into people who used alternative medicine as part of their cancer treatment
and people who did not use alternative medicine as part of their cancer treatment.
And the patients who used it died faster, right?
They had worse mortality.
They had worse outcome.
Of course, they weren't randomly assigned to those two groups, so the groups
may not be comparable, but they did try to balance them and weight the evidence;
they did everything they could statistically to make it work. And there's a pretty strong
signal there that if you divert your time and attention and resources to things that don't work
when you're fighting for your life with a life-threatening disease, that is counterproductive.
Which, of course, when I say it like that, it makes sense. But that's basically what's
happening. And the Trojan horse is, it's just making them feel good,
you know, it's just quality of life. That's what you say
in order to not be held to proving efficacy for what you're doing
and to get in the door.
But at the end of the day, you are completely diverting their attention and resources
away from real medicine.
And you are inculcating within them anti-science attitudes. You know, they were less likely to be compliant
with their chemotherapy.
They were less likely to be compliant with their other medication, with their doctor
recommendations.
That's a feature, not a bug, as we say.
It is that kind of anti-science, anti-medical-establishment conspiracy thinking.
It's baked into the industry.
It is not an outlier.
It is not just something, a quirky thing.
That is part and parcel of the alternative medicine industry.
It's to be anti-science and anti-mainstream medicine.
And you cannot separate those two things.
Homeopathic medicine.
Is it BS?
So that's one of the easiest questions I get,
because it's one of the things that we look at,
which is 100% pure magical pseudoscience.
There is zero plausibility to it.
And the history of it is very, very clear.
And the evidence is very, very clear too,
because despite that, most people don't know what it is.
They think homeopathy is herbalism or natural remedies or whatever.
Nope, that's not what it is.
It was invented pretty much out of whole cloth a couple hundred years ago by a German physician,
and it's based upon ideas that are completely wrong.
You know, the idea that like cures like, that you give somebody a small dose of a substance
that causes a symptom and it'll actually reduce
or cure the symptom, there's no reason to believe that that's true. That's like sympathetic
magic. It's magical belief. It's not true. And 200 years of subsequent science has not shown it
to be true. If anything, it's shown it's not true. But then they also don't do that. They dilute the
substance out of existence, right? So now you have the echo of the like cures like.
The water has memory. Yeah, they say, well, the water has memory, which it doesn't. And they say,
well, but there's not even a single molecule. The water has the essence. It's an essence-based
medicine, you know. So it's magic upon magic. And then when you look at the substances they're
using, it's fairy dust, you know. So it's like fairy dust diluted out of existence is the best
description of what homeopathy is. It's pseudoscience from beginning to end. So what you end up
with is like a sugar pill or a thing of water with no active ingredients and it with no
zero plausibility that could do anything. It's water. It's literal water. And then, so then you
just do placebo versus placebo studies. And you're
throwing the dice every time, and they're cheating and p-hacking to get some results sometimes.
But what's good about the homeopathic literature?
Is that it provides hydration?
You don't even get enough water for that.
But the good thing is, it's a great real-world test of what happens when you study treatments
that don't work within a belief system like homeopathy.
What happens?
And so you could look at the homeopathy literature,
pretty certain that homeopathy doesn't work even before you look,
because there's zero prior probability,
and say, this is what the literature looks like
when you study treatments that don't work.
And then we can use that as a template
to look at other treatments.
You know what I mean?
So we could say,
this is the percentage of studies
that will have a false positive.
This is the pattern that we see in the literature:
the lack of replicability,
the effect declining over time,
the heterogeneity
in the results, et cetera, et cetera.
This is the way they manipulate the data
to try to make it seem like it works.
But at the end of the day,
and there have been multiple systematic reviews now,
there was like an Australian systematic review
eight to 10 years ago, whatever.
There was a Swiss systematic review, whatever.
There was a U.K. one where they looked at 30, 40,
50 different indications,
and they concluded that homeopathy has been shown
to work for zero indications,
not a single one.
After thousands of studies,
after 200 years of research,
it has not been shown to work for anything.
That's just the EBM approach.
Then you add that to the SBM approach,
the science-based medicine of there's zero prior plausibility.
It's just a great story.
And again, it teaches us how to look at the literature.
So then I say, okay, here's what the homeopathy literature looks like,
and I know homeopathy doesn't work.
What if the literature on another question
looks just like the homeopathy literature?
To me, that means it probably doesn't work either.
If it's showing me the same pattern of evidence,
it's also partly why I say acupuncture doesn't work.
It shows the exact same pattern of evidence
that the homeopathy literature shows.
I don't think that's a coincidence.
I think that's because neither of them work at the end of the day.
So that's sort of the one good sort of meta thing that comes out of it.
It's like it's a great object lesson in what are the patterns in the literature
when you are doing placebo versus placebo clinical trials over decades.
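The pattern described here, decades of placebo-versus-placebo trials still producing occasional "positive" results, can be sketched in a small simulation. This is an editor's illustration with made-up trial sizes, not data from the episode; it just shows that an inert treatment tested at p < 0.05 comes out "significant" about 5% of the time:

```python
import math
import random

random.seed(0)

def two_sample_p(n=50):
    """One placebo-vs-placebo trial: both arms draw from the
    same distribution, so any 'effect' is pure noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    z = (mean_a - mean_b) / math.sqrt(var_a / n + var_b / n)
    # two-sided p-value from the normal approximation
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

trials = 2_000
positives = sum(two_sample_p() < 0.05 for _ in range(trials))
print(f"'significant' results: {positives}/{trials} "
      f"({100 * positives / trials:.1f}%)")
# Roughly 1 in 20 trials of a treatment with zero effect comes out
# "positive" -- the false-positive baseline the literature of an
# ineffective treatment settles into over many studies.
```

That baseline false-positive rate, before any p-hacking or selective publication inflates it, is what makes a literature full of scattered, non-replicating positives look exactly like a literature of a treatment that doesn't work.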
Is there any world where by doing SBM, the science-based medicine approach, we can miss opportunities for innovation?
So there's always a trade-off between false positive and false negative, right?
There's no perfect world where you have zero false positives and zero false negatives.
And it's a slider, man, right?
There's a quality issue.
So the only net gain, the win-win, is when you improve overall quality.
We're always trying to do that.
And then the other layer is we're always trying to balance the false positive
versus false negative, knowing there's a trade-off between the two. The question is, does the
system we have right now, is it optimally balanced between false positive and false negative?
And the, not just me, not just my science-based medicine colleagues, but statisticians and
journal editors, a lot of people are coming to the conclusion of we're way too far to the false
positive end of the spectrum. And some journals have said, we don't even want to see P values
anymore because they're way too biased
towards false positives. I want to see
effect sizes. I want to see number needed to treat.
I want to see a Bayesian analysis.
Don't even give us a P value.
Other journals have said, we think we should
change the cutoff of
statistical significance from 0.05
to 0.005.
That's another tradeoff, right? That's a false
positive to false negative
tradeoff as well. They say we should move the
slider over an order of magnitude
and that will balance things out better.
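The "slider" between false positives and false negatives can be made concrete with a quick power calculation. This is an editor's sketch, not from the episode; the effect size (d = 0.3) and sample size (100 per arm) are arbitrary illustration values, using a normal approximation to a two-sample test:

```python
from statistics import NormalDist

nd = NormalDist()

def power(effect, n, alpha):
    """Approximate power of a two-sided, two-sample z-test to detect
    a true standardized effect `effect` with n patients per arm."""
    z_crit = nd.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    se = (2 / n) ** 0.5                  # SE of the difference in means (sd=1)
    # probability the observed z lands beyond the critical value
    # (the negligible lower tail is ignored)
    return 1 - nd.cdf(z_crit - effect / se)

for alpha in (0.05, 0.005):
    print(f"alpha={alpha}: false positives in ~{100 * alpha:.1f}% of null "
          f"trials, power for a modest effect (d=0.3, n=100/arm) = "
          f"{power(0.3, 100, alpha):.2f}")
# Tightening alpha from 0.05 to 0.005 cuts false positives tenfold,
# but power for a modest real effect also drops sharply -- more false
# negatives. Moving the slider trades one error for the other.
```

With these illustrative numbers, power falls from roughly the mid-fifties percent to the mid-twenties when alpha moves from 0.05 to 0.005, which is exactly the trade-off being debated: fewer spurious positives at the cost of missing more real effects unless sample sizes grow.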
I think the best approach
is just to educate experts, physicians,
on how to look at the literature.
Again, it's the intellectually lazy thing.
The P-value was never meant to be the one test
of whether something is real or not,
but people are using it that way, and you can't.
So I think just better educating our physicians
and our scientists about how not to p-hack,
how not to over-rely on it, how to do statistics properly,
how to do a more thorough assessment
is the ultimate solution
rather than just a fix of moving the slider up or down one way or the other.
Now, in terms of innovation, though, I think the short answer is no,
because innovation doesn't really follow the same rules,
meaning that anything kind of goes if you're just trying to come up with new things.
You're saying innovation and recommendation, universal recommendation.
So if you go back to The Structure of Scientific Revolutions
and the philosophers of science, you know,
if you're familiar with those guys.
But basically, there's two components of science, just in general, not just medicine.
There's the generating new ideas and then testing whether those ideas are real or not.
So science-based medicine is all about testing those ideas to see if they're real or not.
Innovation depends largely on coming up with new ideas.
And there, it's like anything kind of goes.
Wherever you get your inspiration or your ideas or your clinical observation, whatever,
many different pathways can feed into that end of things, but at the end of the day, you have to
test it to see if it's real or not. So I don't think it negatively impacts the innovation part
just by saying, but we need evidence. Now, you could also ask, another way to ask the question
is, are we doing more harm than good by having the threshold where it is now? Just, you know,
if you look at the threshold meaning, how likely are we to use treatments based on
preliminary evidence. Everyone agrees if you have class one home run evidence of efficacy,
that's fine. You should be using those. And if you have home run evidence of lack of efficacy,
not everyone agrees, but most scientists will agree. Yeah, those are treatments we shouldn't use.
But where's the threshold in the middle? That's often where the conversation is. Where's the
threshold of evidence before you think you're more likely to help a patient than to harm them
by trying a treatment for which there's only preliminary or there's mixed evidence or there's
less than this best optimal class one evidence.
That's a hard question to answer.
There's no one simple answer to that.
And statisticians will be debating endlessly
about exactly where that line should be.
And it's not even one line.
It's a lot of lines there.
Isn't it more about presenting that information,
allowing the patient to decide, with informed consent up front?
Absolutely.
So there's a gray zone where it's patient preference.
Then there's a threshold where it's like,
you could recommend it.
And there's a threshold below which you could recommend not to use it.
Right.
That's what I believe.
But there is that in the middle where there's trade-offs and you can present it to the patient.
You can say, we don't really know, whatever.
That's fine.
That's that sort of gray zone in the middle.
But we could ask questions about where we are today.
And so one question is, well, if you look at treatments that are being used but are based on preliminary evidence,
and then later definitive studies get done, how likely are those treatments to be overturned?
And that study's been done a couple of times, right?
And I'm trying to remember the numbers off the top of my head,
but it's something like only about 30% of the treatments were confirmed
with a later, more definitive clinical trial.
40% were shown not to work, and 30% had inconclusive evidence.
So less than a third, you know,
so these are treatments that people are using,
presumably because they think they're plausible
and the preliminary evidence is encouraging.
most of them don't work.
So that's one piece of information to help us evaluate.
Do we have the threshold correct or not?
To me, that argues like,
well, maybe we should be dialing it back a little bit.
But of course, you're individualizing this a lot
because a lot depends on the patient.
If they have a terminal, untreatable illness,
I would put the threshold way down, obviously, right?
Then there's the risk-benefit; it's all risk versus benefit.
So, you know, if you're treating something
that's self-limiting and subjective,
it's like I probably don't want to subject them to some new untested treatment just when, you know,
I could get away with the tested treatments.
But if they have, you know, something that's untreatable and fatal and I have no treatment for them,
absolutely, you know, experimental things are fine.
But then there's the tangential question of what, in what context do you offer it?
I think it's always best to offer it in the context of a clinical trial.
But if that's not possible, sure.
And I've done that myself.
I've treated ALS patients with experimental treatments that were available because they were already approved drugs based upon what I thought was sufficiently encouraging evidence.
None of them worked, but was anything lost?
No, the patients were going to die anyway.
It was at that point the risk versus benefit favored using it.
So you have to individualize the decision to a lot of these individual contexts.
But again, as long as you're thinking explicitly about it and you're thinking statistically about it and you have a clear sense,
I think as physicians we should be masters of this, right?
This is our job to make these decisions.
We should be thinking about it in the most complicated, nuanced way possible
and to be able to communicate it to the patients
so that they could make informed decisions.
But I think just saying, why not, let's try it.
Who knows it might help?
That's a casual approach.
Then you get to the more likely to cause harm than good.
Yeah, the issue that I find with this in the world that we exist in today
because not all of us practice in an academic setting
is that research needs to be ultimately funded by somebody.
Ideally, in an unbiased fashion,
but to get people to give money for unbiased research
is not the easiest of things.
Budgets for research are being slashed in the United States
at the NIH because of Kennedy's doing.
You have great scientists leaving the United States
and going to other places, so they're not even doing research in the U.S.
So there's this natural thought
by me of, who's going to be doing that research? That's scary.
And then also you have a pretty solid thought that even a non-scientific person can believe,
where they say, well, who's going to put money to test if this mineral that I love
actually helps with my condition where no one's going to make any money off of it?
Right. So who is going to be funding that research?
Yeah. That's a great question. But you can answer that question, right? You can ask,
has anyone studied it and what's been published about it?
So there is investigator-initiated research.
I've participated in a lot of this myself,
where the institutions will fund it,
because academics are trying to make a career for themselves too, right?
If you're a researcher, you've got to find something to research.
You can't do all pharmaceutical-funded research,
just doing the final step.
That's important research as well, you know,
that sort of translational, final-step research.
But, you know, for a time I was part of an ALS consortium,
where we only cared about one thing:
finding cures for ALS.
That's the only thing we cared about.
And we would study anything,
whether it was funded by the pharmaceutical industry or not.
And we just came up with the funding.
There are ways of getting alternate funding
for things.
A lot of it from the government.
You write a grant.
You get it from patient groups.
Or if it's an FDA-approved drug,
you'd ask the pharmaceutical company,
hey, you want to give us some free drug
so we could study it?
Maybe you'll get another indication out of it.
And they would go along with that sometimes.
So you find a way to fund the research.
But you're right.
The research money
follows profit, absolutely. That's why we need the NIH. That's why we need the National
Cancer Institute. We need the government to fund research that won't get funded by the pharmaceutical
industry. And that's their explicit mission to fund research that won't be funded by the
pharmaceutical industry. That's explicitly what their mission is. And so we need that. And so, yeah,
cutting that research is a disaster. That's going to cause a lot of downstream harm and patient
deaths, et cetera. That's just terrible. It's hard to overestimate the harm that's going to come from
that, but it may take a generation to really feel it. And then we'll never know, like, where we would
have been had they not slashed that research. You mentioned how when you were looking for ALS
cures, you were looking everywhere and testing anything you could think of. There was a recent
issue in Alzheimer's research where there was a thought initially that some image manipulation
was happening. And then afterwards it was proven, and it
led to potentially a decrease in innovation, in the different avenues that could be studied.
How do we prevent that from happening in the research world?
Yeah, so that's a really fascinating and tragic story.
Alzheimer's disease is such a complicated neurological disorder.
Yeah, a lot is looped into it.
And there's a lot of lessons, historical lessons,
and sort of how we make these decisions about what we research.
So one quick version of it is that, you know,
there were two schools of thought with Alzheimer's disease.
And we don't know.
We don't know what's driving the disease.
But there was the beta amyloid hypothesis, right,
where we know that beta amyloid is accumulating in the brain cells
and that that was a marker of disease progression.
And they, you know, form the plaques and everything.
So there are people who think that's the disease, right?
If we can stop the beta amyloid, we will stop progression of the disease.
and that was sort of a dominant opinion within Alzheimer's research,
partly because of this fraudulent research that was supporting the amyloid hypothesis,
and that probably did reduce exploring other avenues for a while,
and it may have really hampered progress in that field.
The problem with the amyloid hypothesis is that for 20, 30 years,
we tried to treat it and it didn't work.
And so at some point you'd say, well, if it's not working,
maybe it's just the wrong hypothesis.
There's also tau and other things,
and we're getting much more sophisticated
in our understanding of, you know,
the pathology of Alzheimer's.
But the amyloid hypothesis isn't dead.
Don't let me give you that wrong impression.
But then what happened was the newer drugs came out,
you know, and these are monoclonal antibodies.
So they're just way more effective.
They're much more powerful ways of treating amyloid.
And they do work, sort of, you know.
So it's like not a home run, but it was the first time in decades that anyone has demonstrated
actual efficacy in slowing, like modifying the disease.
And it was based upon treating amyloid, not just preventing it from building up, but actually breaking
down the plaques that already exist.
But it comes with a lot of side effects and the effects are modest.
But it's a proof of principle,
the first real proof of concept that we have that this is actually affecting the progression
of the disease.
So we'll see. I mean, it's actually a pretty exciting time for Alzheimer's research.
We are starting to make some progress and we are learning a lot.
And that's after decades of the story getting more and more complicated but not really making any progress in treating it.
So we'll see, maybe we're starting to turn a corner.
But back to this little deviation: fraud is just poison to science,
on so many levels. This fraudulent researcher
was diverting attention, probably, away from other avenues of research and was putting his thumb
on the scale of the scientific literature for his own personal career. Just terrible.
And is there anything we can do about this?
So I think that, yes, there are things that we can do about it.
The fraud was eventually discovered.
And it could have been discovered prior to publication if they had a process in place.
So I think that scientific journals largely trust the honesty of the people who submit
articles, and maybe we just can't live in that world anymore. They just, you know, they do
editorial review and then they do peer review, but they don't really look for fraud. You know what I mean,
that's not really what the system is designed to do. That usually gets picked up after the fact,
and then they retract it, right? But this shows you how long it can last and how much
damage can happen before the peer review kicks in and they retract the results. So maybe we need
to have scientists submit their raw data.
Although when I suggested that, one researcher told me, you know how many
terabytes of data I have?
I mean, there's no way I could save or...
Can AI play a role in any of this?
Maybe.
You know, I do think that this is a good use of AI, absolutely, because a lot of fraud can
be detected with pattern recognition kind of algorithms that AI can do.
Absolutely.
So I'm hoping, that's actually one thing I'm very hopeful for, that AI will make it easier to detect
fraud in scientific publications pre-publication.
And I do hope they leverage that technology to do that.
That would be a massively good outcome if they did that.
If we can really dramatically reduce the amount of fraud in research, that would be great.
Because it's increasing.
It's increasing because of the explosion of pay-to-play journals and, you know,
the sheer number of papers being published.
It's just overwhelming.
Like there's no way to keep up with it.
We need AI, I think, to sort through all that data.
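One low-tech cousin of the pattern-recognition screening he's describing is the GRIM test (Brown and Heathers), which checks whether a reported mean is even arithmetically possible given the sample size when the raw data are integers, like survey scores. A minimal sketch in Python; the example numbers are made up:

```python
# GRIM test: a reported mean of integer-valued data (e.g. Likert scores)
# must equal (some whole-number sum) / n. If no such sum rounds to the
# reported mean, the statistic cannot be real as reported.

def grim_consistent(reported_mean, n, decimals=2):
    """Return True if reported_mean is arithmetically possible for n
    integer-valued observations, at the reported rounding precision."""
    target = round(reported_mean, decimals)
    approx_sum = reported_mean * n
    # Only whole-number sums near reported_mean * n can produce this mean.
    for total in range(int(approx_sum) - 1, int(approx_sum) + 2):
        if round(total / n, decimals) == target:
            return True
    return False

print(grim_consistent(3.30, 10))  # True: ten scores summing to 33 give 3.3
print(grim_consistent(3.27, 10))  # False: no sum of 10 integers yields 3.27
```

Screens like this don't prove fraud, but they flag papers whose summary statistics can't be reconciled with their own stated methods, which is exactly the kind of check journals could automate pre-publication.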
Since AI sources whatever is published and available to it via the web, if there is, let's say, just a dump of all of these low-quality journals, at some point, does AI start making wrong recommendations?
So absolutely.
If AI is trained on bad data, it's going to make bad recommendations.
This is already happening with systematic reviews and meta-analyses.
So one thing you have to look at with a systematic review is the garbage-in problem, right?
What were their criteria?
So again, we get back to the acupuncture literature.
And if you look at the acupuncture literature worldwide, about 60% of the clinical studies
are positive and 40% are negative, which is about exactly what you see when there's zero effect,
by the way.
You see that 60-40 split.
That's what you see in homeopathy;
that's basically the split you see.
And there was a statistician
who wrote the paper
"Why Most Published Research Findings Are False." I don't know if you're familiar
with this kind of seminal paper, where
he calculated that
even with a little bit of p-hacking, you can get a
60-40 split and false positives;
you know, you can get 60%
positive on a null effect.
So it kind of all fits together. Like this is what we
expect to see with negative literature. But if you
look at the acupuncture literature coming out
of China, the clinical research,
what percent do you think is
positive? Of the clinical research for acupuncture coming out of China. So it's 60% positive
worldwide, which is consistent with a null hypothesis, in my opinion. What percent do you think is
positive? I mean, I'm hoping it's higher. How much higher do you think it is? No idea. It's 100%.
Well, I mean, that doesn't fit. Impossible. Really, it's impossible. It's statistically
impossible. Even if it worked, it would be statistically impossible. You should have a bell curve of effect sizes
coming out of any clinical research,
if we're honest, right?
This is the...
So having 100% positive rate means it's worthless, right?
Basically means they're not discriminating at all.
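The "a little bit of p-hacking gets you 60% positives" claim above is easy to reproduce in a simulation. This sketch is my own illustration, not the model from Ioannidis's paper, and the parameters (18 free outcomes, 50 subjects per arm) are made up to land near the 60% figure:

```python
import math
import random
import statistics

def family_false_positive_rate(k, alpha=0.05):
    """Chance that at least one of k independent null tests is 'significant'."""
    return 1 - (1 - alpha) ** k

# With ~18 freely chosen outcomes, a literature of true-null studies already
# comes out "positive" about 60% of the time:
print(round(family_false_positive_rate(18), 3))  # 0.603

# Monte Carlo version: each study tests a treatment with ZERO true effect,
# but the analyst peeks at k outcomes and writes up the first significant one.
random.seed(1)

def hacked_study(k=18, n=50, crit=1.96):
    for _ in range(k):
        treat = [random.gauss(0, 1) for _ in range(n)]
        ctrl = [random.gauss(0, 1) for _ in range(n)]
        se = math.sqrt(statistics.variance(treat) / n + statistics.variance(ctrl) / n)
        if abs(statistics.mean(treat) - statistics.mean(ctrl)) / se > crit:
            return True  # reported as a positive finding
    return False

positive_rate = sum(hacked_study() for _ in range(500)) / 500
print(positive_rate)  # roughly 0.6, despite no real effect anywhere
```

The point isn't the specific numbers; it's that a 60-40 positive split is unremarkable for a null effect once analysts have a few degrees of freedom, while a 100% positive rate is unachievable even for a real effect.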
Is that that publication bias I mentioned earlier
where they're only publishing positive research
and no negative research being published?
It's 100% publication bias.
That's what we're seeing.
So if you include those studies
in your systematic review,
it's putting a thumb on this scale, right?
We know that it's not reliable
because they have 100% publication bias.
So you have to account for that.
And good systematic reviews do.
They say, all right, we're only looking at, whatever,
countries that don't have 100% bias,
or we're only looking at pre-registered research,
those who said, this is the research I'm going to do,
these are the methods I'm going to use,
and then they followed the methods that they registered
so they couldn't cheat and tweak them after they looked at the data.
If you look only at those, is there still an effect?
If we adjust for publication biases, is there still an effect?
You've got to do all those things.
But again, it's that garbage in garbage out problem.
It already exists with systematic reviews and meta-analyses,
and then we have the exact same problem with AI.
My final question is one that's probably going to get applause from the anti-vaccine crowd.
All this P-hacking, data manipulation, things that we can do to make research look better than it actually is,
that perhaps people have weaponized in Eastern medicine and some other examples.
What's stopping the pharmaceutical industry from doing that?
So, yeah, that's a great question.
This is the problem that we run into as science-based medicine advocates all the time:
it's when you turn skepticism into cynicism, right?
What we're saying is we're trying to raise the bar of the quality of scientific research,
and it's happening.
It's demonstrably happening that more and more, you know,
fixes are being put into place and methods are being put in there.
But keep in mind what we're saying.
We're not saying that all research is fraudulent, that you can't trust any of it, that it's all nonsense, or that you can make any outcome that you want.
We're saying, if you follow this process, you can get to a reliable answer. It just takes time. It takes 10 to 20 years of doing multiple studies and looking at replicability, et cetera, et cetera. And we can detect p-hacking. We can do all these things, and account for publication bias. If you do all those things, you get to a reliable result.
It's not like it's all nonsense. It's just that you can't do shoddy research or look at only
preliminary studies or look at only poor quality studies. You have to look at all of these things,
replicability of high quality studies. So there is a threshold beyond which we could say, yes,
we can trust that this evidence is reliable and this treatment works.
So one thing that is, I think, strong about our country, and every country has its own system,
right: the research that counts towards
FDA approval is very highly regulated.
I've participated in it. I'm telling you,
the recordkeeping is meticulous.
It is very highly regulated.
So it doesn't mean that
the pharmaceutical industry is not trying to give themselves the best
chance of having a good outcome,
but it would be really hard to straight up cheat
for that kind of research.
But the other thing is,
we're not the only country in the world.
That's the thing anti-vaxxers forget,
that a lot of pseudoscience promoters forget:
we're not the only country in the world.
This is getting replicated around the world.
And if you look at the worldwide data,
then you start, you know,
then signals emerge, right?
We do see that, yeah, it's pretty reliable,
it's replicated over and over and over again,
that the measles vaccine works.
You know, and when you take it away,
measles comes back.
And there's no correlation with any vaccine and autism.
That's been replicated over and over again
in all these different countries.
There is a threshold of reliable evidence.
And often, when I'm lecturing about this,
sometimes I realize, oh, I've turned that corner
where now they think they can't trust anything,
and I have to remind them, no, this is not an argument
for cynicism or for nihilism.
This is an argument for high-quality scientific research.
We can make it better than it is,
but you can get to a threshold of reliability.
It's just higher than you think it is.
Well, that's why I like that you use the term,
I forget which podcast I saw you on,
speaking about false skepticism.
Yeah.
Because that's essentially what that is, right?
I mean, cynicism is another word for it.
But if you're being selectively skeptical, yeah.
Denialism is pseudo-skepticism.
Absolutely.
And that's the hypocrisy that you pointed out is so valuable.
As a very clear example, RFK said, we need to copy Denmark's vaccine program.
Why Denmark?
Why Denmark?
But even further than that, why don't we copy the fact that they just found that no vaccines
cause autism, because they had the largest study, of 500,000-plus people.
Why are you selectively choosing one part of their system, but not the other?
Yeah, he's a total cherry picker, right?
So he chose Denmark because they have the lowest vaccine recommendations.
And also, as you know, the country is not a good analogy for the United States in terms
of their...
Two percent of our population.
Their health care system, they have socialized medicine, et cetera, et cetera.
Again, vaccines are a public health intervention, not just a drug.
You know, not just a...
And they have to look at it that way.
but he, I mean, that's, he weaponizes it well.
Yeah, absolutely.
Yeah.
Well, this was such a fun conversation.
It was a lot of fun.
Thank you so much.
I learned a lot, and I hope the audience did too.
I know they did too, because there was a lot of valuable info presented there.
Well, thanks for having me on.
It was a lot of fun.
Where can people follow along on your journey?
So you can find most of what we do through theskepticsguide.org.
So I have a podcast, The Skeptics' Guide to the Universe.
I blog at sciencebasedmedicine.org.
I also have a personal blog, the NeuroLogica
Blog, which is my neuroscience and all-things-skepticism blog.
Which is how we got connected in the first place.
We're going to talk about headaches.
Oh, that's right.
Yeah, yeah.
And maybe there's room for a part two on this.
I also published two books. The Skeptics' Guide to the Universe, which is like a primer on
critical thinking.
So give people that book.
If you want to say, how do I deal with my crazy uncle?
Well, first give him the book.
Have him read through it.
And then you can start talking to him about it.
And then the second one is The Skeptics' Guide to the Future, which is thinking about future technology
from, again, a critical thinking point of view.
Amazing. Thank you again.
What a great conversation.
If you enjoyed this one, I know you'll also enjoy my conversation with Dr. Daniel Amen,
who has some very unique theories that I wanted to challenge on this podcast.
They made for a very interesting conversation.
So scroll on back and find that one.
Huge thanks to Dr. Steven Novella for coming down to the city for this episode.
I really admire how long he's been fighting the good fight against misinfo.
And I think we could all benefit from being a little bit more skeptical these days.
If you enjoyed listening to him, check out his podcast, The Skeptics Guide to the Universe,
which has been running for over a decade.
I've got a long way to go to catch up.
As you heard him say, he's published several books under the Skeptics Guide banner.
Those are also linked down below.
Please be sure if you enjoy this episode to give us a like, a five-star review, a comment telling us what you enjoyed about the episode,
as it helps us find new viewers and listeners.
And as always, stay happy and healthy.
