Huberman Lab - Improving Science & Restoring Trust in Public Health | Dr. Jay Bhattacharya
Episode Date: June 9, 2025

My guest is Dr. Jay Bhattacharya, MD, PhD, Director of the National Institutes of Health (NIH) and Professor Emeritus of Health Policy at Stanford University. We discuss which scientific questions ought to be the priority for NIH, how to incentivize bold, innovative science especially from younger labs, how to solve the replication crisis and restore trust and transparency in science and public health, including acknowledging prior failures by the NIH. We discuss the COVID-19 pandemic and the data and sociological factors that motivated lockdowns, masking and vaccine mandates. Dr. Bhattacharya shares his views on how to resolve the vaccine–autism debate and how best to find the causes and cures for autism and chronic diseases. The topics we cover impact everyone: male, female, young and old and, given that NIH is the premier research and public health organization in the world, extend to Americans and non-Americans alike. Read the episode show notes at hubermanlab.com.

Thank you to our sponsors
AG1: https://drinkag1.com/huberman
David: https://davidprotein.com/huberman
Eight Sleep: https://eightsleep.com/huberman
Levels: https://levels.link/huberman
LMNT: https://drinklmnt.com/huberman

Timestamps
00:00:00 Jay Bhattacharya
00:06:56 National Institutes of Health (NIH), Mission
00:09:12 Funding, Basic vs. Applied Research
00:18:22 Sponsors: David & Eight Sleep
00:21:20 Indirect Costs (IDC), Policies & Distribution
00:30:43 Taxpayer Funding, Journal Access, Public Transparency
00:38:14 Taxpayer Funding, Patents; Drug Costs in the USA vs Other Countries
00:48:50 Reducing Medication Prices; R&D, Improving Health
01:00:01 Sponsors: AG1 & Levels
01:02:55 Lowering IDC?, Endowments, Monetary Distribution, Scientific Groupthink
01:12:29 Grant Review Process, Innovation
01:21:43 R01s, Tenure, Early Career Scientists & Novel Ideas
01:31:46 Sociology of Grant Evaluation, Careerism in Science, Failures
01:39:08 “Sick Care” System, Health Needs
01:44:01 Sponsor: LMNT
01:45:33 Incentives in Science, H-Index, Replication Crisis
01:58:54 Scientists, Data Fraud, Changing Careers
02:03:59 NIH & Changing Incentive Structure, Replication, Pro-Social Behavior
02:15:26 Scientific Discovery, Careers & Changing Times, Journals & Publications
02:19:56 NIH Grants & Appeals, Under-represented Populations, DEI
02:28:58 Inductive vs Deductive Science; DEI & Grants; Young Scientists & NIH Funding
02:39:38 Grant Funding, Identity & Race; Shift in NIH Priorities
02:51:23 Public Trust & Science, COVID Pandemic, Lockdowns, Masks
03:04:41 Pandemic Mandates & Economic Inequality; Fear; Public Health & Free Speech
03:13:39 Masks, Harms, Public Health Messaging, Uniformity, Groupthink, Vaccines
03:22:48 Academic Ostracism, Public Health Messaging & Opposition
03:30:26 Culture of American Science, Discourse & Disagreement
03:36:03 Vaccines, COVID Vaccines, Benefits & Harms
03:47:05 Vaccine Mandates, Money, Public Health Messaging, Civil Liberties
03:54:52 COVID Vaccines, Long-Term Effects; Long COVID, Vaccine Injury, Flu Shots
04:06:47 Do Vaccines Cause Autism?; What Explains Rise in Autism
04:18:33 Autism & NIH; MAHA & Restructuring NIH?
04:25:47 Zero-Cost Support, YouTube, Spotify & Apple Follow & Reviews, Sponsors, YouTube Feedback, Protocols Book, Social Media, Neural Network Newsletter

Disclaimer & Disclosures
Transcript
Since 2012, there's been no increase in American life expectancy.
From 2012 to 2019, literally, it was, well, not literally, almost entirely flat life expectancy,
whereas the European countries had advances in life expectancy during that period.
During the pandemic, life expectancy dropped very sharply in the United States,
and only just last year did it come back up to 2019 levels. In Sweden, the life expectancy dropped in 2020 and then came right back up by 2021,
2022 to the previous trend of increasing life expectancy.
Whatever investments we're making as a nation in research are not actually
translating into meeting the mission of the NIH, which is to advance the health and longevity of the American
people.
Because they kept saying, we don't care.
And so it's almost like big segments of the public feel
like they caught us in something as scientists
and we won't admit it.
And they're not just pissed off.
They're kind of like done.
I hear it all the time.
And again, this isn't the health and wellness supplement
taking anti-woke crowd.
This is a big segment of the population that is like,
I don't wanna hear about it.
I don't care if labs get funded.
I wanna know why we were lied to
or why the scientific community can't admit fault.
I just wanna land that message for them
because in part I'm here for them,
and get your thoughts on what you think about,
let's start with lockdowns, masks, and vaccines,
just to keep it easy.
And what do you think the scientific community
needs to say in light of those to restore trust?
So first, let me just say,
I don't think I'd be the NIH director
unless that were true,
unless what you said is true.
Otherwise, I wouldn't be the NIH director.
So I was a very vocal advocate against the lockdowns,
against the mask mandates, against the vaccine mandates,
and against the sort of anti-scientific bent
of public health throughout the pandemic.
I've also argued that the scientific institutions
of this country should come clean about our involvement
in very dangerous research that potentially
caused the pandemic.
The so-called lab leak occurrences.
Welcome to the Huberman Lab Podcast,
where we discuss science and science-based tools
for everyday life.
I'm Andrew Huberman, and I'm a professor of neurobiology
and ophthalmology at Stanford School of Medicine. My guest today is Dr. Jay Bhattacharya.
Dr. Jay Bhattacharya is a medical doctor and a PhD
and the director of the National Institutes of Health.
Prior to that, he was a professor of medicine
at Stanford University.
And I should mention that he did all
of his formal academic training at Stanford,
his undergraduate, master's, PhD and medical school training.
Today, we discuss the past, the present
and the future of publicly funded research
in the United States.
The National Institutes of Health is considered
throughout the world the crown jewel
of basic and medical research,
explicitly because the basic and
clinical research that it has funded has led to more treatments and cures for disease than
any other scientific enterprise. Basic research is focused on making discoveries
without any particular treatment or disease in mind when that work is done. It is absolutely
clear, however, that basic research provides the knowledge base from which all treatments
and cures for diseases are eventually made.
Today, Dr. Bhattacharya shares his vision
of which aspects of NIH are especially effective
and which need revising and improvement.
We discuss how scientific ideas are evaluated for funding
and what can be done to create more funding
for more ambitious projects
leading to treatments and cures.
This is a very timely issue because despite its strengths,
the NIH has gained a reputation over the last two decades
for favoring safer and less bold work
and therefore leading to fewer discoveries.
We also discuss what will be done
about the so-called replication crisis.
The replication crisis is, as the name suggests,
the inability for certain findings to be replicated.
Dr. Bhattacharya shares with us new initiatives
soon to take place that are designed
to verify findings early and to incentivize replication
so that knowledge base built by NIH science is accurate.
As some of you may know, Dr. Bhattacharya stepped
into a very public role during the COVID-19 pandemic
when he co-authored the so-called Great Barrington
Declaration, which argued against lockdowns.
He was also quite vocal against mask mandates
and he addressed vaccine efficacy versus safety,
especially for young people.
Those stances of course were very controversial
and he explains the logic for his stance on those topics.
That discussion leads into a very direct conversation
about vaccines more generally, not just COVID-19 vaccines,
but also measles, mumps, rubella vaccines,
and the very public and controversial issue
taking place right now about vaccines and autism.
We also discussed drug prices
and why Americans pay 10 times or more
for the same prescription drugs sold in other countries
and the relationship of that to public health.
I want to emphasize that the issues we discussed today
will impact everybody.
If you're a scientist, they certainly impact you.
If you're a physician, they impact you.
And if you're young, if you're old,
if you're a patient, if you're healthy,
if you're American or if you're outside the United States,
they will impact you.
Dr. Bhattacharya was incredibly generous
with his time and his answers,
directly answering every single question I asked.
Nothing was cut.
As a consequence, it's a lengthy podcast,
but I felt it was very important
to get into the nuance of these issues
so that you, the listener, can get real clarity
on where things stand and where they are headed.
As a final point, my graduate student training,
my postdoctoral training, and my laboratory,
first at the University of California, San Diego,
and then at Stanford, where it is now,
were funded by the NIH.
So you'll notice throughout today's episode
that I'm very impassioned by the issues at hand.
At the same time, I strive to include questions
that I keep hearing from my followers on social media
and from listeners of the Huberman Lab podcast.
Some of those come from ardent supporters of the NIH
and others as you'll see are more skeptical
or even critical of the NIH.
I strive to represent all those voices
during today's conversation.
I certainly have my own opinions and stance
on many of those issues.
And I do voice some of those throughout today's episode,
but again, I try to be thorough and broad encompassing.
As you'll see, Dr. Bhattacharya cares deeply
about basic science and the future of medicine and health
in this country and throughout the world.
He is our appointed leader
in the scientific discovery and public health enterprise.
And I'm grateful to him for taking the time
to share his vision and for his willingness
to listen to the many and wide range of voices,
including those critical
on these literally life-sustaining topics.
Before we begin, I'd like to emphasize
that this podcast is separate
from my teaching and research roles at Stanford.
It is however, part of my desire and effort
to bring zero cost to consumer information about science
and science related tools to the general public.
In keeping with that theme,
this episode does include sponsors.
And now for my discussion with Dr. Jay Bhattacharya.
Dr. Jay Bhattacharya, welcome.
Thank you for having me, Andrew.
I've been wanting to do this for a very long time.
We are colleagues at Stanford,
although now you've formally moved to Washington to be
the director of the National Institutes of Health, but you've played such an essential
role in shining a light on certain aspects of public health, mostly that happened during
the time of the pandemic, related to lockdowns, vaccines, et cetera.
We'll talk about that. But now you are in the chief position
of directing research dollars and the initiatives
of what is arguably the most important health organization
in the entire world, not just in the United States.
So thank you for taking the position.
Thank you for being here.
And the first question I have is,
for those that are not familiar,
what is the, not just
stated mission of the NIH, but what is the really essential mission of the National Institutes
of Health?
So let me start with the stated mission, because the stated mission is something entirely worthwhile.
Anyone who listens to it should say, yeah, we should do this.
It is to support research that advances the health and longevity of American people.
And of course, the research that we do doesn't just advance American health, it advances
the health of the entire world. For a very long time, the NIH, the National Institutes
of Health, has been the premier biomedical organization supporting research that translates
into almost every drug that you take.
The NIH has had some role in developing almost every, you know, all the fights over
what's the right thing to do to get good sleep, what's the right thing to do for your diet.
The NIH has played some role. And for American biomedicine, it's the essential institution.
It supports the careers of a very large number of biomedical scientists around the world,
and specifically me.
I mean, I got NIH funding for most of my career.
I was a reviewer for the NIH, a scientific reviewer for grants.
It's an absolutely essential organization.
Yeah, I agree.
My lab ran on NIH money primarily, so thank you taxpayers, American taxpayers.
And I think for most people when they hear that word health and what you just said about
the mission statement for NIH, there is this assumption that most of the work being done
at or funded by NIH is human clinical studies,
or even mouse studies that are testing a particular drug,
a dose response curve, you know,
what's the lethal dose of this?
What's the half-life of that?
But as you and I both know,
much of what NIH does is fund basic research,
research for which we don't have any clear idea,
maybe even the foggiest of ideas,
that there could be a potential upside for human health.
Things like what controls the pigmentation patterns
of the noses of Doberman Pinscher dogs.
I bet you we could find that grant.
So when we, maybe not anymore,
but when we step back and we look at basic
versus applied, AKA clinical research, what percentage of the NIH budget,
which we'll talk about in a moment,
is directed toward basic research
and what percentage is directed toward clinical studies
or the testing of some drug,
what we call preclinical trials,
testing in mice or non-human primates, et cetera.
So there's big fights over exactly what that demarcation line is.
So I'm not going to commit to a single number.
What I will say is that a substantial part of the NIH portfolio appropriately focuses
on basic science.
Basic science meaning fundamental biological facts that can be used in many, many, many drug studies, other research where you don't
necessarily know specifically in advance when you're doing it what the applications are
going to be.
The NIH very appropriately funds that work, especially work that's not patentable, right?
Because no drug company has an incentive to do that work,
and yet it's vital.
Let me give an example, just to put some meat
on the bone of it, of something that the NIH didn't fund,
but actually is within the mission of the NIH
to have funded if it had.
Let's just take the research that led to the understanding
of the structure of DNA as a double helix. Watson, Crick, Rosalind Franklin, all those folks in England in the 1950s.
Well that work is not patentable.
It's hard to imagine like someone trying to patent the double helix structure of DNA,
right?
So that means that it's not going to be in the interest
of any specific company to support those scientists that discovered that, and yet it's vital to
almost everything we do in biology, right? The NIH very appropriately funds that kind
of work, the work that is not in the interest of any particular company to do. It's what's known
as a market failure, if you think like an economist.
The market failure is there's no incentive of the private sector to do that kind of basic
work, and yet that basic work really advances human health in ways that are sometimes unpredictable.
And so it's correct and right that the NIH continues to fund that kind of basic science
work as well as the applied work where you take the advances and say, okay, well, does this, here's a drug that might work to treat this disease,
right? That's, that also, that kind of work also is appropriate for the NIH to fund. There's
an interesting dividing line where the question is like, what should be left to the private sector
to do? Right? So the private sector tends to fund large-scale clinical trials
at sort of the tail end of the development process.
Sometimes they'll fund earlier clinical trials.
But the private sector has an incentive
to fund those kinds of studies, because that gives them
exclusivity, patents, things like that.
So why should the taxpayer pay for that when there's already private actors that are willing
to pay for that?
So there's this interesting dividing line.
You want the NIH work to be translated just so that patients can have it.
That means the private sector has to be involved to some degree.
It certainly has to be using the products of the NIH research.
But that dividing line is fuzzy and controversial.
Same thing with between basic and applied.
As I said earlier, there are huge, almost religious wars over where that dividing line
is.
Are you a basic scientist or are you an applied scientist?
So the exact numbers don't make sense to me, given that religious war, but the
fundamental thing, which is we have to fund basic work, that I believe in pretty strongly.
Well, as a basic scientist, I'm not a clinician, but I worked on clinically relevant issues
in my lab related to restoration of vision in blinding diseases like glaucoma, things
like related to anxiety, et cetera.
I also know that we have some beautiful cases, as you pointed out, of basic research leading
to important, I will say cures to serious diseases.
And there was no thought at the beginning of that basic research that the outcome would
be related to human health.
I'll just briefly mention a couple.
I want to ask more questions than I want to speak, but my scientific great-grandparents, David Hubel and Torsten Wiesel, did the
early work defining the structure and function of the visual system, first in cats, then
in monkeys. Eventually it was clear the same was true of their findings in human work.
And early plasticity, changes in the visual system if, say, there was a cataract or a droopy eyelid,
or divergent eyes, strabismus, or convergent eyes,
what we call cross-eyedness, and things of that sort.
And we know on the basis of that work
that children need corrective surgeries early
or else the brain is forever blind
to the perfectly fine eyeball
if the eyes aren't correctly aligned.
Okay, in other words, the old practice of,
oh, you don't wanna put kids under anesthesia,
it's too risky, et cetera.
The work of Hubel and Wiesel saved the vision
of millions and millions of children in the US and abroad.
People with cataracts have those cataracts removed early
and on and on.
And I would also say as a second example
that much of the basic work on cell biology
that took place in the second half of the last century,
you know, where are the mitochondria?
What's in the mitochondria?
Electron microscopy, light microscopy.
Let's talk about all the folds in the mitochondria.
Let's talk about the Golgi, all that basic cellular biology
that is the stuff of textbooks was, as we say, necessary,
perhaps not sufficient, but necessary for the development
of essentially
every existing cancer treatment.
But the cell biologists that did that work weren't thinking about cancer until much later
in that work.
So those are just two examples that I would argue NIH had funded a tremendous amount of.
And the reason I'm setting it up this way is because I think nowadays, part of the reason
you're here, is that we are potentially looking at a redirecting
of a significant amount of the research dollars that taxpayers provide to the NIH and the
NIH to labs away from basic research, which understandably has some people concerned.
That said, in order to translate things from the lab to the clinic, we also need to think
about translational work.
So I just put that out as kind of an offering to elaborate.
Andrew, I have no intention of implementing that, of shifting the balance between, I think,
as I said, basic science work and applied work are both tremendously important parts
of the NIH portfolio. And the question to me is, what's scientifically important and interesting in terms of accomplishing
the NIH mission, which is, again, advancing the health and longevity of the American people?
Both basic work and applied work can contribute to that mission. And in fact, I think any large-scale scientific institution
that seeks to support the mission that the NIH has, has to have both in it. So I don't
have any intention of gutting basic science. I personally, I do epidemiology, health policies, health economics, statistics.
That's very, very applied.
But I have great admiration for my colleagues like you who do basic science work.
I think it's what advances and fuels the next generation of advances.
So it's going to stay part of the NIH mission as long as I'm the director.
Thank you.
I and many others will be very relieved to hear that answer.
I think there is this fear that the new administration is going to eliminate basic research somehow
and replace it with only applied research and clinical studies, and that somehow, and
this is not my belief, that there's going to be some private interest related to that
and it's all going to get co-opted in some kind of cloudy way.
What I'm hearing from you is that is not the direction
that NIH is going to take.
It's not, in fact, I've not heard anyone
inside the administration tell me to do that
or suggest that as the appropriate path.
I just, I mean, everyone I've spoken to about my vision
has said, yes, that makes sense.
Great.
I'd like to take a quick break
and acknowledge one of our sponsors, David.
David makes a protein bar unlike any other.
It has 28 grams of protein,
only 150 calories and zero grams of sugar.
That's right, 28 grams of protein
and 75% of its calories come from protein.
This is 50% higher than the next closest protein bar.
David protein bars also taste amazing.
Even the texture is amazing.
My favorite bar is the chocolate chip cookie dough.
But then again, I also like the new chocolate
peanut butter flavor and the chocolate brownie flavor.
Basically, I like all the flavors a lot.
They're all incredibly delicious.
In fact, the toughest challenge is knowing which ones to eat
on which days and how many times per day.
I limit myself to two per day, but I absolutely love them.
With David, I'm able to get 28 grams of protein
in the calories of a snack,
which makes it easy to hit my protein goals
of one gram of protein per pound of body weight per day.
And it allows me to do so
without ingesting too many calories.
I'll eat a David protein bar most afternoons as a snack.
And I always keep one with me
when I'm out of the house or traveling.
They're incredibly delicious.
And given that they have 28 grams of protein,
they're really satisfying for having just 150 calories.
If you'd like to try David,
you can go to davidprotein.com slash Huberman.
Again, that's davidprotein.com slash Huberman.
Today's episode is also brought to us by Eight Sleep.
Eight Sleep makes smart mattress covers
with cooling, heating, and sleep tracking capacity.
One of the best ways to ensure a great night's sleep is to make sure that the temperature of
your sleeping environment is correct. And that's because in order to fall and stay deeply asleep,
your body temperature actually has to drop by about one to three degrees.
And in order to wake up feeling refreshed and energized, your body temperature actually has to
increase by about one to three degrees. 8Sleep automatically regulates the temperature
of your bed throughout the night,
according to your unique needs.
8Sleep has just launched their latest model, the Pod5,
and the Pod5 has several new important features.
One of these new features is called Autopilot.
Autopilot is an AI engine that learns your sleep patterns
to adjust the temperature of your sleeping environment
across different sleep stages.
It also elevates your head if you're snoring,
and it makes other shifts to optimize your sleep.
The base on the Pod 5 also has an integrated speaker
that syncs to the 8Sleep app and can play audio
to support relaxation and recovery.
The audio catalog includes several NSDR,
non-sleep deep rest scripts,
that I worked on with 8Sleep to record.
If you're not familiar, NSDR involves listening
to an audio script that walks you
through a deep body relaxation
combined with some very simple breathing exercises.
NSDR can help offset some of the negative effects
of slight sleep deprivation,
and NSDR gets you better at falling back asleep
should you wake up in the middle of the night.
It's an extremely powerful tool
that anyone can benefit from the first time and every time.
If you'd like to try 8Sleep,
go to 8Sleep.com slash Huberman to get up to $350 off the
new Pod 5.
8Sleep ships to many countries worldwide, including Mexico and the UAE.
Again that's 8Sleep.com slash Huberman to save up to $350.
I'd like to talk a little bit about something that most people perhaps are not familiar
with in terms of its acronym, but it's a very important issue, which is this notion of IDC,
indirect costs.
So my lab ran on NIH grants for many years, and my lab and other labs would apply for
grants.
If we were fortunate enough to get one of those grants funded, we might receive, let's
say, a typical grant would be a million dollars over the course of four years, so 250 a year
for four years.
But then in addition to that, my home university, Stanford, would get some percentage above
that, not a percentage of that million.
I would still get the million to spend on mice, antibodies, graduate student salaries, et cetera.
But some percentage of that one million, and I think at Stanford it's roughly fifty-some percent,
so let's say another 500,000, would be given to the university for so-called indirect costs.
This is not something that just happens at Stanford.
This is typical of every single NIH grant that I'm aware of. The indirect costs pay in principle for administrative handling of the grant and the various infrastructure
things related to the mouse care, keeping the lights on, having a janitor empty the
trash at night, these sorts of things.
IDC, as it's called, has become a hot-button issue
for two reasons.
One, as soon as the new administration came in,
the Trump administration came in just this last year,
they cut the IDC rate across the board,
down from, say, 55% at Stanford, other places were 75%,
some places were as low as 30%.
They said, nope, we're not paying this stuff anymore.
The National Institutes of Health, in other words, the taxpayers, will pay up to but no
more than 15, one five percent above any given grant.
I'd like your thoughts on that because this weaves into some bigger issues that relate
to a lot of the sentiment of, you know, why should taxpayers be paying for these universities to run, especially when universities, some, not all, have large endowments?
Right, so actually I'll just preface my remarks by saying that
there was litigation against that 15%, which essentially said the government couldn't
impose that 15%, so it's been blocked. Yes.
So right now the rates are whatever they were.
They're not the 15%, based on that court order.
I can't comment on the litigation, and I can't comment as a result of the fact that I'm now a member of the government.
the government.
It's like I'm not allowed to do that.
But I do want to talk about the broader issues related to indirect costs.
And I want to put it in a broader context.
So the context is this.
So in the mid-'40s, Vannevar Bush,
who was one of the main science administrators in the United
States, he made an argument that the federal government should
partner with universities in organizing
the scientific infrastructure of the United States. The universities were tremendously important parts of that infrastructure, and
the federal government had an appropriate role in supporting the universities of the
country to do scientific research of interest to the American people.
So the indirect costs kind of structure came out of that commitment.
And frankly, it makes sense to me.
It's appropriate that the federal government have
some role in deciding how to support
the universities of the country to be organized
around research that is in the American interest.
The question is, how much should it be?
How should it be structured?
In what way?
Those are the key policy issues that we're really talking about.
We're not talking about should there be some federal support for the universities.
The question is how?
Let me just step back and talk about like the current structure, the way it works, because
it's really non-intuitive.
So first, you're a brilliant scientist.
You apply to the NIH.
You get a grant that gives you a million dollars a year.
I'll just make a clean number.
So a million dollars for the next five years,
the federal government is going to give you money to run your lab
and do all this kind of stuff.
You work at Stanford.
Stanford has a 55% indirect rate.
So that's on top of the million dollars a year.
The administrators at Stanford then will get $550,000.
So for your million dollars of work,
the taxpayers will pay $1.5 million, roughly,
to Stanford a year.
Now, as you said correctly, that half a million dollars will go to the fixed cost
of doing research, right?
The stuff that's not specific to the,
the lab you're running, the people you have to hire
to do the work that you propose, but the fixed cost,
the building, the maintenance, the, you know,
all the stuff.
Someone's gotta take the biohazard stuff away,
all that stuff.
And it's not just you, there are other folks
who are using the same radioactive materials
and so they can support many, many research projects,
not just one, right?
So it's funding that kind of work, right?
And again, that's a legitimate use of that money.
So, right.
Here's the way that the economics of this work.
In order to get fixed cost support, you have to have brilliant scientists like you that
can win NIH grants.
If you don't win NIH grants, Stanford doesn't get the 550.
But in order to attract brilliant scientists, you have to have the infrastructure where
the scientists can do their work.
So it's a ratchet, right?
So in order to have the money, the infrastructure support, fixed cost support, you have to have
scientists. In order to have the scientists, you have to have the infrastructure. It's
a ratchet that essentially makes it so that we concentrate the federal support, the money, in a select few universities.
There are winners and losers.
And so the scientific infrastructure of the country is concentrated in a relatively few
universities, mainly on the coast.
And there are brilliant scientists in other places that are not at those select few universities
that have trouble getting NIH grants, even though they're brilliant scientists.
It draws the federal support away in a structure that essentially says lots and lots of states,
lots and lots of institutions are going to have trouble getting the infrastructure support
that they need in order to have the scientists come there.
So that's the basic economics of the way indirect costs actually work.
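To make the arithmetic above concrete, here is a minimal sketch, not anything the NIH actually runs, that simply applies an indirect cost rate on top of a direct award; the $1 million figure and the 55%, 30%, and 15% rates are the ones mentioned in this conversation.

```python
# Minimal illustrative sketch (hypothetical, not NIH policy code):
# an indirect cost (IDC) rate is applied on top of the direct award,
# so the taxpayer's total is direct + direct * rate.

def total_award_cost(direct_dollars: float, idc_rate: float) -> float:
    """Direct costs go to the lab; indirect costs go to the institution."""
    indirect = direct_dollars * idc_rate
    return direct_dollars + indirect

for rate in (0.55, 0.30, 0.15):
    total = total_award_cost(1_000_000, rate)
    print(f"IDC rate {rate:.0%}: taxpayer pays ${total:,.0f} per year, "
          f"${total - 1_000_000:,.0f} of it to the institution")
```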
And so the question is, is that the right structure?
There are also questions about, you know, like, so for instance, your science, you're a
basic scientist, your science involves lots and lots of fixed costs, right?
Radioactive disposal, all the stuff. The research I did, epidemiology, health policy,
statistics, it's basically a computer. Me with a data set and a computer, I can hire
some biostatisticians to help me or...
We call that a carpet lab.
Yeah. And so, does the university need the same indirect cost support to support my fixed costs as
it does yours?
And the answer is obviously no.
And yet, that's the structure we currently have.
So there are policy questions to be answered about have we structured the indirect cost
support in the right way?
Are we inducing the right incentives?
Can the American taxpayer be sure that we're auditing the use of the indirect costs appropriately?
Those are the policy questions I think that are an issue in the indirect cost fight.
Again, I won't get into the litigation.
I'm not allowed to actually comment on that.
So I wanted to abstract it to a higher level because I think the policy question is not
should the federal government support universities to do this
kind of research, to have sort of the facilities.
The question is how should it be distributed across the country?
To what extent should the researchers get it versus the administrators get it?
And then on the back of that, there's also other research institutions that have very
different indirect cost recovery
rates for the same university, right? So like, you know, I think Gates Foundation
is, I don't know the exact number, like 15%, something on that order, whereas
like the NIH is 50% to the same university. That looks funny.
The question is, I mean, sometimes I've heard, well, the Gates Foundation puts more
of the money into the directs, right?
So maybe they'll charge you for the rental cost of the building or something.
I don't know exactly.
I'm very familiar with Foundation versus NIH money, and it differs by foundation.
But typically, a university, and I've been at two, I'm tenured at Stanford,
but my lab started off at University of California, San Diego, a public university.
Typically when foundation money comes in, the university imposes a minimum of about
8% administrative cost just for handling, just to do the paperwork, to pay the admins
that do the handling.
There's something very important in what you're bringing up.
There are actually two issues, so I want to backtrack to one issue to make sure that people
really understand this, because I realize that some of this might sound a little bit
down in the weeds, but it's just so important.
The first thing that I really want to draw from earlier in our conversation, as you pointed out, that the current model of NIH is that
taxpayer dollars pay for the basic research and for the exploration of whether or not
the findings from that basic research will benefit disease.
And then any technology, device, drug, whatever, that results is brought to the public through the
private sector.
Put differently, the taxpayers fund the research and development, but they don't capture any
of the upside from the private companies that make money selling you the SSRI, selling you
the hopefully someday novel Alzheimer's treatment.
We don't yet have a satisfactory treatment for Alzheimer's
as we'll get into.
So the general public who are not basic scientists,
in other words, if I take off my hat as a basic scientist
and I say, yeah, I'm a taxpayer.
I give a significant amount of my income
to the state of California and to the federal government.
I like science.
I certainly would like to live a long, healthy life.
And I hope some of that science helps me do that.
But I'm going to have to buy back the results of what I paid for.
That's where I think a lot of the general public sit.
And I'm not saying they don't like, appreciate, and respect science and scientists, but to
any rational person, you don't need a degree in economics to say, that kind of
sucks.
And made worse, if I want to read a paper that was published with the work that I provided
for my tax dollars, I have to buy that from the journal.
By the way, that changes in July.
Okay.
Yeah.
I mean, this is a huge issue.
That's one of the decisions I made.
Yeah, it's $34.
Not anymore.
Listen, I've been grateful to publish in Nature and Science.
You know, these are like Super Bowl rings for scientists.
I'm sure it's part of the reason I got tenure at Stanford, and I had great fun doing the
work, and I believe in the work.
It stood the test of time.
But were I not an employee of Stanford that pays for the subscriptions to those journals,
I'd have to buy the work back using my tax dollars
that funded the work.
This is crazy.
This is like me giving you the money
for the supplies to build a home.
You get to live in the home.
I don't even get to see the home.
I have to purchase a ticket to see the home.
That's how irrational it is from the perspective
of somebody who's just not understanding the pipeline
and they should, of basic to applied research.
So let's just, I want to return to that briefly
because this relates, in my opinion, directly to IDC.
So that's a crazy picture for anyone
that doesn't understand how one piece relates
to the next relates to the next.
And now that I'm in public, I'm in media, I'm public facing, what I've come to learn
is that the general public is very smart.
Max Delbruck was right, assume infinite intelligence and zero knowledge, but it's very hard for
people to connect more than two or three dots.
They're busy.
So we could talk all day about how this leads to that, leads to this, leads to that, the brick on the wall model,
and then there's this treatment, and they're like,
I'm paying for this stuff,
and I can't even read the paper about it,
let alone glean the positive benefits
without paying out the nose.
Yeah, so a couple of things, let me go backwards,
because you had two major issues you brought up.
So first, the journal thing.
My predecessor, Monica Bertagnolli, who was the NIH director, the National Institutes of Health
director before me, she made a decision, a really great decision, essentially to say
if the NIH supports a scientist's work and then that work leads to a journal publication,
that publication ought to be available free to the public immediately upon publication.
You're not allowed as an NIH-funded scientist to publish in a journal that doesn't have
that as a policy.
That policy was due to go into effect in December of this year.
I think it's a great policy because I agree with your analysis entirely.
If the American taxpayer pays for the research,
why shouldn't the American taxpayer
be able to read the research for free?
Because they already paid for it.
Why did they pay a second time on the back end
after the research is published?
And it's not like it's free if you're a university employee.
The university has to purchase a very costly subscription
to the journal in order to, for a faculty member
to read the papers.
Now I'm lucky enough I can access pretty much any paper
in the world, but that's because Stanford spends
millions and millions of dollars and it's made worse.
I forgot the one real stinger in this.
When you publish a paper, you use taxpayer dollars
to pay the journal.
That's correct.
Thousands of dollars to publish it,
then they sell it back to the general public.
Nature charges $12,000 for, like, the major journals.
But okay.
So that's a racket.
Right, yeah.
Sorry, I realize I'm talking more
than I'm asking questions.
No, no, this is, I mean, like I'm agreeing with you.
So, like I said, Monica Bertagnolli,
the previous NIH director, made a policy that those papers
have to be available to the public for free,
starting in December of this year.
I made a decision, one of my first things I did was I said, why wait till December?
Let's just do it in July.
Great.
Thank you.
And so starting in July, what you just said will no longer be the case.
The Americans and everybody will have access to the papers that the Americans already paid
for if they're NIH funded, for free.
Thank you, on behalf of,
literally, this isn't a political statement,
on behalf of myself and every other American citizen,
thank you.
We've been paying for this research forever,
and I've had to pay to get it back.
I mean, it's not like journal editors make that much money,
but the journals make a fortune.
So Macmillan Press, Elsevier, I've done my homework on this.
We're talking billions of dollars in income.
And the marginal cost of publishing now is effectively zero.
You put it online.
Right?
And there's some costs for maintaining the web page and all that, and there's some editorial
staff.
But the level of investments
the public had been making for the NIH to then be asked
to pay $30, $50, $100 for the papers themselves that are
published, I mean, it's just insulting.
And actually, it impedes the progress of science because
it makes it so that there's this barrier where regular
people can't get access to the things that
scientists are talking about, right?
So there's like this public transparency aspect of it where the scientists ought to be engaging
with the public about their ideas, right?
The idea is that we are just living in this ivory tower and only we get to decide what's
true and false and then we impose it on the public.
During the pandemic, we saw the folly of that model.
So it's, I think, a small step forward,
but an important one.
I think you're being humble,
and I'd like to point out that I think it's a big step
forward because it's not just a token to the public
for all their dollars over the last,
how old is the NIH?
100 and some years.
100 and some years.
It's really what should have happened a long time ago.
So thank you very much.
And I guess thank you to Monica as well
for initiating this, but thanks for accelerating that.
I think when people start to understand
how the NIH works a bit,
and they understand this IDC thing,
this indirect cost thing,
the question comes to mind,
how much of the cost of running science at a university, public
or private university, should the public be responsible for?
I mean, that's a really interesting question.
Yeah.
I mean, I think... So let me tie it back.
As you said, these are all interlinked topics.
Let me tie it back to something else you just said earlier, which is, okay, so the NIH funds
your work.
Your work then results in maybe not necessarily you, but somebody else who uses your work
to create a product that they patent and they make a lot of money off of, they sell it to
the public.
At least indirectly or sometimes directly, those patents are funded by American taxpayers.
Well, the NIH also has a big intramural program, that is, scientists who work directly
for the NIH.
They make some advances and sometimes those advances result in patents.
Those patents then result in products that are sold above marginal cost.
And those products are paid for, again, by American taxpayers, because the patent restricts entry
into those markets.
So the question is, how much should the American taxpayer be funding for this kind of work?
Should private actors be allowed to make money off of this resource the American taxpayer funded?
As an economist, I'll say the question is complicated, and the reason
it's complicated is you might say, okay, well, there should not be a patent at all, right?
It shouldn't be patented at all.
There was a law called the Bayh-Dole Act in the mid-'80s,
I forget the exact date, that essentially said that NIH-funded work ought to be patentable.
The reason was that it's the last mile problem.
You have some fantastic basic science research that has some fantastic biomedical results, but there's no way for a patent,
right?
Then there's no interest to develop it into a product that then advances health.
The wisdom of the Bayh-Dole Act was to say, well, look, if you allow there to be a patent on the
last mile, then now we've created a commercial interest to take the basic science advances and translate
them into something that actually benefits people.
Now, the price is going to be higher, at least while the patent is still in place, but then
eventually the patent goes away and then the thing will be available to the public at large
to accelerate the transition from the basic science investments we make to things that
actually benefit the
public very directly.
That's the, so in a sense there's a trade-off there, right?
So you're trading off the fact that for a while there's products funded by the American
taxpayers that are at higher prices than it kind of would be in a purely competitive market
for the fact that you get more rapid access to the benefits of that investment.
So that's the basic trade-off at play,
and that's why I say it's complicated.
Well, and when I joined UCSD and when I joined Stanford,
I signed something saying if I make a discovery here
that translates to an important device or drug,
that the university is gonna capture some of that upside.
And Stanford is a place where there's,
let's just say, a history of people going into biotech and neurotech, in part because of the influence of the engineering school.
There's actually a great joke about Stanford that a former president of Stanford told me,
which is there's only two kinds of Stanford faculty, Stanford faculty with companies and
Stanford faculty with successful companies. A discussion for another time.
But it's commonplace for faculty at Stanford to have companies to split their time between
the university and their companies.
But most places, like most of the NIH grants that I reviewed when I was on study section
reviewing grants, most of the great work I would hear about at meetings came from people at universities
who were really focused on charting the cell types in the retina, understanding the activity
patterns in the brain during sleep and how it relates to neuroplasticity.
Very few of them were involved with companies in a serious way, let alone had their own
companies.
So the taxpayers, who make up the majority of our listenership,
are giving money to universities, and the universities are spending that money making discoveries.
I think most of the time that the university and the scientists who do that work are not capturing the upside.
The general public isn't capturing the upside. They're actually paying for the upside.
So it's a little bit like the journal situation. That's why I brought that up. It's a little bit like the journal situation all over again,
where we're as taxpayers funding a lot of this,
and then have to buy it back over and over again.
Okay, so there's one other complication
about the United States versus the rest of the world.
So let's just put that aside for just a second.
Let's
get back to that. Before I get there, I want to say in response that in fact, when you
take a medication or when you have some health advice that actually works, often the NIH
research was involved somewhere in the path leading up to it.
And there are huge returns to that, right?
If you have a drug that treats your disease well, you know, you have congestive heart
failure and now you have a drug that allows you to live longer, more health, you know,
in a way that allows you to live more fully.
Or if you, you know, have diabetes and you slow the progress
of the disease so it doesn't result in your kidneys failing, you're going blind or whatnot.
Those are advances that are really worthwhile.
And even if the price is higher than marginal cost, it still could be very worthwhile.
So you take Metformin, it's a very cheap drug now, but once upon a time it was a patented
drug and you prevent the progress of type 2 diabetes, that's a big advance for patients.
So the value that you get from the NIH-sponsored research then is potentially very, very high
in terms of improving your health, even more than the marginal price for the drugs that you end up paying or the products or the advice or whatever it is.
So you're saying it was a good investment for the taxpayer.
Yeah, it was good even for the taxpayer, right?
Now I wanted to put aside the business about the international, like the US versus the
rest of the world.
Now I want to bring that to the forefront.
It is also true that American taxpayers and Americans pay somewhere between two to
ten times more for the same product, the same drug product as people in Europe pay.
Why is that?
There are, again, a lot of complicated reasons for that, but I mean, just, it's a
very, very simple observation. There's something in economics called the law of one price, right?
When you have a market in one country where the price is
10 times more than in another country.
What you'd expect is somebody to go buy the goods from the other country, from the cheap
country, let's pay the cheap price, then go resell it in the country that has a high price.
And now what would end up happening is that you'd get an equalization of the price.
As long as there's the capacity to move across and essentially close this arbitrage opportunity
through competition, you'd see those price differences collapse.
And yet, for decades, Americans pay two to 10 times more for the same product, often
made in the same manufacturing facility, than Europeans do.
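As a toy illustration of that law-of-one-price logic, here is a hypothetical sketch, with made-up numbers loosely echoing the $300 versus $50 per month example that comes up later in the conversation, of how re-importation arbitrage would pull two prices toward each other.

```python
# Hypothetical illustration only: arbitrage via re-importation narrowing a
# price gap between two markets. The prices and convergence rate are made up.

us_price, eu_price = 300.0, 50.0   # assumed starting prices per month
closure_rate = 0.2                 # assumed fraction of the gap closed per round

for round_number in range(1, 11):
    gap = us_price - eu_price
    us_price -= closure_rate * gap / 2   # resellers undercut the expensive market
    eu_price += closure_rate * gap / 2   # extra demand raises the cheap market's price
    print(f"round {round_number}: US ${us_price:.0f}, EU ${eu_price:.0f}")
```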
And there are, again, complicated reasons why, but it has to do partly with the way that American health insurers interact
with drug companies.
Drug companies essentially use Americans as a way to fund their research and development
efforts.
That's what they say.
The higher prices that we pay fund the last mile research that the drug companies do to
test the new products.
Are you saying the last mile research is the most expensive
because it's the stage four clinical?
Yeah, the straight through clinical.
The safety stuff, though,
right before we go into humans at large.
Yes.
We want to know if anyone's gonna drop dead.
Yeah, so that's the argument that they make,
that the drug companies make, is that,
well, yes, Americans are paying this high price,
it's really worth it to do that.
And then they go to Europe and Europe says,
well, we're not gonna pay those high prices.
We're going to charge you.
If you're going to market the drug in France and Belgium and in Germany or wherever, you
can do it, but you're going to have to charge us essentially marginal costs.
So if I understand correctly, the United States taxpayer is funding the late stage and most
expensive research and development that the drug companies do.
They sell the drugs to us at a premium and they use the difference between the real cost
and the sort of allowed cost abroad to make it very cheap overseas.
In other words, we are paying for the insurance,
so to speak, that the drugs that are marketed in Europe and elsewhere are safe.
Yes.
So the taxpayers in the United States are funding the basic research and the clinical
late stage research for the entire world.
Yes, in large part. I mean, like Europe does have some institutions that invest in basic research, so it's not
entirely zero, and there are, of course, private foundations that do it, but through the NIH,
that's the single largest investment in basic science research in the world, and also applied
research. And also, by paying higher drug prices in the United States relative to the rest of the world,
we are funding the phase three trials, all the research and development efforts that
happen at the tail end of the research pipeline that the drug companies do.
So essentially, American taxpayers are the piggy bank
for the world for almost all of this research pipeline.
Wow, okay.
What is being done to bring drug prices down
in the United States?
I heard this recently as a press release
from President Trump that drug prices in the United States
are soon to come down.
Knowing what I know now, based on what you just told us,
the immediate question becomes,
who's going to pay for that late stage safety research?
I mean, it's not expensive
because it's fun to do expensive research.
It's not expensive because they're still exploring
the basic chemistry of these molecules
or functioning of the devices.
It's expensive because you have to make sure
that people aren't going to drop dead
or form some other worse pattern of illness
through the use of these drugs.
And that means a lot of human subjects
and many, many measures.
It's not just one endpoint, like did it lower blood sugar?
It's like, did it lower blood sugar?
And also, did you blow a gasket in here,
some capillary in a critical part of your brain?
So, I mean, this is a very expensive work.
So it still needs to be done is what I'm saying.
Who's gonna pay for it?
Okay, so let me just take a couple of cuts at this.
So first, like that phase four surveillance,
that happens after the drug's been marketed.
That's typically the FDA that conducts that work.
NIH can fund some of it, but it's mostly the FDA that tracks the safety and efficacy of
drugs in broader populations after the drug has been approved for use.
So again, American taxpayers are paying for that.
The phase three studies, the studies of large-scale clinical studies to check the effectiveness
of a drug, check the, again, the safety profiles
of larger populations.
That's typically the drug companies paying for that, right, in principle.
But then American taxpayers pay for that with higher drug costs.
President Trump, in the last couple of weeks, issued an executive order essentially saying
we have to make the other countries of the world pay their fair share of this.
So he put an executive order in place with various mechanisms, if you want to talk about
some of those mechanisms, that will reduce the difference in price between what the US
pays and what the rest of the world pays.
What will likely happen is that Europe will pay a slightly higher price, again funding
the research and development efforts to do that last mile of research.
The US will pay a lower price, and so the world will share that R&D burden more equally
than we currently do.
Currently it's American taxpayers on whose shoulders that burden of R&D currently falls.
What President Trump has said is that that's not an equilibrium that should hold,
that there ought to be policies that allow us to equalize those prices. And the kind
of mechanisms used include things like drug price discussions in trade negotiations,
so there's a link to the tariff policies he's implemented, and allowing re-importation
of drugs.
So the idea is that, let's say I'm in Europe and I'm charging basically nothing for some
drug and you're the United States, someone can come to me, buy the drugs from Europe
or Canada or wherever, bring them to the United States, resell them at a much cheaper price and make a little bit of money. That then would equalize
the price. And there are various other mechanisms to try to bring the United States much closer
to the price in the rest of the world.
It's not that the R&D won't happen.
It's just that the prices everywhere will be more equal so that the burden of R&D is
shared more equally across the developed world.
What's to say that these other countries won't simply say, no, we're not going to absorb
more of the cost?
People don't like to see prices go up.
They're comfortable with seeing prices go down for obvious reasons.
I can think of one example, maybe not the most critically important example in most
people's minds.
There's a class of drugs that was released last year
or about last year called the DORAs.
These are drugs that encourage sleep
by suppressing the wakefulness mechanism
as opposed to promoting the sleepiness mechanism
in loose terms.
They have much lower abuse potential
than a lot of other sleep medications.
And given the essential role of sleep
in mental and physical health for, you know,
and I'm a strong believer that behavioral tools,
sunlight, et cetera, are critical,
but some people truly struggle with, you know,
clinical grade insomnia and it's extremely detrimental.
It's widespread.
These drugs are very expensive,
$300 a month or more in the United States.
Knowing what I know now,
just the idea that some of that $300, let's say, let's make up
a number, $200 of those dollars is to cover the research costs so that in Northern Europe,
it can be available for $50 a month.
That borders on upsetting for me.
Yeah, it is upsetting.
I think I understand why President Trump issued that executive order.
It's upsetting for me too.
It makes no sense that the American taxpayer should bear the burden of these R&D expenditures
when there are lots of rich countries in the world.
Why shouldn't it be more equally distributed?
The question is like what will happen?
How the drug companies respond to the executive order and how our allied
nations respond to the executive order is open still.
I don't know what it's going to look like.
But what I can say is that the current equilibrium is not sustainable.
American taxpayers, once they understand what's actually been happening, and this is decades
long, they're going to say no.
And so the way that it plays itself out,
it's hard to project exactly,
but what I do know is that the government
is currently making every effort to make sure
that those prices get more equalized.
I think, just take it from the perspective
of a European citizen, right?
Someone, a French citizen or a Spanish or Portuguese or English citizen, right?
Or, you know, citizens of Great Britain.
For them, allowing prices to be more equalized so they share
the burden essentially creates an interest for
the drug companies to focus on the kinds of health conditions that they have.
Most of the research now, since it's paid for by Americans, the drug companies are focused
on problems that Americans have.
It aligns the interests of the drug companies to think more broadly about what they should
be investing in to include the health problems that Europe has.
Is it true that, I've heard this before, 90% of the psychoactive drugs, like the antidepressants,
the SSRIs and related things in the world are prescribed and consumed in the United States.
I don't have the specific number,
but it is pretty substantial.
I think as far as like drug profits go,
it's like two thirds or three quarters
of all drug profits are had in the United States.
And are most of those for the sort of Adderall
and psychotropic type stuff?
No, sorry, I don't know if psychotropic
is the correct term.
I'm going to get beaten up by people
if I don't get this right.
Let's just say psychoactive, excuse me,
I meant to say psychoactive drugs like SSRIs,
which by the way, in my view of the literature,
they're not always bad,
but we hear that they are bad in some instances
or many instances,
but like for the treatment of clinical grade OCD,
the SSRIs have been a tremendous tool.
They haven't cured OCD in every case,
but they've been a tremendous tool.
So I don't want to, I want to make sure
not to demonize them.
So I don't know the specific numbers
for psychoactive drugs, but for the industry as a whole,
it's the United States that drives drug company profits,
that pays for drug company profits.
I think it's like two thirds or three quarters, I forget the exact number.
So what are these American problems?
Are they obesity-related issues?
Yes, obesity, depression.
I mean, a lot of it is the obesity. The United States is, I think it's like Mexico
is now above us, but for a long time we were the most obese big
nation in the world.
So the diseases related to obesity, now admittedly, the European countries have those problems
too, but just to a lesser degree.
The drug companies, their research and development efforts naturally go to where they're making
the most money. And so what this will end up doing is it'll align the drug company incentives to focus
on the problems that Europeans have at slightly higher levels, relative to the problems Americans have.
Now these are all rich countries.
So it's not like there are unique diseases that happen in Europe that don't also happen
in the US. It's a question of relative levels of investment, right?
And so, you know, I don't think that's necessarily bad.
An excessive investment in just the things that Americans have at scale doesn't necessarily
translate to better health for Americans, right?
So you can see this: since 2012, there's been no increase in American life expectancy. From 2012 to 2019 it was, well, not literally, but almost entirely flat life expectancy,
whereas the European countries had advances in life expectancy during that period.
During the pandemic, life expectancy dropped very sharply in the United States, and only
just last year did it come back up to 2019 levels. In Sweden,
the life expectancy dropped in 2020 and then came right back up by 2021-2022 to the previous
trend of increasing life expectancy. Whatever investments we're making as a nation
in the research are not actually translating into meeting the mission of the NIH, which is to advance
the health and longevity of the American people.
We've had some tremendous biomedical advances that have now allowed us to treat diseases
that were previously untreatable, which is great.
That's a good thing, but as far as the broad health of the American
public goes, it hasn't addressed the chronic disease crisis that we face or the crisis in longevity that we face.
The next generation of kids, our kids, are likely to live shorter, less healthy lives
than we have lived as parents, as American parents.
And I think that is an indictment of this entire industry.
We focused on managing illnesses and treating illnesses, especially chronic
diseases, trying to hold on, and we're failing at it.
Europe, on the other hand, is seeing expanded life expectancy. This, I think, this change of trying to equalize drug prices, aligning our portfolio of NIH
investments to meet the health needs of the American people, it's a long-needed corrective.
Will we succeed? I hope so.
That's the reason I took this job.
I'd like to take a quick break and acknowledge our sponsor, AG1.
AG1 is a vitamin mineral probiotic drink
that also includes prebiotics and adaptogens.
As somebody who's been involved in research science
for almost three decades
and in health and fitness for equally as long,
I'm constantly looking for the best tools
to improve my mental health, physical health
and performance.
I discovered AG1 back in 2012,
long before I ever had a podcast
and I've been taking it every day since.
I find it improves all aspects of my health,
my energy, my focus,
and I simply feel much better when I take it.
AG1 uses the highest quality ingredients
in the right combinations
and they're constantly improving their formulas
without increasing the cost.
In fact, AG1 just launched their latest formula upgrade.
This next gen formula is based on exciting new research
on the effects of probiotics on the gut microbiome.
And it now includes several clinically studied
probiotic strains shown to support both digestive health
and immune system health,
as well as to improve bowel regularity
and to reduce bloating.
Whenever I'm asked if I could take just one supplement,
what that supplement would be, I always say AG1.
If you'd like to try AG1,
you can go to drinkag1.com slash Huberman.
For a limited time, AG1 is giving away
a free one month supply of omega-3 fish oil
along with a bottle of vitamin D3 plus K2.
As I've highlighted before on this podcast,
omega-3 fish oil and vitamin D3 K2
have been shown to help with everything
from mood and brain health, to heart health, to healthy hormone status and much more.
Again, that's drinkag1.com slash Huberman to get a free one month supply of omega-3
fish oil plus a bottle of vitamin D3 plus K2 with your subscription.
Today's episode is also brought to us by Levels.
Levels is a program that lets you see how different foods affect your health by giving
you real-time feedback on your diet
using a continuous glucose monitor.
One of the most important factors
in both short and long-term health
is your body's ability to manage glucose.
This is something I've discussed in depth on this podcast
with experts such as Dr. Chris Palmer, Dr. Robert Lustig,
and Dr. Casey Means.
One thing that's abundantly clear
is that to maintain energy and focus throughout the day,
you want to keep your blood glucose relatively steady
without any big spikes or crashes.
I first started using levels about three years ago
as a way to try and understand
how different foods impact my blood glucose levels.
Levels has proven to be incredibly informative
for helping me determine what food choices I should make
and when best to eat relative to things
like exercise, sleep, and work.
Indeed, using levels has helped me shape my entire schedule.
I now have more energy than ever and I sleep better than ever.
And I attribute that largely to understanding how different foods and behaviors impact my
blood glucose.
So if you're interested in learning more about levels and trying a CGM yourself, go to levels.link
slash Huberman.
Right now, levels is offering an additional
two free months of membership when signing up.
Again, that's levels.link, spelled of course,
L-I-N-K slash Huberman,
to get the additional two free months of membership.
Well, I really appreciate that you explained so clearly
what's going on with this drug price differential
and who's paying for it.
I was not aware of that.
Perhaps I should have been, but I was not aware of that.
And as we talked about a little bit earlier,
most of the general public,
even the science and engineering mathematics trained,
they can connect two or three dots,
but they're also very busy.
And the general public, like I said, I believe are smart,
but it has to be spelled out very clearly the way you did
for people to really understand.
I'm a health economist actually.
That's my job.
Right, well, I think, and I mentioned that in my introduction,
but I think it is very important for people to understand
that you look at things through the lens of science
and medicine, but also epidemiology and economics.
You know, there's a saying in laboratories,
which is that, you know, just adding more money
doesn't improve the science, but it certainly allows you to take bigger risks in service to health and discovery.
And without money, no science gets done. I mean, no money, no science. You can't pay graduate students, postdocs, et cetera.
I don't want to spend too much time on the structure of basic laboratories, although that's my leaning.
I could spend hours talking to you about what's going to happen with the universities, et
cetera.
We'll come back to that.
But there is one piece that we opened up earlier that I think it's important that we close
the hatch on, which is the notion of indirect costs being, now, well, it's pending litigation,
but leveled to a lower number, 15% if the administration has their way, or back to the variable rates depending on the university
if this lawsuit has its way.
And here's what I hear a lot
to just put in the simplest of terms.
Stanford, Harvard, UT Austin, big universities,
often the private universities have big endowments.
So money that's been given by donors,
some might have come in through tuition, it's been invested.
They sometimes will spend the interest,
but as you and I both know,
no university likes to spend the endowment.
Just like no one really likes to spend their savings, right?
People like to spend the interest they make
on their investments from their savings.
Nobody likes to spend their savings, universities included.
The general public tells me all the time, not just on X, but on all platforms and whenever
I interact with the public, why should we pay for research at these universities that
have these large endowments?
To which I say, now it's true Stanford has a very large endowment, Harvard as well, UT
Austin and other places, but many universities, fine universities, superb universities throughout
the United
States do not have extremely large endowments.
And as you pointed out, there's excellent work, important work, I should say, being
done at those places.
So to cut the IDC to 15% for everybody, I can see where I'd say, well, why don't they
just dip into their savings, the endowment?
But if you're, I'm not going to name names, but if you're at a smaller public university,
and in particular in certain areas of the country,
not on the coast, unless you're at like a WashU
in St. Louis or UT Southwestern, and they've got riches, honestly, they have a lot of money,
I'm honest, they have a lot of money,
there isn't a savings account to go into.
The buildings don't look the way they do
at these other universities.
You don't have these impressive lawns and thousands of gardeners, which we're so blessed
to have at places like Stanford and Caltech that have tons of money.
So to cut the IDC across the board for everybody isn't just sort of trying to restore order
to the rich.
I do think it potentially punishes the less wealthy universities and important
research.
I say that in service to them and frankly just being at Stanford, it wouldn't be right
for me to be like, oh yeah, 15% will dip into the savings.
It doesn't quite work that way if you're at a public university.
Well, I think you're hitting on the exact policy question, the right policy question.
The question is how should the federal investment in fixed cost of research
be distributed? Right now, it's distributed in a very unequal way where the top universities
have access to that money because they have scientists that can win NIH grants. It's a
funny thing because if you think of it as like a support for the fixed cost of research, you have to have scientists who are good at getting support for the marginal
cost of research in order to get the fixed cost of research.
But if they're fixed, why would you do that?
Why wouldn't you have the money go more equally spread across, right?
The endowment money is another more complicated question.
I think that endowment monies often are focused on particular projects.
There are restrictions on it.
But you're absolutely right.
It does make a buffer for some of the bigger universities that allow it to survive the
vicissitudes of NIH funding or the economy, more so than for universities that don't have that endowment.
But from the federal perspective, the key thing is how should the funds be distributed
across universities?
There's a program, called the IDEAS program, that the NIH, the National Institutes of Health,
has, and I apologize because I don't remember the acronym, but I'll tell you what it does.
It says for research institutions in the 25 states that are in the bottom half of the
distribution of NIH funding, it gives them a leg up in being able to get access to this
federal funding for the fixed cost of research.
I think that's a great program because what it does, it says, look, the federal government
shouldn't just be funding the top universities. It doesn't make sense from the point of view of trying to get the biggest bang for the
buck in scientific knowledge.
Like, this isn't a narrow thing, it's an important thing.
I think scientific group think happens when scientists are all just on the coasts and
the only scientists you interact with are scientists who already agree with you.
Geographic dispersion of scientific support allows more richer conversations about science
that allows different scientific ideas to develop just simply because it's more geographically
dispersed.
It combats scientific groupthink. There are other reasons too, as you said, like that there are other excellent
scientists at universities that aren't, you know, the Stanfords or Harvards or whatever,
and if you gave them more of an environment where they could do their work,
they would make tremendous advances, right? So I think for lots of reasons it makes sense to do that.
I don't want to comment on the specific 15%, which is subject to
litigation.
I will say that the key policy issue is exactly
the thing you said.
How should the money be distributed
for fixed cost of research across the universities?
Like one system, you can imagine,
would be where different universities compete on costs.
So a university that's able to more inexpensively provide
a square foot of lab space, fully supported
with radioactive disposal and all that stuff,
maybe the NIH ought to be giving money to that university more
than a university that has to provide it
at much more expensive rates.
That's not the current system, but you can imagine a system
like that.
So I think this fight over this 15%,
I think it's a great time now to rethink
how the NIH and the federal government
supports the research infrastructure of the country.
For the first time in, I think, 40 years, it's now part of the public consciousness,
this thought.
And I don't think, I've not seen anybody who says that we shouldn't have federal support
for universities.
The question is how should it be structured and to what extent?
Those are, I think, legitimate questions for public policy debate.
Yeah.
Well, before moving on from funding and the relationship between tax dollars and universities,
I wanna ask one more question,
then we'll move into issues of public health specifically.
But having been on study section,
I realize I never explained what study section is.
Study section is when a group of scientists convene,
it used to be in different cities or virtually, and they review grants.
Typically, the people who review the grants are expert or near expert in a given area,
typically three primary reviewers, a bunch of people vote on the grant.
And to make a long story short, whether you get money to do research from the federal
government, aka the taxpayers, is voted on by a jury of your peers.
This has distinct advantages, in my opinion,
because real experts or close to experts
are evaluating your work,
and they either have to advocate for it
or they actively try and kill it.
From the perspective of a reviewer,
you're given 12 grants,
and you know that only three of those can be funded or so.
And so you literally have to advocate
for the one or two that you feel most strongly about
and you find ways to legitimately make sure
that the other grants are not scored as well.
And you evaluate each one on the basis of its merits,
but you go into those study sections knowing like,
goodness, like this grant,
I sure would like to see this one,
and this other work is kind of pedestrian,
it's kind of like all the others.
Now this is a great model in principle.
However, you talked about groupthink,
it lends itself very well to people
who are very good at grant writing,
which is important, grantsmanship is important,
continuing to get money. And in particular, new ideas, ideas that are outside the vein
of what a researcher has been doing for the last five, 10 years, the promotion of
new ideas, of chasing new concepts, new hypotheses, is not well served.
It tends to make science move very slowly and very incrementally.
That's one issue.
However, I realize I'm weaving two questions, but there's what you described before, that
the majority of science that's funded is at these universities on the coasts, a geographic
effect, a groupthink effect, and what about the rest of the country and these other places?
The study sections, the people who review the grants, intentionally include people from
throughout the country.
It's related in fact, I think to the distribution of the electoral bodies and people who lobby
in Congress.
So in other words, there's no study section on a given topic, say Alzheimer's, where you
only see people from the coasts and don't also see somebody from the Midwest,
somebody from the desert Southwest.
There's always been geographic coverage in the people who decide which grants get funded.
There's a historical component here, but the question is a very straightforward one,
which is given that a jury of peers decides what gets funded, that checks off the box
of are they experts?
Yes, more or less, but it also means that nothing really that new can get funded.
Yeah.
I mean, I think you've hit on a real problem,
which is, I think, let me contrast with Silicon Valley,
right, so in Silicon Valley,
you're an angel investor or a VC or something,
you're a venture capitalist,
and you invest in a portfolio of 50 projects,
and 49 of them fail, and the 50th succeeds,
it becomes Google or Apple or something.
That's a very successful portfolio.
The process of how the NIH reviews grants embeds in it a certain conservatism, a desire
to make sure that every grant that's funded succeeds.
You can have a portfolio where every grant succeeds, but then the portfolio as a whole
is not as productive as it ought to be.
Because how do you make every grant succeed?
Well, you just fund incremental work that you know will work.
We call that turning the crank.
There was a professor at the Salk Institute, a superb institution down in San Diego,
who said to me, you know, there are two kinds of science.
There's the kind of science
where you really test a really bold hypothesis,
and most of the time it will be wrong,
but if you hit something, it's apt to be spectacular,
maybe even open up an entire field, maybe cure a disease.
This has happened before many times over.
Or there's the science that will get you funded
where you turn the crank.
You look at a different protein in a pathway
that is marginally interesting,
but is predictable in terms of its ability
to create papers; students need papers, postdocs need papers.
Most of them don't want to go on to be lab heads.
So they just kind of need papers and a PhD.
And you learn something along the way and hey,
you might stumble on something really interesting,
but it's kind of like stand on one foot,
stand on the other, spin around.
And without money, there is no science.
So you could understand why people would be incentivized
to do this kind of more incremental,
I'll just call it pedestrian,
kind of like, really, they're showing this again?
You go to the meetings, it's like,
they've been doing this stuff for like 15 years,
but they keep their NIH grants.
And then at the end they go,
we were funded for 30 years.
I've had this, when people brag
about having the same grant for 30 years,
I just go, oh my goodness, you should be embarrassed.
Now, how about seven different grants
over the course of 30 years?
And tell me that one of them led to something interesting.
But don't kid yourself into thinking that having a grant, an R01 that lasted 30 years
with five renewals, it's like, I look at a lot of those careers of some of my senior
colleagues, and I'm like, you made the interesting discovery in the third year of the first iteration
of the grant.
The only thing you've proven is that tenure keeps people around too long.
This is coming from a tenured professor.
Yeah.
So like, what gives?
I was formerly a tenured professor until recently.
But you gave it up by choice.
I did, yes.
Okay, so before the pandemic in 2020,
for about a decade before,
I'd been working on measuring the innovativeness
of scientific portfolios.
I had a paper that was published on the eve of the pandemic asking how innovative is the
NIH portfolio in particular.
And so, let me just describe the methodology because it's easy to understand.
So take every single published paper published in Biomedicine in 1940, take all the words
and word combinations in it and just list them.
Okay?
Then you do the same thing for all the papers published in 1941 and subtract off all the
1940 words and word combinations.
What you're left with are the unique words that were introduced into the biomedical literature
in 1941.
You do this for 42, 43, 44 into 2020, and what you get is a history of biomedicine.
That comes right out of the words that were actually published.
You can do this because computers, right?
And so you have an age for every single idea that was introduced in biomedicine that
just comes out of this automatic process. You go back to the papers and ask, how new
are the newest ideas in the papers when they were published? Right? So just to take a concrete
example, polymerase chain reaction in 1982, 83 was a new idea.
And so if you were Kary Mullis, publishing a paper with the words polymerase chain reaction
in 1982, that's a paper that's relying on new ideas.
If the newest idea in your paper in 2020 is a polymerase chain reaction, well, that's
an idea that's almost 40 years old, 40 plus years old, right?
And now it's in the method section.
Barely.
Right.
Right.
Because it's just like Xerox, right?
You barely mention it, right?
So the point is that you can use this method to ask how new are the ideas in every single
biomedical paper that's ever been published.
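Just to make that concrete for listeners who like code, here is a minimal sketch, under my own simplifying assumptions, of the word-novelty idea as described here. It is an illustration, not the actual pipeline from the paper, and the function names are mine.

```python
# Minimal sketch of the word-novelty method described above (illustrative only):
# record the year each term first appears in the corpus, then score a paper by
# how old its newest term is.
def term_birth_years(papers_by_year):
    """papers_by_year: {year: [set of terms in each paper, ...]} -> {term: first year seen}."""
    born = {}
    for year in sorted(papers_by_year):
        for terms in papers_by_year[year]:
            for term in terms:
                born.setdefault(term, year)  # keep only the earliest year
    return born

def newest_idea_age(paper_terms, paper_year, born):
    """Years since the most recently introduced term in the paper first appeared."""
    births = [born[t] for t in paper_terms if t in born]
    return paper_year - max(births) if births else None
```

On this kind of scoring, a hypothetical 2020 paper whose newest term is "polymerase chain reaction," introduced around 1982-83, would carry an idea age of roughly 37 years, which is the sort of measurement behind the finding described next.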
So we did that.
Me and my colleague, Mikko Packalen at the
University of Waterloo, asked, for NIH-funded papers, has the age
of the ideas in the paper shifted over time? And the answer is yes. Papers that were published
in the 1980s with NIH support tended to work on ideas that were one, two, three years old. Papers published
in the 2000 teens were working on ideas that were seven, eight years old. At the same time,
in the 1980s, the age at which you could win a large grant at the NIH, they're called R01s,
you know all about that, but for folks listening, the reason why these large
grants are important is because
they are the ticket first to getting funding so that you can actually test your ideas and
do the experiments you want to do, but also they're the ticket to getting tenure at fancy
universities.
In part, I should say, because R01s, these large grants, carry large amounts of IDC,
indirect costs.
Let me put it differently.
If a professor comes to a university
and does absolutely groundbreaking work,
but does it entirely on foundation money,
which carries very little indirect funds
to provide to the university,
there's a chance they'll get tenure, but very small chance.
Professors that have R01s stand a much higher probability of getting permanent employment
at that university.
I'll call it tenure.
There are ways to lose tenure, but in principle, it's academic freedom.
Tenure was never really about a job for life.
It was really about the freedom to explore ideas.
It turns out there's some subtleties in that.
But I think it's so important for people to understand, so much so that when I heard about
this perhaps reduction in IDC to 15%, my first thought was, whoa, that's a big cut.
My second thought was who will get tenure and who won't get tenure.
Now it will have to be based on the merits of the work.
Now there is a correlation, right?
People who do spectacular work tend to get grants.
People who get grants tend to get more money and then you can explore more, et cetera.
And the dirty secret in all the R01 stuff is that everybody knows that the R01s are
used to fund the next bout of research.
But what you propose in an R01, sorry to break it to everyone,
is work that's already completed.
This is the inside secret of every scientist.
Oh, every scientist, because you want to say,
look, I can do this.
I mean, I've had R01 support also.
I mean, I guess- Yeah, you show them
the preliminary data.
This is what I did.
This is what I'm going to do for the next five years,
but the dirty secret is,
this is what I already did for the past five years.
I get the money, I do the next thing.
This is the shell game that every scientist learns to play because otherwise, as you say,
you get it in the neck, which is grant speak for you're done.
You can't take students or postdocs.
You've got to fire your technicians.
You close your lab and you become what's called dead wood.
There's a game that's being played and it's not a dirty game, but it's this kind of like,
kind of don't ask, don't tell game.
Everyone knows that people are doing this.
And look, scientists are good people.
I wanna be very clear.
They're just trying to survive.
Most scientists, I think-
Most of them.
I believe most scientists are trying to get it right.
I think that local culture can contaminate things,
and this grant, this need to be funded-
I'll grant you most of them.
Okay, yeah.
And you know, I'm here in part as an advocate
for the public and in part as an advocate
for the science community.
I can't split myself any differently.
So am I, Andrew.
Yeah, let me just ask a quick question, right?
But with lower IDC, who will get tenure?
I mean, who will get tenure?
What's it gonna be based on?
Yes, I mean, that background is really helpful.
But here's a fact.
In the 1980s, the age at which scientists won
their first large grant, R01, was mid 30s.
Okay, I got mine, let's see, I started my lab
when I was 35, I got mine at, my first R01,
I got when I was 37.
But I started my lab in 2011.
Right, in 2011 to 2020, you were young for an R01.
I was, yeah.
Right? It's typical now for a scientist to be in their mid-40s before they get their first R01.
I didn't have a family. I worked 90 hours a week.
Right. So the point is that young early career scientists take much longer now to be able
to get support to test their ideas out than they did in the 1980s. This is important for innovation because it turns out that,
this is another paper that I published before the pandemic,
it turns out that it's early career scientists
that are most likely to try out new ideas in their work,
in their published work.
Right, so in fact, this is depressing
for me, as a man with gray hair,
but it's monotonic.
Like the first
year after your PhD is when you're most likely to have newer ideas in your papers and then
every year after that for every single year of chronological age the age of the ideas
you tend to work on tends to increase by about a year.
Well, the late Ben Barres, my postdoc advisor and beloved colleague at Stanford, who unfortunately
passed away in 2017, he was roughly 60 when he died,
he used to say, he's like,
nobody does anything after they get full professor.
And I was like, that's crazy.
We have Howard Hughes investigators,
people that wouldn't know about,
he goes, all the critical work is done early.
I said, what about you, Ben?
You're there.
He's like, oh yeah, I'm done.
You know, this was before he knew he was dying.
You know, I mean, this is the dirty secret
because when you're young, you're hungry.
Given the space from your previous mentors,
you're gonna go for it because you have to go for it.
And if nothing else comes of today's discussion,
already a lot has come of today's discussion,
I wanna put in a really strong vote for encouraging,
I'm gonna catch so much heat for this,
but the older labs talk about funding the next generation of science while taking most
of the pie for themselves.
I really believe like if I could just, I'm not going to beg, but I am going to be emphatic.
We need young labs to be funded.
This is an open door. Yeah
In my Senate testimony, before I became NIH director, I said this is a major initiative. I mean, I think that early career,
let me put, probably, too sharp a point on it, right?
So right now what we do is we take the careers of young scientists and
effectively
put them at the service of
older scientists, more established scientists.
So the early career scientists are essentially doing the work of the older career scientists.
So you have to have postdoc one, postdoc two, postdoc three before you have any chance of
getting an assistant professor job where you could test your own ideas out.
Essentially the labor of young scientists is devoted to the ideas of older scientists
in the current system.
That wasn't always true.
And the NIH has played a role in that.
And it's part of the reason why we have had essentially this sort of more incremental
progress than I would have hoped for.
When I did my PhD and did my MD in the early 90s and then into the mid-90s, I envisioned
a career where there would be huge advances in science that I would spend my entire career
thinking about and chasing, right?
And there have been some huge advances.
But frankly, I have this sense that there have been
fewer of them than I would have wanted,
expected as the 1990 version of me.
Especially in the biomedical sciences,
because I think we see the expansion of AI,
we see the expansion of computer science, et cetera.
I could not agree more.
I actually think some of the programs
like the post-bac programs at NIH,
I don't want to destroy this program by saying this, but these are where people finish college and
they decide to go two years of research before they decide to go to graduate school.
This, in my mind, delays and kind of drains the initiative of a lot of people. Look, there's
nothing more beautiful than someone graduating college who's still excited about biomedical
science.
Taking that energy, usually they don't have a lot of other commitments yet.
I think we should fund them so they can have a healthy life.
They don't need to have a lavish lifestyle, but a healthy life and spend as many hours
as is reasonable in the lab making discoveries to get through their PhD, do like it used
to be a short postdoc, start a lab and hit the ground running in their 30s
and get major funding to be able to test new ideas.
It's not just the Silicon Valley model.
It captures everything we know about brain plasticity.
Their brains are still plastic.
They're full of energy.
They're full of dopamine naturally.
And I'm not saying that everyone past 60
is like dead wood, old wood.
There's some amazing work being done,
but it's very top heavy.
And of course, no one wants to give up their lab.
I know people in their seventies and eighties,
they don't know what to do.
If they retire, they think they'll die.
I don't care, get a hobby, let the next generation in.
Actually, there's one good result.
One result that made me a little bit comforted
was in this paper that I did with Mikko Packalen
on age
and the trying out of new ideas.
That is that teams of young scientists, first author relatively young, teaming with a mid-career
or later career scientist as a senior author, that combination is most likely to try out
newer ideas in their work.
It's like you kind of need the-
Interesting.
So keep the old folks around.
By the way, I'm turning 50 in September,
so I'm nearing these numbers.
You're still a young man.
All right, well, I'm very passionate about this,
in part because some of my former graduate students
in postdocs are now professors at universities
working extremely hard on extremely interesting questions,
but I know they would be pursuing even bolder questions
related to immune system function and autism, related to visual repair to cure blindness.
I mean, these are not trivial issues that they're trying to pursue.
They deserve and their peers deserve the majority of the taxpayer dollars for discovery because
I think that therein lie the discoveries.
And there is this culture in academia of people kind of pinning awards on each other
as you go up the ladder.
Some of those awards are nice.
A good friend of mine was just elected,
he's now a member of the National Academy of Sciences.
He called me, I said, congratulations.
I was like, this is fantastic.
And he said, feels good, but like, you know,
I wanna be in lab, I wanna be in clinic.
I mean, that's what's important.
The titles are, in the end, they're meaningless.
I've seen so many colleagues die.
Like their offices get cleaned out within a week.
They're gone.
And so the discoveries that young scientists make with tax dollars, to me, is the most
important and beautiful thing that can happen.
I mean, it will soon migrate into a discussion about public health, but I'm so relieved to
hear A, that journals are going to be accessible to the public and B that you feel this way
about young scientists because I got nothing against the old.
I'm not an ageist, but let's face it, youth is when discovery happens.
I think let's bring this back to something you brought up earlier and I haven't yet addressed,
which is how we evaluate science at the NIH, right, these study sections.
They're inherently, as you alluded to this, they're inherently conservative, right?
So just to put a real fine point on it, I think in the 2000-teens there was a policy
that in order to be an active member of a study section, a standing member of these
grant review panels, you had to have an active R01, a large grant, an active large grant.
Think about that, right?
So I am a scientist.
I'm really well accomplished in my field.
I have a large grant.
By every measure of scientific success, I'm a success.
And now I'm sitting judging young scientists pitching their ideas, some
of which, if they turn out to be true, maybe undermine my ideas.
It's really hard to open your brain and say, oh, okay, I'm going to support a project that
might undermine my entire career.
Everything we know about cognitive bias
supports what you're saying.
There's another aspect too, which is, you know,
letting go of one's own ideas,
especially if you're funding and your ability
to pay your people depends on them is tricky.
There's another piece of this, and this is not just inside baseball.
If you're on study section,
your grants are evaluated differently.
A lot of people are on study section because you get what's called a special, where people
you know and you know who they are, a small team of people that generally like you and
you like them, you even can suggest names for who's going to review your grant.
Being on study section helps you get grants.
You have to get one first in the open water of grant study section. But I hope what
people are starting to understand is that the system isn't corrupt. It's just structured
in a way that doesn't favor bold, innovative change. And those words, bold, innovative
change are thrown around a lot. I was part of the National Eye Institute's Audacious
Goals Initiative. We'd get into a room every year. We'd sit around, how are we going to
cure blindness? What are we going to do about retinitis pigmentosa, macular degeneration?
And then everyone went back to doing the same work they were doing before.
A lot of times these phrases get thrown out there, websites get put up, and nothing changes. When I talk to the public about science, there's a couple of modes.
Like now, post-pandemic, a lot of it is just purely cynical.
But there's another mode of thinking about scientists that are just sitting around thinking
deep thoughts, making big advances.
But in fact, what you're saying, and I agree with, is true.
It's not entirely cynical, but the fact is that there's a sociology to science.
This is sort of like a careerism inside science.
And sometimes it can lead to good.
Your competition with other scientists to make the next big advance.
But I think in the current way we structure incentives in biomedicine, very often we
discourage that kind of sharp innovation.
We encourage essentially incremental advances so you have a safe scientific career for the rest of your life, rather than
take a big scientific risk where I might fail, but if I succeed, I cure macular degeneration,
I cure type 2 diabetes or whatever, right?
The structure of this, essentially, if you want to put it down as the key problem, is
that in biomedicine, academic biomedicine, we are too intolerant of failure.
If you have a big idea that doesn't work, essentially you're out.
That's not true in Silicon Valley.
In Silicon Valley, a failed startup doesn't mean that you can't get another draw at trying
to make a successful startup.
Silicon Valley does not punish failure that sharply.
And that is the key to its success.
Whereas in biomedicine, the current version of it we have now, we punish failure way too
sharply.
Yeah, I completely agree.
And I should definitely point out,
I never had trouble getting grants.
So I'm not coming to this with any cynicism.
I moved on to podcasting and I still teach
and closed my lab out of a joy of what I'm currently doing.
It wasn't that I couldn't fund myself.
I did see excellent grants get killed.
I also saw some excellent work progress.
I definitely agree with this analysis that you did.
Thanks for doing that paper.
I'll take a look at it.
We'll put a link to it.
That work early in one's career tends to be the really innovative stuff.
There's just something about the younger brain that is more ambitious.
There's more risk taking.
And unfortunately now there's so much pressure
to get funding for IDC reasons and to get tenure
that oftentimes young investigators will lean
toward the more pedestrian turn the crank type of science,
get tenure and then think they're gonna go do something bigger.
But typically that doesn't happen.
I am very relieved to hear that young investigators, young scientists,
new ideas are going to be prioritized, hopefully through where it really matters, like brass
tacks.
I think early career R01s should be bigger than late career R01s.
It should be inversely related to the size of a laboratory.
I think smaller universities should get a bigger piece of the pie.
I do, if the work is up to par, right?
You don't just want to give them money just because.
But I imagine if R01s were, I don't know, 50, 75% bigger for new investigators, maybe
they weren't four years or five years.
Maybe they were six years.
You could really take a run at something or multiple things.
And then maybe older investigators who had grants for a while, you don't want to turn
them out to pasture too fast.
You want to pivot them slowly.
I'm kind of joking.
But maybe their R01s should be smaller and they should be more selective about what
they're doing because with a lot of grants top heavy in the older generation, they can
kind of just spread it around.
Well, that postdoc went back overseas and that didn't work out.
I hear about a lot more kind of quiet exit type failures
as opposed to we tried really hard.
We thought this signaling pathway was gonna be the thing.
It wasn't.
Close that hatch, pivot quickly to the next thing.
There's a few things we could,
I mean, one of the nice things of being the NIH director,
there's lots of smart people
who've given me fantastic suggestions, especially for this specific problem, which I think is
the key, probably the most important thing I'm going to be dealing with.
That plus the replication crisis, we've talked about.
And I'm not sure exactly what the exact portfolio of things we do will fix this, but we have
to support young scientists, early career scientists.
We have to punish failure less, and we have to change the incentives around so that people want to test the big
thing, the big thing that translates into advances for some of the most intractable
health problems we face. And if we don't do that, the NIH, we're going to look back and
say, well, the NIH portfolio of investments the American taxpayer made has not paid off, just on a macro scale.
I mean, you can frankly say this for the last, at least since 2012, we have had no increase
in life expectancy in the United States.
The NIH portfolio in that sense did not pay off during that period.
I've heard, and I think it was the former director of NIH in a public forum at the end
of last year, it was November of last year, I tuned in for that, said that we've developed
more treatments to extend the life of older people or at least to limit their suffering
somewhat.
So, cerebrovascular disease, cardiovascular disease, things related to dementia, small
differences to keep them alive longer, but the real dearth
of meaningful treatments sits around younger populations
who are dying deaths of despair
or whose health is in really just in a dire condition
due to obesity, diabetes and mental health issues.
So in other words, young people are getting sicker earlier
and staying sicker and older people are getting sick
but holding on to some remnants of health longer.
And most of the treatments are geared
toward the older population.
Is that true?
Yeah, that's true.
That's exactly right.
That's a terrible situation
because it essentially is not preparing for the future.
Right, so what we have is a system,
is a sick care system.
The advances we've made have allowed people to stay sick longer. It hasn't translated
into a longer life, right? There was a hope, I think, when I first started
doing research in 2001, in population aging, there was this idea of a
compression of morbidity. That is, you live a long life and the time you
spent really
sick and disabled was compressed at the very end of your life. Rather than spending a long
time disabled and sick and dying after having spent like a decade or more very sick, the
idea was that advances in our culture would produce results so that you live a long
life and you only
spend a few months really sick at the end of your life.
That hasn't panned out.
In fact, we have very little increase in life expectancy and for many, many people, unfortunately,
a very long period of time in a state where the quality of life is not that high, not that good, right?
Dementia, chronic disease, leading to, say, diabetes leading to all kinds of kidney failure,
macular degeneration, you name it, peripheral vascular disease, heart disease.
You end up with a situation where all of these amazing biomedical advances that we've had
over the last decades have not translated to actually improving the health and well-being
and longevity of the American people.
I think that the biomedical infrastructure, research infrastructure of the country has
to translate over for results for real people, for the American people. Otherwise, people can ask us, why are we having these,
why are we doing what we're doing?
It can't just be that we're doing cool things.
I mean, not that we're not doing cool things.
A lot of cool things are getting done.
But if they don't somehow eventually translate over,
again, I don't mean to distinguish basic science work.
I think basic science work is really important.
But eventually it has to translate over,
or else people will say, why have we made these vast investments?
The key thing is if we're not actually improving health as a result of the research we do,
then we haven't accomplished our mission, right? And the research agenda of the NIH,
as we've talked about, it's like we talked about international
relations and drug pricing as determining in part what scientists work on.
We talked about how politics determines the agenda that scientists work on, right?
So you talked about HIV, right?
So the political focus on HIV led to the vast investments the NIH has made in HIV with some
positive effect, actually a lot of positive effect.
And then also the sociology professions, the scientific profession, these are all complicated
things that result in the portfolio.
But if the portfolio ultimately doesn't meet the health needs of the American people, then
it's not doing what it's supposed to be doing.
Part of my job is to make sure that it does meet those health needs.
The Make America Healthy Again movement, that's what it's asking for, that the health institutions
of this country actually meet the health needs of the people where they are.
And in the large part, we've not successfully done that in this country for decades.
Otherwise, we wouldn't have this major chronic disease
crisis we're currently facing. And so that's, you know, it's a complicated question. It's
not like, you know, it's not just solved by funding one grant or making specific decisions.
It's about the incentives, the system at large, creating incentives so that scientists turn their ingenuity toward those health needs
rather than just advancing their careers incrementally.
I'd like to take a quick break and acknowledge one of our sponsors, Element.
Element is an electrolyte drink that has everything you need and nothing you don't.
That means the electrolytes, sodium, magnesium, and potassium in the correct amounts, but no sugar. Proper hydration is critical for optimal brain
and body function. Even a slight degree of dehydration can diminish cognitive and physical
performance. It's also important that you get adequate electrolytes. The electrolytes,
sodium, magnesium, and potassium are vital for functioning of all the cells in your body,
especially your neurons or your nerve cells.
Drinking element dissolved in water makes it very easy
to ensure that you're getting adequate hydration
and adequate electrolytes.
To make sure that I'm getting proper amounts of hydration
and electrolytes, I dissolve one packet of element
in about 16 to 32 ounces of water
when I first wake up in the morning.
And I drink that basically first thing in the morning.
I'll also drink element dissolved in water
during any kind of physical exercise that I'm doing,
especially on hot days when I'm sweating a lot
and losing water and electrolytes.
Element has a bunch of great tasting flavors.
I love the raspberry, I love the citrus flavor.
Right now, Element has a limited edition lemonade flavor
that is absolutely delicious.
I hate to say that I love one more than all the others,
but this lemonade flavor is right up there
with my favorite other one,
which is raspberry or watermelon.
Again, I can't pick just one flavor.
I love them all.
If you'd like to try Element,
you can go to drinkelement.com slash Huberman,
spelled drinklmnt.com slash Huberman
to claim a free Element sample pack
with a purchase of any Element drink mix.
Again, that's drinkelement.com slash Huberman
to claim a free sample pack.
This is a perfect segue for a discussion
about the replication crisis.
It's a perfect segue because up until now and still now,
the independent investigator model
for those that aren't familiar is
Andrew Huberman gets hired as an assistant professor
who might get tenure at a university.
And then the so-called Huberman Lab,
before it was a podcast,
it was also a lab, actual laboratory space, physical space,
has to come up with a set of ideas that hopefully pan out.
You get funded for it, you get tenure,
and then you can pursue new ideas.
But it's an independent kind of startup of its own.
My neighbor, two doors down in the hallway,
works on something else.
One of the major issues I believe
that led to the so-called replication crisis
is that it is very difficult,
even with the best of intentions for two laboratories,
to do the same work in an identical way.
Five minutes longer on a countertop at room temperature
might change an antibody that could lead
to a different outcome.
I mean, there are so many variables.
The solution to this is collaboration.
Instead of having independent investigators,
you have clusters of laboratories,
hopefully distributed throughout the country,
working on the same problems, collaborating.
There are grants of this sort, but here's the problem.
As you point out, it's a sociological issue.
The graduate student in my lab needs a first author paper if they want to eventually get
their own lab.
The postdoc in another laboratory doesn't want to be a middle author with 20 other authors.
To continue to flesh out the world of science with scientists, the independent investigator
model works.
Those independent laboratories are naturally going to come up with different answers, talk
about them at meetings, and maybe there'll be some convergence of ideas.
But wouldn't it be beautiful if laboratories collaborated to try to solve important problems
related to public health, and everyone was incentivized through perhaps not easier,
but more plentiful funding to do the research,
salaries that these people can live on reasonably
while they're graduate students in postdocs,
and maybe even laboratories that are more structured
around a problem, so it's not called the Huberman Lab,
it's called the Laboratory for Curing Blindness.
And there's another laboratory for curing blindness at WashU and another one in a university
in Illinois.
And we all collaborate and we try and cure blindness as opposed to making it all about
the principal investigator, the independent investigator.
The rock star model of science kind of works and it kind of is part of the problem in my opinion.
I agree with you about collaboration in the following sense.
So science is a collaborative process, but the incentives within science for individual
advance can often lead to a sort of a structure that elevates careers
without necessarily producing truth.
So let me flesh this out.
Very tactfully put.
OK, so there's a colleague of ours at Stanford
named John Ian Eades.
He wrote a paper in 2005, absolutely brilliant scientist,
I think the most highly cited science, living scientist
in the world, right?
So he wrote a paper in 2005 with the title, Why Most Published Research Findings Are False.
I mean, when you make a title like that for a scientific paper, it better be convincing.
And in just a few pages, it's an utterly convincing paper.
And it's not because scientists commit fraud, that's not the reasoning behind
it; it's because science is hard. And it's hard in exactly the way you just said,
Andrew. So you publish a result, you believe it to be true, you have some statistically
significant result at some level, you know, we say p equals 0.05. What does that mean?
That some percentage of the time the result will be false, even though you believe the result is
true and it's been peer reviewed by your colleagues.
The peer review actually doesn't involve, as you know, the peer reviewers taking your
data, re-running your experiments.
It doesn't mean any of that.
They just read your paper, looked for logical flaws, didn't find any, and then they recommended
to the editor that it be published.
So the peer review is not a guarantee that it's true.
You have some significance threshold that your data meet.
Even with that, some percentage of the time,
the published result is gonna be false.
Now, if you accept that science, a priori, is hard,
any result that you publish is most likely gonna be
a false positive result.
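Since the argument here is essentially arithmetic, a quick back-of-the-envelope sketch may help; this is my illustration of the Ioannidis-style reasoning, not a calculation from the episode, and the numbers are made up for the example.

```python
# If only a small fraction of tested hypotheses are true, a p < 0.05 literature
# can still contain many false positives. PPV = share of "significant" findings
# that reflect real effects.
def positive_predictive_value(prior_true, alpha=0.05, power=0.8):
    true_pos = prior_true * power          # true hypotheses that reach significance
    false_pos = (1 - prior_true) * alpha   # nulls that reach significance anyway
    return true_pos / (true_pos + false_pos)

# With only 1 in 10 tested hypotheses actually true, even well-powered studies
# at p < 0.05 give a literature where roughly a third of positive findings are false.
print(round(positive_predictive_value(prior_true=0.10), 2))  # ~0.64
```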
So-called negative results aren't incentivized.
It's very hard to get a good paper published
showing that something isn't true.
It happens.
I had a paper published in Science,
which argued that at least one aspect of a theory
was not true.
It was a very prominent theory. Turns out other aspects of that theory were true.
So sometimes it happens,
but no self-respecting graduate student or postdoc
who values their life is going to say,
hey, I wanna go in and try and disprove the hypothesis
of one of the more famous people in the field.
In fact, I didn't set out to do that.
It just so happened that's the way it landed.
And no one shows up in graduate school and says,
you know, I love these papers.
Let's replicate them.
Yeah, right.
So let's get back to that.
Cause that's, you're absolutely right about the incentives.
But before we get to that
and the incentives we're talking about,
let's analyze that, just to put a fine point
on the nature of the problem.
The published biomedical literature,
something that I've searched basically every day for
the last 30 years, 40 years, oh my God, 40 years, that published biomedical literature,
most of the time that I'm reading papers in that literature, the papers I'm reading, even
though they say their result is true, are likely not true.
Look, I had a professor in medical school who once told me,
it was one of my favorite professors, he told me,
look, half of what we're teaching you is false.
Well, okay, so I'm glad you're pointing this out.
I asked a very prominent neurosurgeon,
perhaps one of the most prominent neurosurgeons in the world,
I said, what percent, someone else asked him,
but I was right there, what percentage of information
in medical school textbooks do you think is false?
And he said half.
And then the second question was, what do you think the implication is for people, for
human health?
And he said, incalculable.
Right, exactly.
And that's true of the biomedical literature as well, right?
So the published peer reviewed biomedical literature is not reliable, is the bottom line.
So a lot of the things that we think we know, even with some fair degree of certainty, are
probably not true.
And the question is, like, which half?
Well, we don't know the answer to that question.
There's probably a mix.
Parts of papers are probably true and other parts are not.
Right.
It's not like all the bad papers come from one lab. Well, there are those labs, but they don't last long.
And this is done even with pure goodwill and no fraud at all.
The reason is a combination of the fact that science is hard and the incentives we created
for publication.
Those two together mean that the biomedical scientific literature is not reliable.
I've talked with drug developers who tell me that before they make vast investments in a phase three randomized trial, or even phase one or phase two studies, they conduct independent replication efforts of the basic biomedical literature to see if it actually is true.
Now, those are private replication efforts so that the drug developers know which parts
of the literature are true and false, but the scientific community at large doesn't
know.
We've set up a publication system that guarantees that much of what we think is true
is not true.
That's a major problem for science.
And it's linked to this idea that you have to publish or you're out.
It's linked to this idea that if you fail, if you publish failure, you're out.
It's linked to this sort of reward that we give to scientific volume, like the number of papers we publish, and
scientific influence. That's what citation counts are. There's a number of, I'm sure
you know of this, Andrew, so I'm explaining it to the folks who are listening, something
called the H index. Right? So you go to a site called Google Scholar. Every scientist
listening to this, I'm sure, has gone and looked at their Google Scholar page, they have a little card at the top right that essentially looks like a baseball
card to me.
And it has a few statistics.
And if you're not a scientist, you won't necessarily know what the statistics are.
But what they are, things like an h-index, is this: if you have an h-index of 10,
that means you have at least 10 papers published
in peer-reviewed journals with at least 10 citations each, but you don't have 11 papers with at least 11
citations each.
So in order to get a high H index, you have to have both a lot of papers and a lot of
citations through those papers.
It's a funny number. To bring back Watson and Crick: imagine the only paper they ever published was the structure of DNA.
Good paper.
The double helix. Let's say it has a million citations.
Not peer-reviewed, but good paper.
It's a fantastic paper.
And was never peer-reviewed.
Right.
But a million citations, imagine it was their only paper.
Well, they have one paper with at least one citation, but they don't have two papers with two citations,
so their H index is one.
Or you could have a million papers
in the Journal of Irreproducible Results,
each with one citation.
You have at least one paper with one citation,
but you don't have two papers with two citations,
so your h-index is one.
Or you could write a lot of reviews,
because reviews get cited like crazy.
Yes.
Okay.
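As a concrete illustration of that arithmetic, here is a minimal sketch in Python of how an h-index is computed from per-paper citation counts; the citation numbers are invented, and it shows why one blockbuster paper and a million barely cited papers both give an h-index of one.

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts, for illustration only:
print(h_index([1_000_000]))          # one hugely cited paper -> h = 1
print(h_index([1] * 1_000_000))      # a million once-cited papers -> h = 1
print(h_index([50, 30, 12, 10, 3]))  # a mixed record -> h = 4
```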
So now, what you have then is an incentive for scientists embedded in Google Scholar
that says, look, you have to publish a lot of papers, you have to have a lot of influence
because that's what a citation is.
It's a measure of influence.
You go to scientific meetings in order to sort of shop your ideas around, right?
And so we reward scientists for the influence that they have, and we reward scientists for
the volume of papers they publish.
What we don't reward scientists for is honesty about their failures.
We don't reward scientists for pro-social behavior
like the sort you suggested, where you collaborate and you share your data
openly and honestly. In fact, we punish scientists for that, right? So right now,
if somebody comes to me and says, Jay, I want to replicate your work, I've
trained myself not to think this way, but it's really hard not to given the
structure we're in. I'm gonna think of that as a threat.
What if they don't find what I've found?
Now I'm a failure, right?
The failure to replicate is seen as a failure of the scientist rather than a reflection of the fact that science is hard and it is difficult to get results that are true even with the best of will.
And we punish scientists for that.
So we essentially reward scientists for a set of things that creates incentives for
the replication crisis to happen.
I see.
Right?
So the solution to the replication crisis is to address those things: measure the pro-social things that scientists could do, and redirect the incentives away from simply influence and volume.
I'm not saying you shouldn't reward influence and volume.
I'm saying you should reward a fuller set of things.
It's like in baseball: if you reward a hitter for home runs but you don't also measure strikeouts, you're going to get a lot of strikeouts. You may get a lot of home runs, too, but that may be bad for the team in total.
So you want a full set of statistics measuring the things you actually want scientists to do in order to solve the problem. So let's say we had statistics that said, look,
do you share data with others in your published research work?
And we had that among the baseball statistics
we put in Google Scholar.
Let's say we ask, is your work subject to replication?
Actually, if your work is subject to replication,
you have ideas that are worth looking at by other scientists.
That's a success no matter what they find.
Right. And how frequently do you publish your false results, or results that turn out to be not true?
Right.
Imagine we had those statistics.
We would have a fuller picture of the capabilities of scientists and the outcomes of their scientific work,
and we would reward the prosocial things
that would solve the replication crisis.
And so what you have now is a real problem
that's not been addressed.
We've known about this now for decades,
but it's not been addressed adequately.
There've been a number of efforts by the NIH
over the last couple of decades to try to address it,
but it hasn't solved the problem.
Well, I feel like the issue that really cracked this open, the reason the general public might
have heard of the so-called replication crisis is this idea that there were some findings
in the field of Alzheimer's research that were false, but they were wrong potentially
for the wrong reasons.
As a scientist, you learn it's okay to be wrong
for the right reasons.
Meaning your measurement tool was inaccurate
but it was the best you had at the time
and you thought it was accurate.
Better tool comes along, you get a different measurement,
new result, because you were wrong for the right reasons.
But you're not fudging data, you're not hiding data.
There is this idea that in the field of Alzheimer's research
that somebody might have fudged data, made up data,
and that the field kind of went along with it.
That's not my understanding of what happened.
My understanding is that somebody fudged data and then nobody went back to check the primary
data in that paper.
And as a consequence, many years down the line, a number of subsequent findings were
nested on a false finding and the whole thing tumbled like a house of cards, more or less.
The process you just described is the replication crisis playing itself out, right?
So you make investments built on a foundation of sand, and you eventually get fancy drugs that are supposed to prevent the disease you're trying to prevent, in this case, prevent you from progressing to where you can't remember the names of your kids and you can't live your normal, full life as your memory goes away.
The drugs don't work for those things and your question is why?
They're built on the best science going all the way down.
It turns out the best science all the way down is not replicable.
The fraud aspect of it is actually not even the most important, it's important, but it's
not the most important part of it.
It's almost just an afterthought, right?
Ask yourself, why have there been so many scandals? One brought down the former Stanford president. At the NIH, again, just within Alzheimer's, there was a director of neuroscience who apparently had a hundred or more papers with Photoshop fraud in them.
So the question is, why have so many prominent scientists been brought down where their work
has been shown to be fraudulent?
It's not a moral failure on the part of any individual scientist.
The structure of incentives we've created produces those behaviors.
We created them is what you're saying.
Yes.
We said you will get advances in your career if you publish a lot of papers and have a
lot of influence.
And if you admit that you were wrong about something, your life is over.
Your career is over.
Yes.
I think one of the most beautiful things in science
was when Linda Buck, co-recipient of the Nobel Prize
with Richard Axel for the discovery
of olfactory receptors,
retracted, I think it was, three papers from her laboratory.
A postdoc either was sloppy or fudged data.
She retracted the papers because the papers were wrong.
People told her, this stuff doesn't replicate.
Not only did it not hurt her career, it helped her career.
She was right about the olfaction work that got her the Nobel Prize, but she was willing
to admit a mistake.
Someone in her laboratory made a mistake, ergo, she needed to retract those papers.
What happened in the case of our former colleague (but still, well, you're not at Stanford anymore) was, let's just put it this way: in every major laboratory that's publishing at a phenomenal rate, inside the field there is always discussion. Postdocs talk, graduate students talk constantly, and people know which work is solid and which isn't. There's something that just gets said at meetings, like, no, nobody believes that. And that gets passed around, so then no one follows up on it.
But it's rare that somebody goes and blows the whistle the way the whistle got blown on those papers.
And then the right thing to do, in my opinion, okay,
is you correct or retract the paper.
If you make a mistake, you correct the mistake.
There are ways to do that.
People publish corrections all the time.
Or you retract the paper if it's wrong.
I think that the system, as you pointed out,
has made it feel very dangerous for scientists
that are approaching the pinnacle of science,
like within reach of Nobel prizes, winning Laskers,
winning international awards,
as was the case in all these instances, that they could admit that they were wrong.
Andrew, it's all up and down the system.
Imagine you're a postdoc and you have to retract your paper.
You're essentially starting over or leaving science.
Yeah, you're leaving science.
It's existential.
So the problem of fraud in science, then, is a symptom of the broader problem of the replication crisis rather than the main driver of it.
So the right solution is not just to root out the fraud.
The right solution is to change the incentives of science so that we as scientists engage
in pro-social behavior.
Pro-social in this case meaning behavior that rewards truth rather than rewards volume and influence alone.
Music to my ears, how is NIH going to do that?
So we were talking about the innovation crisis. That's a much more complicated crisis. This
one actually I think is doable within the context of the NIH. I think you have to do three things. So first, you have to make it a viable career path to engage in replication work in creative
ways.
To some extent, there's some of this with like meta-analysis.
Meta-analysis is the science of analyzing the scientific literature to ask what the
scientific literature as a whole says about a particular question. That's what meta-analysis is. And so there are people who make careers
on meta-analysis. And so that's in a sense a kind of replication work.
Studying studies.
Yes, studying studies. But it's really difficult to make a career out of doing replication work as a general matter within
science.
You can't win a large grant at the NIH currently where you say, oh, I'm going to do meta-analysis, I'm going to do replication work. Which means you're not going to get tenure at a top university, because you can't win the large grant that you're required to get in order to earn it. So you're not going to focus on replication work as a young scientist, even if you were
very good at it, even if you could think creatively of how to do it at scale.
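Since meta-analysis came up, here is a minimal sketch in Python of one common building block, inverse-variance fixed-effect pooling of study-level estimates; the effect sizes and standard errors are invented, and real meta-analyses add random-effects models, heterogeneity statistics, and bias checks that are not shown here.

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance fixed-effect pooling of study-level effect estimates."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Invented effect estimates (e.g., mean differences) and standard errors from three studies:
effects = [0.30, 0.10, 0.45]
errors = [0.10, 0.15, 0.20]
pooled, se = fixed_effect_pool(effects, errors)
print(f"pooled effect = {pooled:.2f}, 95% CI ~ {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f}")
```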
But it is discovery, right?
I think we need to reframe it, right?
Replication is kind of a dirty word.
It shouldn't be. But years ago, when gene arrays first became available, you could look at gene expression in cells or tissues.
Now you do, you know, single cell sequencing
and you can do deep sequencing and this has really evolved.
None of those, dare I say, are experiments.
You're not testing a hypothesis.
They are hypothesis generating experiments.
You get a bunch of genes and you go,
well, that one's much higher in the cancer cell
and that one's much lower in a non-cancer cell.
I think I'm going to go do like a knockout of that gene or overexpress that gene.
I mean, that's testing hypotheses, but there is work that's necessary but not sufficient.
And what you're describing in terms of meta-analyses, AKA replication, maybe should be recast, simply because branding matters (it shouldn't, but it does), and incentivized
as discovering whether or not discoveries are actually discoveries.
What's more important than that?
Yeah.
Essentially saying, is the scientific literature true?
Assessing the truth of the scientific literature, right?
That's what that is.
And that's a real, fundamental, actual advance.
Exactly the way you say.
But we don't reward it.
The NIH doesn't reward it.
That will change.
Well, drug companies, it occurs to me, should be incentivized to do it because it will save
them perhaps-
They do it.
They do it extensively, but perhaps they won't have to do it as extensively, because if the work that they're getting down the funnel has been checked multiple times by multiple laboratories, they have increased confidence that molecule A, B, or C does A, B, or C.
Sure, they'll test it again
because they're about to put dollars behind it.
No one wants to put dollars behind something
that they aren't absolutely sure is true,
but you'd like the funnel to be narrower.
Yeah, I mean, and right now they test it.
They do the replication work.
Drug companies, before they make those investments,
do the replication work, but it's private,
so that only they know which results are true
and false in the literature.
I see.
Right, so if the NIH does it,
the knowledge about which results are true becomes public,
which makes the entire scientific
literature much more reliable as a basis, not just for drug discovery, but also for
individual behavior.
Which health behaviors should I, I mean, what food should I eat to make myself healthier?
Well, that-
No one can agree on that.
I know, but the reason why-
We can only agree on what you shouldn't eat. And even there.
Yeah.
I mean, I shouldn't eat the Skittles
with the Pro-Long-Gateau.
I heard that processed foods are bad,
but the other day I saw, you're not gonna believe this, that there's a kind of emerging movement in one sector of the media claiming that the demonization of highly processed foods is a conspiracy theory. That's a perfect example of sort of what we're talking about more generally, which is that language matters.
You can throw something in the trash bin very quickly
by calling it a conspiracy theory, until somebody, or a group, makes the effort to bring it out over and over again and determine whether it indeed is one.
You can also throw something in the trash bin very quickly
if you just call it just a replication study
or a so-called negative result.
A negative result says this particular pathway,
molecule, mechanism, et cetera,
is not doing what we hypothesized it would.
And-
That's a real advance in our scientific knowledge.
Absolutely, without question.
So the reason why we don't have consensus on what the right thing to eat is, well, first, it's a more complicated question than just science, but part of it is that the scientific literature around it is not replicable.
And those studies are really hard.
You have to get people to eat the same things.
No, I agree.
People are probably sneaking Skittles,
people lie about what they eat.
By the way, I don't like Skittles.
I was more of an M&M person just for the record.
Whatever, just leave that aside.
Well, it's clear that the new administration champions healthy, unprocessed foods, but every once in a while you'll see one of them consume the processed stuff.
I've cut down on the Skittles since I joined the MAHA movement.
Whatever, so I saw, I mean, were they M&Ms?
OK, so let's go back to what we were talking about.
You asked how do you fix this, right?
So one is you give large grants to people
in the scientific community who do replication work
in creative, important ways, scalable ways.
You farm out to the scientific community
the question of what results in scientific literature
really need replication: the key, rate-limiting-step kinds of results that we need to know are true in order to advance science and advance human knowledge about questions of health.
So you award large grants to those scientists.
So now all of a sudden their status is lifted
compared to where they were before,
which is down in the basement.
Will there soon be an institute or a set of grants set aside specifically for meta-analyses, to help resolve some of this so-called replication crisis?
I plan to do that.
Fantastic.
I don't think you'll get any pushback on that.
However, every dollar spent one place
is a dollar not spent elsewhere.
Yes, but at the same time, making the entire scientific literature more reliable is money
well spent.
That is my belief as well.
Second, you have to have a place where you can publish this work. Right now, if you send your replication result to the New England Journal of Medicine or Science or Cell or Nature,
they will not look at it at all. The NIH can stand up and will stand up a journal
where these replication results can be published
and made searchable in an easy way,
so that when you have some scientific paper, you can ask yourself, is this something that other people have found? You can go to the scientific journal that we're going to stand up, search it very easily, and ask, where are the other papers that look at the same question and what do they find, and get a summary of it.
So this is a little bit like community notes on X?
In a way, but it's the scientific literature
producing the community notes.
These are formal papers with method sections
and credentials, not just anyone doing this work.
It's part of the community of people that are looking at this question in rigorous ways,
right?
So the point is that you'll have kind of a Cochrane collaborative.
Cochrane is this group in the UK that grades scientific evidence on a whole bunch of different health questions in a way where they elevate rigorous, randomized controlled studies as the highest level of evidence, n-of-1 kinds of studies as the lowest level, and a whole bunch of things in between.
And they'll produce reports that say, well, there's weak evidence to suggest this is true,
there's excellent evidence to suggest this is true, there's no evidence to suggest one
way or the other on this.
They're very, very nuanced in their summaries.
You should be able to do that, but with the published replication work as the core of
it, right?
And a scientific journal put out by the NIH, a high-profile journal, will then make publishing
replication work a high-profile, high-prestige scientific activity.
The journal could also publish negative results.
I tested this idea out, it didn't work.
Published in the journal, and now it's discoverable.
It's no longer the threshold of you have to have a statistically significant result in
order to get your result published.
You just publish the result because it's interesting and true, even if it was a negative result, right? The journal that the NIH will stand up will plug a hole in the literature where we currently punish failure. Instead, we would reward it: constructive failures get published and communicated to the scientific community at large, and we reward replication work. So fund replication work, and create a place where it's publishable and essentially rewarded.
And then third, this is probably the most important, measure
prosocial behavior by scientists.
Make it part of the suite of statistics we use to measure
scientific productivity, not just publication, not just influence,
but also do you share your data?
Do you – is your work – has it been subject to replication?
Do you cooperate with those replication efforts?
Do you yourself engage in replication efforts of others?
And make that part of the suite of statistics we measure for scientists to measure their
productivity.
And now all of a sudden replication becomes something you want to participate in, even if you yourself are not doing it. It fundamentally
alters the culture of science so that it rewards truth. Scientific truth is determined by replication,
right, by independent research teams, rather than influence. It's hard to think about
as scientists. We think about scientific truth as,
were you published in the New England Journal?
Were you published in Science, Cell, or Nature, or whatever?
That's truth, peer-reviewed papers.
But in fact, the ground truth of science
is determined by something really much more humble than that.
It's by replication.
We need to reward the things that produce the ground truth
rather than the things that reward just pure influence.
And we don't do that.
It's hard and it's almost impossible as scientists that have grown up in a community of people
that reward influence as the primary measure of success to think what it would be like
if we were to reward truth.
But I think if we do these three things, it'll completely transform the nature of science.
Why would you want to commit any fraud?
You're not going to get a reward for it.
Yeah, you get a published paper.
It might even be in a top journal, but no one's going to be able to replicate it.
You won't want to share your data with people because they'll find out you committed some
fraud.
All the incentives to commit fraud will just dissipate.
It'll be liberating for scientists to be able to focus on the things we actually care about,
which is learning about the world,
true things about the world,
the reasons why we went into science to begin with,
rather than this sort of like competitive process
of trying to get, climb up a ladder
that doesn't necessarily produce any truth.
Amen to all of that.
I feel very blessed that I had a graduate advisor, it was wild, she unfortunately passed away young as well, but she said, you know, why would any scientist make up data?
It's crazy, right?
You're trying to figure out what's true.
So that essentially means they're willing to lie
to other people about their data
and to themselves in some sense, right?
The other thing is I'll never forget
revising a paper
with her and I remember thinking like,
oh, well we have this, this, and she said,
whatever we do, we can't give the reviewers what they want.
And I thought, that's a weird statement.
All you ever hear is, you know,
you gotta give the reviewers what they want,
but it's a very dangerous statement.
And the reason she was saying,
don't give the reviewers what they want is,
you have to stay, you to stay wholeheartedly committed
to what you know and observe to be true.
And you were closest to the data, so you would know.
The other thing that I learned from her,
and this relates to what you're saying,
is that it not only is okay,
but it should be encouraged to publish papers
in an array of journals.
I think the pressure to publish in high profile journals in order to get a really great job
is so great that it leads some post-docs, as it did in some of the cases we were talking
about earlier, to either make up data or to throw away data that didn't fit in order to
please the boss.
Then the boss gets pulled into it.
Then the boss tries to dissociate.
This has been going on for so long.
I feel very blessed that I was encouraged to publish some papers, if they had a chance, in Science and Nature, but other papers in fine journals like the Journal of Neuroscience, where the accuracy and, in some sense, the volume of data was also encouraged.
You could put a lot more data there,
but now with online publishing and electronic formats,
there's no limit to the amount of data you can put.
So you can no longer use the excuse,
well, the high profile journals, you only
can have four figures.
So I think everything you're saying is very reassuring and should be reassuring to people.
It's music to my ears, frankly, and I think it will be music to the ears of graduate students
and post-docs who feel this immense pressure to make a major discovery, to make the lab
head happy so that then they can get promoted to getting a job.
Because most of the job process is powerful PIs picking up the phone and saying, I've
got this postdoc, you should hire them.
That's a lot of it.
It's not all of it, but that's a lot of it.
So having an elder that supports you is huge.
The other thing that I just am so relieved to hear is that the system has been around a long time.
And it sounds like from what you described, it worked really well up until about the nineties,
mid eighties, nineties, and that at some point something happened, something changed.
And I don't doubt that scientific fraud took place a long time ago.
There wasn't much replication then either, but I feel like some of the pure essence of science that you were alluding to earlier, people tackling new questions, isn't really there anymore; it's more survivalist, careerist now than it is about the spirit of discovery,
which is really about the spirit of finding out the truth.
So any reflections on this notion
that we're sort of in a more careerist mode of science?
Yeah, I think part of it is just that the sheer funding levels have been so high.
Over the time period you're talking about, there was a doubling of the NIH budget.
There were all kinds of increases.
I think those are worthwhile investments, but we now have such a high volume of research relative to what we had in the 80s, and such high levels of funding relative to what we had in the 80s.
Are you saying that we have too many scientists?
No, I'm not saying that.
What I'm saying is that we have to create structures that are appropriate for the volume that we have. It's a fundamentally different problem than we had in the 80s.
So the structures that we had in the 80s where we rewarded publication and peer review journals
as the measure of success, it might have worked to create incentives for pro-social behavior
in the 80s, but it doesn't work at the volume and the levels that we have now.
We have to change the structures we have so that, given this volume of investment, people have the right incentives toward prosocial behavior. We have to change how we structure the incentives in science to create the kind of prosocial incentives that we once had.
All right.
Now that we're through the easy stuff, let's get to some of the harder stuff.
I'm just kidding.
You have a tough job, my friend.
Let's talk about some of the recent changes in NIH funding that most people have heard
about and then we will segue to the barbed wire topics of vaccines and lockdowns.
But before we do that, I heard, or at least my understanding was, that when the new administration came in,
they essentially went through and looked for the letters DEI and for the word transgender,
and basically halted or eliminated some lines of funding to particular labs. I also saw on
social media, and I didn't validate this, that some studies that were focused on transgenic,
not transgender, but transgenic mice,
which is a very common tool in biomedical research,
got flushed in that process,
so that maybe it wasn't a clean vetting
of transgender versus transgenic.
Look, every administration, every person makes mistakes.
So I'm not trying to highlight mistakes, but I think this blew up.
And it would be great, because you have an opportunity here to reach a lot of people,
to just sort of clarify what the rationale of eliminating grants that had a DEI or transgender component was.
And then we can talk about this, what appeared to be a mistake.
Yeah.
So first, let me just talk about the mistake.
First, most of this happened before I became the NIH director.
It was like early April when I started.
I think much of the-
When did you start?
April 2nd, I think.
All right.
So don't come after Jay for anything that happened prior to that.
And it was actually quite frustrating being on the outside looking in, going, I can't do anything, I can just look.
Yeah, they were waiting for you to step in so you could take responsibility.
Yeah, anyway.
So they could blame you for something you didn't do.
I don't mean to say that I'm not still responsible for addressing this going forward. So, on the transgenic question, I actually don't know specifically about transgenic.
That's obviously a mistake.
The transgenic mice are a key tool for discovery.
If that was cut, I think we-
Maybe it was a wording issue in a public address from the president.
I don't know that they actually eliminated grants
simply for studying transgenic mice.
I know that grants focused on, look,
years ago I studied sex differentiation
in the brain and body.
So not all studies where you give a male rodent estrogen
or a female rodent testosterone
are studies of transgender biology.
Those hormones are active in both sexes.
And there are a lot of grants that you can imagine
that got flushed that were studying hormones
and sex differences.
My sense is there were some false positives like this.
I've worked very hard to make sure that those are corrected.
There's an appeals process that I've set up so that researchers who were stuck in this with a false positive can appeal, and we've restored a whole bunch of grants like this.
Great.
Where it's good science, but it got caught up in this refocusing of the NIH portfolio away from sort of politicized ideologies and more toward things that actually advance health.
So let me just address DEI specifically, okay?
First, this is really important to me.
In my own research, I focused a lot on the health
and wellbeing of vulnerable populations.
A lot of my research is focused on the health
of minority populations, and there are legitimate
scientific questions where somebody's race or sex matters pretty fundamentally to the biology.
And so of course, as the NIH, we have to be able to look at that.
Yeah, some mutations only exist in certain races or, I mean, breast cancer and the BRCA
mutation, much more common in women.
I mean, you can't pretend this stuff doesn't exist.
Correct.
And so that's part of science.
And the NIH absolutely supports that kind of research still, despite all of the changes
in DEI.
So I want to give you another example of an NIH success: the research on sickle cell anemia, right?
The strategy is a gene editing strategy, essentially to switch the cells so that they express fetal hemoglobin rather than the adult hemoglobin that has the defect that causes sickling.
That's a fantastic result that's gonna,
I think, result essentially in a cure
for sickle cell anemia.
Amazing.
Right, so.
And?
It's a thing that affects African Americans much more frequently than it does white Americans.
Just based on the genetics of the thing.
The NIH has in the past and will continue in the future to focus on research that advances
the health and well-being of minority populations.
It absolutely must.
If the mission is to improve the health
and longevity of the American people, that includes African Americans, it includes Native Americans, it includes women, it includes minorities, it includes people of all different sexual orientations. All of that is still part of the portfolio of the NIH. I want to distinguish that from DEI.
DEI, I think, is something where,
just to give you a sense of this, right?
So in 2020, I was quite upset with Stanford,
with the way that it was,
we can talk about this maybe later in the podcast or a different podcast.
But I'd grown disillusioned with the kind of academic freedom that I as a scientist enjoyed at Stanford, despite being a tenured professor. And so I applied for a job outside of Stanford. I applied to a university, and one of the things they had me fill out was essentially a DEI loyalty oath, right? Where you had to state, essentially, your commitment to the DEI ideology.
Which was, I mean, maybe we put, as you would say, a finer point on it, because these words, diversity, equity, and inclusion, are, you know, just words, but what are they really talking about?
You're committed to having a lab where you include a certain number of people of different
backgrounds, or is it just sort of saying, I care about these groups?
The key thing is race essentialism.
That what makes you you is your race, first and foremost.
There may be other things about you that matter, but the most important thing about you is
your race, and nothing else matters at the same scale. Right?
That essentially is the heart and soul of DEI. So just to give you another, again,
a concrete thing, the idea that structural racism is responsible for the health outcomes
of minority populations primarily. Right? Now, if you think about that, you say, okay, well, you know, maybe true.
You may think it's true.
You may think it's not true, depending on who you are, what you're listening to.
But all I'll say is that I cannot think of a scientific experiment to do that would in
principle falsify that idea.
Now, I can think of experiments to do that would say, okay, well, look, minorities are more likely to live in food deserts.
So the food they get access to easily makes their health worse.
That's a scientific hypothesis.
You can test it.
You can imagine the result being not true or true depending on the data you find, right?
That's a scientific question.
That's not DEI.
That's a scientific question about the health outcomes of minority populations.
You can test it scientifically.
Whereas the idea that structural racism
is responsible for the health outcomes
of the minority population of the country,
that's not actually scientific in the same sense.
You mean there isn't a clear variable to focus on?
Well, couldn't you do more with it than that?
There are a lot of variables that could support
or refute that idea.
I don't think so.
I think the problem is one of the demarcation between what
is science and not science.
I see.
I think it's a structural thing.
So Carl Popper, probably one of the most important philosophers of science in the 20th century, had this demarcation criterion that said, look, is your scientific hypothesis in principle falsifiable? Right?
So the structure of the atom involves certain hypotheses about what you can and can't observe
about the momentum and the position of an electron at a particular time. Those are falsifiable questions.
You can do an experiment that in principle could have falsified the Heisenberg idea,
right?
Versus, for instance, Freudian psychology.
He made the point that there was in principle no scientific experiment that was outside
the system so that you could falsify the Freudian idea.
Everything was inside the system, so it's not scientific.
Yeah, I see exactly where you're coming from.
I will just push back a little bit in service to the conversation, which is for descriptive
work in science.
There's no hypothesis.
Billions of dollars of NIH money went to gene arrays,
single cell sequencing.
Those were hypothesis generating experiments.
Could you falsify those experiments?
Okay, a given cell, let's say a cancer cell
and a non-cancer cell from the same tissue
express gene list A and gene list B.
Could you falsify those lists?
We could run it again and get a different list,
but at some point you're running statistics on those.
And did you falsify the first one?
Not really.
The same goes for anything descriptive, like an electron micrograph, for instance, of a nerve cell:
you see lots of stuff, wow, the mitochondria are there, the vesicles are there.
Now I get a more powerful microscope.
And I look and I go, oh, what I thought was one thing is actually two things.
Did I falsify it?
In some sense, yes, but I actually just separated it with a better tool.
So a lot of descriptive science, upon which many of the great truths rest, including the double helix, right, the crystallography that found the double helix structure, it's still a double helix. Thank goodness, as of this morning, I think it's still a double helix. No one's proposed otherwise yet. But much of science isn't subject to this idea that you could just falsify it with a counter-hypothesis, or I would say a lot of science doesn't quite work that way.
Now what you're describing is a merge of sociological phenomena and scientific principles.
So maybe I'll just pose the question a little bit differently in an area that falls squarely
in your court.
Up until I think pretty recently, maybe still now,
but I think this was eliminated.
If I had a grant from the NIH,
and someone was potentially coming to my lab
who was an underrepresented minority,
I could call up my program officer,
that's not a parole officer by the way,
but they're kind of similar in that
they control a lot of your life.
And I could say, hey, listen,
I've got a terrific young scientist coming to my lab.
I don't even need to say that.
I'd say, hey, I've got a scientist who wants to come to my lab that's an underrepresented
minority.
And they would say, great, we will now add funding to your grant specifically to fund
that person.
I mean, they have to be what we call above the bar.
They have to be capable of doing the work, et cetera.
That has been eliminated.
I'm neither advocating for that nor fighting against it,
but that's something that lands squarely in your camp,
and it is clearly DEI.
It's not a question of whether or not
they're the best person for it.
It's just more taxpayer money specifically
to fund a researcher who would not otherwise
have the opportunity, that's key,
because they are an underrepresented minority.
Okay, so let me, you have two items there, let me address them both.
As per usual, yeah.
So the question about like hypothesis-driven science, right, so like inductive versus deductive
science, the NIH funds both, and it should fund both, right?
So the idea of a scientific project demonstrating differences based on race or some other variable
that's biologically relevant for some health outcome without necessarily having a hypothesis,
that is good science often.
Women get breast cancer more often than men.
So there's nothing wrong with that.
There's no policy at the NIH not to fund that now.
In fact, the NIH still funds and will continue to fund exactly that kind of science, right?
Because it's still science, it's part of the scientific method.
Whereas the claim that purely structural racism causes the health problems of minority populations, I don't believe that's science.
That's more of a psychology question than a biosciences question, right?
If it's a psychology question, it's not a scientific psychology question.
I don't think it's science.
I think it fails the demarcation problem.
Again, that's a hypothesis.
It's not falsifiable.
So there's no problem then with hypothesis-driven science if it's actually focused on health
problems that matter rather than just purely trying to demonstrate sociological outcomes
that are outside the purview of the NIH to try to address.
So let's leave that aside.
Before we do, there's an old saying that I learned from a very famous, excellent scientist,
also deceased.
He used to say-
None of them still alive?
Great dead scientists, my friend.
All my advisors are dead.
So the joke in my field is you don't want me to work for you.
Oh my gosh, okay.
But I didn't have to deal with competing with my mentors and I did not have to deal with
disappointing them or pleasing them.
So there you go.
But I would do anything to have them back.
Truly, they were wonderful people.
I was very blessed.
But there's a saying, which is a drug is a substance
that when injected into an animal or person produces
a scientific paper, which is basically to say
that there are many studies where, when you introduce a variable, you're sure to get a difference.
Like if I want a paper, I give a drug to a person
and I measure the amount of rapid eye movement sleep
because basically every compound alters
rapid eye movement sleep, usually for the worse.
It's kind of wild, an aspirin will do it.
I don't want to discourage anyone from taking aspirin,
but it's so easy to tease out effects
when you just introduce a dramatic variable.
So I think that's what you're referring to.
Yeah.
And it's not junk science, but it's not great science.
Yeah.
I mean, so like, for instance, that, you don't have a control group.
You're like, okay, what's the-
You're just looking for differences so you can publish a paper.
Yeah.
Okay, so let's just leave that aside.
So some of it's good science, some of it's not good science, some of it's not science.
The DEI shift, in terms of funded science, has been to try to excise from the
portfolio things that are purely ideological boondoggles.
Can you give me an example of some of these grant titles that no longer exist?
I don't want to single anybody out, so I don't want to do that.
But just sort of a general flavor.
I mean, I'm having a hard time kind of...
Like structural racism is the cause of worse cardiovascular disease in African American populations.
Okay.
So something like that.
Okay.
That would be an example.
It's not actually a specific example.
Again, I don't want to point to any specific thing.
No, a thematic example.
Yeah, exactly.
So that would be an example, right?
So now let's talk about the support for underrepresented minorities and the set-asides.
The position of the administration is that we should follow the civil rights laws of
the country.
The civil rights laws of the country say that we shouldn't be discriminating against people
based on race.
When you have an institution like the NIH that essentially says, we're going to consider
your race when we decide whether we're going to give you support, you can understand why for a large part of the American public they say, well, why are
you doing that?
With their tax dollars.
With their tax dollars, right?
And actually, I should say, like, from the perspective of a minority student, it's actually
quite condescending.
Like, I believe very fundamentally, based on lots and lots of experience with some excellent students I've had, that minority students, if they make the right investments of time and effort, can become, and have become, excellent scientists.
Sure.
There's no barrier to that in science; the only barriers are the structural problems with the incentives scientists have to make those investments in young careers and so on.
But those are common across race.
I think that if you solve those problems so that we invest in young scientists, not just
at the level of the, you know, where they're like competing for NIH dollars, but even before,
where everyone has access to those kinds of resources that the URM scientists used to differentially
have.
First, you're going to end up with a set of scientists that actually are more capable,
and you're also going to have minority scientists represented proportionally to the kind of
desire that people have to become scientists.
There's no field of human endeavor where you say, well, I have to have exactly the right
proportion of race.
I mean, if that's the truth, then what you have to have is Indians and Chinese represented
all the time.
That's like almost three billion people of
the eight billion people of the earth. But justice isn't that. Isn't that kind of race essentialist representation? Justice is: do people who want to make the investments to become scientists have the capacity and the resources that we as a society provide so they can become excellent scientists? That has to be the case, right? And by shifting the investment portfolio toward this race essentialist thing, all that matters is being a URM, an underrepresented minority. It doesn't matter so much if you're an excellent scientist. It may matter some, but that's not the key thing. It doesn't matter if you have a fantastic idea that challenges entire fields. All that matters is what's your race.
It moves the emphasis in science away from what really matters in science. Like, what are your ideas? Are they advancing human knowledge?
Are they translating into health for, like, large populations?
Are they true?
Are you working in things that advance our knowledge and reliability
of the entire scientific literature?
I mean, those are the things that matter, really, for scientists, right?
Why are we caught up, then, in this idea that somehow we can address all of that?
I mean, I just want to be very, very clear.
There are real problems that minority populations have faced based on the history of the country.
There are real injustices that have happened as a consequence of them.
But we're essentially asking the scientific institutions of the country to somehow solve
these deeper problems of essentially cosmic injustice in ways that we don't actually have
the capacity to do and in some ways, A, distort the investments we make and B, cause large
chunks of the American people to distrust us.
Say, look, you're not really focused on the things that really will improve my life.
You're interested in sort of cosmic justice rather than actual science.
I think it's the right thing to do to say, let's focus on the mission.
The mission is how do we make investments in research that advance the health and longevity
of the American people?
I don't believe there's any place for this sort of race essentialism in it.
So you've talked about the DEI topic slash issue from the perspective of which science
does or does not get funded.
So testing structural racism as a cause, a non-falsifiable theory, is not something that the NIH is going
to continue to support.
We are also discussing DEI in terms of which scientists get to be called scientists and
which ones get funded.
I suppose the universities decide who they hire and then NIH plays a major role in deciding
who gets funded. So if I understand correctly, as now,
the funding of a given grant can't have anything to do
with somebody's race or background, to which I say,
why not just make it blind
to who the investigator actually is?
Now I realize when people write grants,
they say, previously we've shown, or my lab does this,
but why not just eliminate identity entirely
and just say, what is the best proposal on the table?
Let's fund those proposals.
When we talked earlier about early career scientists and providing support to them, that's essentially along the same lines, right?
So we're saying we're going to de-emphasize the track record of scientists in deciding which scientific projects to fund. That's
essentially what you're saying when you say we're going to fund early career
scientists because early career scientists tend to have less of a track record.
I agree with that. I think the key thing is the ideas. Are the ideas powerful? Are
they promising? Are they worthwhile in terms of being able to translate to improved health for populations, right?
So I don't know if it's possible to get rid
of some elements of identity.
Like, you know, you kind of want to make sure
that they've had training as a scientist.
Sure, well, they could check some boxes.
I'm not here to solve every aspect of the mechanics.
But I guess-
Relevant identity, sure; but your race is not relevant to whether you have excellent scientific ideas.
I've learned from people of all races, scientific ideas that have changed how I think about
the world.
And their race was not the key element in deciding whether they had a great idea or not. What really mattered was the idea. Now, it may be the case that some people, based on their background, will be more likely to have an idea in a particular field than someone with a different background, right?
So, allowing people of lots of different backgrounds to have their say matters, right?
But rather than focusing on the race, focus on the idea.
Is the idea important?
Is it likely to translate into improved health for populations?
Well, having sat on a fair number of study sections
over the course of like more than 10 years,
either as an ad hoc or regular member,
I don't recall ever feeling in the room
or anyone explicitly saying, we need to fund this grant because it comes from somebody
who's an underrepresented minority.
There were grants that came
from underrepresented minorities,
some of which were terrific grants
and some of which didn't get funded
because they weren't as terrific.
So are you telling me, and it's been a little while for me,
not a long while, but that there has been a recent pattern.
I'm not trying to, you know, seed the question,
but are you telling me that some grants were getting funded
specifically because of the identity
of the person writing the grant?
I always thought grants were funded or not funded
on the basis of the science in them.
And I never saw that to not be the case.
I mean, I think there are markers of that
that were increasingly emphasized.
You already mentioned one, actually, Andrew.
You said like, you could call up your program officer
and say, look, I've got a great postdoc who's a URM,
which essentially means a minority,
and would you like to fund him?
And the answer would be yes.
But there was a pool of money. Actually, it ran in the other direction. It was well communicated from NIH that if we had someone who was an underrepresented minority who wanted to join on our grant, there was additional money to be had.
Yeah.
There was a statement, I think there was a website, that told us this.
Okay, well, it's clear where NIH stands now in the new administration.
It's clear where their stance on DEI is.
I am relieved to hear that grants that might have been caught in the filter of this recent change, but did not qualify for what you're describing, can be restored, and that there's an appeal process.
Because I think that shocked some of us
in the science community, we're like,
oh my goodness, there could be terrific grants.
that just got the axe.
Yeah.
You know?
So there's an appeals process to fix that.
I think, let me just make an analogy of something that happened during my career.
I think it was around 2010.
The NIH put out a priority statement that said it was not going to fund health economics research, more or less.
It was in the wake of Obamacare.
There was a whole fight over cost-effectiveness research, and cost-effectiveness research
became this political football, and they said, look, we're not going to fund this kind of work anymore. It actually impacted my career.
Some of the work I'd done previously had to do with the relative cost effectiveness of
various drugs or whatever.
And so I had to pivot away from that research if I wanted research support from the NIH. It actually impacted my career quite negatively.
There are priorities.
And the thing is, I don't want to argue the wisdom whether that was right or wrong to
do.
I personally think it was wrong, but let's just leave that aside.
I think the thing is, it's normal for the NIH to put out priorities that reflect the
sort of social circumstances that are around us.
Here I think what we have is a shift to priorities that focus on the quality of the ideas and the science being done rather than the racial identity of the people doing the science. And I think fundamentally it's healthier, both because we'll end up having a set of scientific ideas that are more likely to replicate and more likely to be able to translate into advances for health.
And also, it's better from a sort of social point of view because it de-emphasizes things
that are mostly irrelevant to the progress of science,
right?
It shouldn't matter if you're a minority student, a very promising minority student,
or if you're a very promising non-minority student, for the NIH to support you.
Both should get support.
It shouldn't make any difference whether you're minority or not.
And for the American public at large, there's a sense of unfairness, right? Let's move aside for a moment to Harvard University and the case that it lost over admissions, right?
I'm sure you remember this case, right?
Where Asian students were found to be at a disadvantage in admissions to Harvard.
They had, actually the facts of this case are really shocking, right?
So what happened was Asian students who applied to Harvard and non-Asian students would be
evaluated by alumni interviews where the alumni would evaluate their personality.
Asians and African American kids were both, had roughly the same average personality score
as evaluated by an interview
with alumni.
Then the Harvard admissions officers would also assign scores based essentially on personality.
But the admissions officers had never met the kids.
And Asian kids routinely had much lower personality scores than African American kids that applied.
That's what led the Supreme Court to say that was an illegal act of discrimination by Harvard
against Asian kids.
I think this focus on race, I can understand it because we have a history where race has
been the crux of so much pain and
suffering and injustice in this country.
We have a legacy of slavery that goes back centuries.
We had laws discriminating against African Americans, like the Jim Crow laws. We have this painful legacy of slow progress in civil rights that goes back, you know,
generations, centuries.
So I understand that that's the backdrop.
I'm not naive about that.
What I'm saying is that using the NIH to solve that problem is an inappropriate
use of taxpayer funds, and actually I think it makes things worse for those problems than
better.
And in particular, and for me as the director of the NIH, this is the most important thing,
it doesn't allow me to meet my mission.
The mission is to do research, to support research that advances the health and longevity of the American people,
all of the American people, whether you're a minority, whether you're American Indian,
no matter who you are, we should be doing research that advances your well-being.
And that means, to me, I shouldn't be using the NIH for the sort of cosmic justice purposes for which the NIH is poorly suited. Instead, we should be using the NIH for the purpose it is well suited to, which is to advance science that advances the health and well-being of the American people.
Yeah, I can see the parallels to something like, you know, the space program where, you
know, the space program is incentivized to try and figure out the best way to meet the
specific goals of the program that year
and in subsequent years.
And if the public thought that taxpayer dollars were being diverted according to a social
justice issue in order to try and advance the space program in that way, as opposed
to getting onto Mars or whatever it is, maybe that's a bad example.
It's so specific to Elon, but you get the idea.
So it's very clear based on what you said
that you believe that the best way to serve everybody
in the country in terms of health and longevity
is to make the discoveries, verify those discoveries
and then distribute the devices and therapeutics
for those discoveries and behavioral tools that will allow
for the health of all Americans.
And anytime someone says all Americans,
it sounds like a political statement, I realize that.
But to leave aside social justice issues en route to that goal, that's what I'm hearing.
Yeah, I mean, except to the extent
that the social justice issues can be articulated
as clean scientific hypotheses that actually
matter. So race differences in biological variables, in fact, matter.
Certain mutations run in certain populations.
Certain advantages run in certain populations.
So the NIH still supports that kind of research, but again, that's in service of the scientific
goal, not in service of some social justice goal that the NIH is ill-suited to achieve.
Yeah, as somebody who worked in vision science for many years: glaucoma is much, much more common in people with darker skin. There are certain areas of the world where glaucoma affects an outrageously high percentage of the population, and it's not lost on people that there's a genetic, inheritable component, and that some of the treatments might need to be tailored to those specific populations, or-
My grandfather went blind from glaucoma, so-
So get your pressures checked everybody.
Take your drops, get your pressures checked.
I'd like to pivot slightly to some issues related
directly to public health.
We have a kind of fork in the road here as to whether
or not we focus on issues of public
health from the recent past for which you became best known, AKA COVID and the lockdowns,
or whether or not we focus on public health issues that are more relevant now.
I was told by many, many people who are not scientists,
but care a lot about science that quote,
until the scientific community acknowledges two things,
they don't want to give another dollar to science.
Those two things are one, the replication crisis.
We talked about this.
And by the way, I think your plans to deal with that
are fantastic, I love this idea.
And I think many students and post-docs will be excited
to be part of the correction process
that will evolve science.
And the second one is an admission of error in our past.
I want to be very clear not to protect myself.
I have plenty of work to do no matter what,
but these are not my words, but the words were,
the scientific community did us wrong.
The lockdowns were unfair to,
in particular, working class populations.
We were told one thing about masks, then told another. We got a kind of loop-de-loop of foggy, politicized messaging about vaccines and what they would or wouldn't do. And basically, I hear from a lot of the general population, not just people on the MAGA/MAHA side, whatever you want to call it, but also a lot of stated Democrats
and people who are truly in the center, that they lost trust in science and scientists,
and they will not consider restoring that trust until scientists admit that they made
some mistakes.
And it took me a while to hear that message because I'm like, hey, listen, I have friends
trying to cure blindness, cure Alzheimer's, use brain machine interface
to cure epilepsy and get paralyzed people to walk.
And you're talking to me about something that happened,
but I finally had to just stop and listen
because they kept saying, we don't care.
And so it's almost like big segments of the public
feel like they caught us in something as scientists
and we won't admit it.
And they're not just pissed off, they're kind of like done.
I hear it all the time.
And again, this isn't the health and wellness
supplement taking anti-woke crowd.
This is a big segment of the population that is like,
I don't wanna hear about it.
I don't care if labs get funded.
I wanna know why we were lied to
or the scientific community can't admit fault.
I just wanna land that message for them
because in part I'm here for them
and get your thoughts on what you think about,
let's start with lockdowns, masks and vaccines
just to keep it easy.
And what do you think the scientific community needs to say
in light of those to restore trust?
So first, let me just say, I don't think I would be the NIH director unless what you said were true.
So I was a very vocal advocate against the lockdowns,
against the mask mandates, against the vaccine mandates,
and against the sort of anti-scientific bent of public health throughout the pandemic.
I've also argued that the scientific institutions of this country should come clean about our
involvement in very dangerous research that potentially caused the pandemic.
The so-called lab leak.
Yeah. Yeah.
Right.
So let's just stay focused on lockdowns.
I want to make the scientific case that they were a tremendous mistake.
And that was known at the time they were a tremendous mistake.
And let me just focus on one aspect of it; we'll broaden out to the other lockdowns later. Just the school closures.
Right? So what the public at large now sees
is that American kids, especially minority kids, are two years or more behind in their
schooling. We decided during the pandemic that children ought to learn to read, as five-year-olds or six-year-olds, remotely over Zoom.
We decided that in-person schooling didn't matter anymore. My kids in California
were kept out of school, public school, for a year and a half. If they saw the inside of a classroom, it was behind plexiglass, separated from their friends, eating lunch isolated and alone.
The message to American school kids was essentially your school doesn't matter, your future doesn't
matter.
American public health embraced that entirely.
In Sweden, they didn't close schools for kids under 16 at all. Anders Tegnell, the head of Swedish public health, explicitly made that a priority. In the summer of 2020, the Finns and the Swedes compared their results.
The Finns had closed schools in the spring of 2020, and the Swedes had not.
And they found there was no difference in health outcomes for COVID.
The teachers in the schools, in Swedish schools, actually they had no worse outcomes than other
workers in the population.
And on the basis of that evidence, and the fact that we know that closing schools harms the future health and well-being of kids, even a short interruption of school, something we knew for a fact based on a vast literature that existed before the pandemic, many schools around Europe opened up in the fall of 2020.
The scientific evidence was abundant and clear, even by late spring 2020, that the closure
of schools and kids was a tremendous mistake.
And yet, when I wrote the Great Barrington Declaration with Sunetra Gupta of Oxford University
and Martin Kulldorff of Harvard University in October 2020, I faced vicious attacks by the scientific community and the medical community
for being unscientific about school closures.
Were there threats to your job at Stanford?
Yes.
Like real threats or just people saying we're going to take away your job?
Okay. In March of 2021, I was part of a roundtable with Governor DeSantis,
a policy roundtable,
where he asked me whether there was any evidence that masking children had any effect on the
spread of the disease.
And the answer is there's not a single randomized study that looked at kids.
The US was an outlier in recommending that kids as young as two years old get masked.
In Europe, the age cutoff was more like 12.
There were no studies. In response to that, a hundred
of our colleagues signed a secret petition, essentially, effectively asking the president
of the university to silence me.
Were you contacted by the university administration?
No. I found out about the petition from a couple of my friends who leaked it to me.
And then I went to the press and said, look, you should go ask the president about this. And then he put out this mealy-mouthed statement about academic freedom, but also essentially that it's really important that we obey public health authorities, or something.
So like political, boilerplate speak.
Yeah. And in 2020, I'd been subject to
all kinds of attacks. I don't want to relitigate this history, but I'll just say that Stanford failed the academic freedom test. It didn't hold a scientific conference on COVID with anti-lockdown viewpoints until 2024, when I organized it, even though I asked to have a conference in 2021 and 2022.
But your job security wasn't threatened in a direct sense.
No, that's not true.
Like no one came along and said, hey, like quiet down or else you're going to lose your
job.
So in that sense, you had academic freedom from the top.
That's not true.
So I was asked to stop speaking to the press in 2020.
By the dean of the university, the dean of the medical school, right?
My academic freedom was pretty directly threatened.
Like I wrote and published a study on measuring antibodies in the population, a study that
now replicated dozens of times around the world.
And I was essentially ordered to redo that study.
They interfered even before I had sent the paper in for publication.
When I say they, I mean the administration of the medical school.
I mean, my academic freedom was pretty directly attacked.
And I wrote a piece about how Stanford failed the academic freedom test. Folks who want to read about it can go read it. Again, I don't want to relitigate the past.
No, listen, I'm not trying to dig for dirt.
I ask because, well, I never saw a petition
cross my email path.
I did see a petition pass through my email about Scott Atlas, who was in our department of radiology, he's a physician as you know, and was appointed to Trump's Coronavirus Task Force.
And then there was a petition basically asking for what, to take away his job? I don't know exactly what it was, but that passed through.
But I see a lot of petitions passed through my email, and as everybody knows and the press
has pointed out, I'm not great at email and communications.
But I guess the reason I ask is academic freedom means many things.
Like can you tweet what you wanna tweet?
I guess I don't call them tweets anymore.
At the time, could you tweet what you wanted to tweet?
Could you continue to do the science that you were doing?
Could you, did you continue to collect a salary?
It sounds like you were able to keep your job
but there was some pressure to not communicate your ideas.
Is that about right?
Yeah, I mean, more than that, there was a threat to my job as well. I think the issue here is this: there's a sense of positive and negative academic freedom. Negative academic freedom means there's no active attack on me and my capacity to do work. I think Stanford failed that as well. There was an active attack on me. So
for instance, there was a poster campaign all around campus with my
face on it essentially accusing me of killing people in Florida for advising Governor DeSantis
that there was no evidence that masking children benefited anybody. Right? And essentially
it was a threat. It was like at the same time I was getting death threats from people.
The former head of the NIH wrote an email to Tony Fauci four days after we wrote the Great Barrington Declaration, calling for a devastating takedown of the premises of the declaration. And that resulted in essentially press propaganda pieces, in the New York Times and elsewhere, mischaracterizing what the Great Barrington Declaration said, which was to protect older people better and open schools, let kids go to school. Mischaracterizing it, in a propagandist way, as saying we wanted to let the virus rip.
And that led to death threats against me.
At the same time as this poster campaign all around campus, I called the campus police, and I told the folks in the department and the medical school that this was happening. And their response was to send me to a counselor and to tell me to reduce my online presence.
So Stanford absolutely failed during the pandemic. In 2020, the former president, John Hennessy, approached me wanting to organize a discussion,
some sort of panel, where different perspectives
about how to manage the pandemic, the lockdowns elsewhere,
could be had.
And even he couldn't get this organized.
Hennessy couldn't?
No.
Hennessy is one of the most beloved presidents of Stanford.
I have tremendous admiration for him,
but the pressure was absolutely enormous.
Like the fact that he approached me at all
was actually a credit to him.
He's one of the few officials at Stanford
who approached me during the pandemic
to try to allow me to have a hearing. I mean, you know, I might've been right or wrong. It turns out I was right, but in principle, Stanford should have had those debates in 2020.
We had prominent faculty, people like John Ioannidis, Scott Atlas, and others, like Michael Levitt,
who were opposed to the lockdowns,
and yet we couldn't get a hearing.
Yeah, Levitt reached out to me at one point.
You know, as I've been criticized for before, with this podcast, which we launched in 2021, I mainly focused at that time on ways to deal with anxiety, circadian rhythms, and sleep, because people were dealing with those issues.
I'm not a virologist, so I couldn't talk about virology
or epidemiology, but I-
Andrew, it wasn't on you to put us on a platform. It was on the Stanford University administration to organize discussions and debates on the most important topics of the day.
And that included in 2020, were school closures the right approach?
I read enough comments and get enough calls and emails to know that when people hear this, their minds will go to questions like: what is the incentive, financial or otherwise, for Stanford not to allow you to have these discussions?
Or let's broaden the discussion for any university,
for that matter, right?
I mean, Stanford's not the only university on the planet.
For a panel, a discussion about these issues to be held.
Well, we have a health policy department. What's the purpose of it if not to impanel the most important debates about health policy of the day?
So what do you think was going on? I mean, the vaccine technology was developed at multiple sites, right?
I think Stanford had something to do with the development of the technology.
There were other universities that were involved in the development of the technology as well,
right?
And I think in the back of this conversation, I know what's buzzing, so let's just be direct here. You and I were there; there was a vaccine mandate. Everyone that...
This is 2020; this is before.
2020, but eventually there was a vaccine mandate. If you wanted to keep your job, unless you had a religious or medical reason, you were told you had to take the vaccine.
People did what they did.
Some people did. Some people did.
Some people, I know colleagues that falsified cards.
I know colleagues that got nine vaccines,
everything in between, right?
But there were mandates.
So to be clear, you were opposed to the lockdowns.
Yes.
And you were opposed to vaccine mandates.
Were you also vocal about that?
Yes.
Because that's even, I mean, that's even touchier.
I was an expert witness in a number of cases
on the vaccine mandates, including one
that reached the Supreme Court
and overturned the OSHA vaccine mandate.
So yeah, I mean, I was vocally opposed
to the vaccine mandates.
I was vocally opposed to the mask mandates.
On the lockdowns, I was vocally opposed
to the school closures.
I emphasized the harm that the lockdowns did to the world's poor. So in April of 2020, there was a UN report
that calculated that 100 million people would be subject to starvation as a consequence
of the economic dislocation caused by the lockdowns. I was opposed to that. And as for the idea that the lockdowns were the right strategy, well, they were unique in world history at the scale we had. Lockdowns of such length and such scale were no part of any previous pandemic plan or any previous pandemic management experience.
And it was very clear to me with my background in health policy that we were going to harm
the poor, we were going to harm children, and we were going to harm the working class
at scale.
The lockdowns were a luxury of the laptop class.
That's what I was advocating at the time.
The university, it wasn't just Stanford, you're right.
But in fact, there were almost no universities
that impaneled these kinds of discussions into 2022.
So what do you think happened?
Do you think that there was a fear, I'm not seeding the question or leading the witness, whatever, but do you think that there was a fear among the academic and science community that if it were allowed for people to speak out or consider different aspects, positive or negative, about lockdowns or vaccine mandates, that somehow their existence would be at risk? Like that this got to an issue bigger than the lockdowns and bigger than vaccines? Because I do.
I think that this whole issue was really a question
of whether or not we consider scientists experts.
The word expert has become a very touchy thing.
Like who gets to be called an expert?
Who designates which experts are really the experts?
I mean, all you have to do is accuse someone of misinformation and suddenly their expert card is taken away, even if they hold an earned position in the given area.
I've been a tenured faculty member at Stanford School of Medicine for decades, right?
I've been a full professor with a long scientific history of published papers in some of the
top medical journals, the top statistics journals, the health policy journals, and so on, economics
journals, and that wasn't enough.
The problem is, okay, let me just say one version of this. There are other aspects at play.
For instance, I think people were genuinely scared,
scientists were genuinely scared for their own mortality,
especially in the early days of the pandemic.
And that clouded the way they thought about the policy.
Especially since there's a lot of older scientists.
I'm not trying to pick on all of them,
but there are a lot of them.
Yeah. Yeah.
And older people were dying more, correct?
Yeah, I mean, that was actually
the most important epidemiological fact about COVID
was that it was this very steep age gradient
in the mortality profile.
Young people, very low mortality risk,
older people, much higher mortality risk.
What was the rate of mortality among people,
70 to 85 years old, roughly?
Five to 7%, somewhere in there.
Okay, so not a trivial number.
No, it's huge.
Like one in 20 to one in 14 or so.
And that was due directly to COVID itself, not some confounding variable.
Yeah, especially early in the pandemic, right?
Okay.
So, okay, so,
but I wanna leave aside the personal fear.
Although I do think that played a tremendously important
role in the thinking of scientists, especially since scientists as a class tend to be part of the laptop class, right? People who have the economic resources to shield themselves for extended periods of time without any threat to their livelihood. That's not true for most of the world, but it is true for scientists.
So let's leave that aside and let's just focus on what I think was a core dynamic, right?
So there are two ethical norms at play here, and they competed with each other.
In science, free speech is an absolute must. If you have an idea that's different from mine, you should be able to express it.
And then we can test each other's ideas out.
We can maybe devise an experiment to decide between us and whatever the experiment says,
we'll say, okay, you're right and I'm wrong, and I'll buy you dinner or something, right?
That's good.
That's how science advances, through this process of people talking to each other. Free speech, the ability to come up with ideas, articulate them, and defend them, is absolutely fundamental to the progress of science. Public health has a different ethical norm: an ethical norm of unanimity of messaging.
This ethical norm has as its moral basis that the communications that public health puts
out are grounded in consensus science.
So for instance, if I, an emeritus professor at Stanford and now the director of the NIH, go out and say smoking is good for you, well, I've committed an ethical sin, right? I've done something deeply wrong, because the scientific basis for the idea that smoking is a terrible thing for you, that it really harms your health in concrete ways, is rock solid. So the idea that I, as a person who works in public health, shouldn't go out and say smoking is good for you, that has a good ethical basis rooted in science.
The idea that closing schools is good for you. The idea that wearing a cloth mask prevents you from getting COVID.
The idea that immunity after COVID recovery doesn't exist.
The idea that the vaccine will protect you from getting and spreading COVID forever.
None of that was rooted in science.
And yet, the public health authorities of this country decided that they were going to enforce the same kind of ethical strictures on those topics as they do on smoking.
When you say none of it was rooted in science, are you saying the science was mixed or there
was literally no evidence?
There's literally no science.
So for instance, the idea that cloth masks prevent you from getting and spreading respiratory
diseases.
There were a dozen randomized trials on flu before the pandemic, and there was a Cochrane
report looking at the literature on masking and influenza.
And they concluded that the evidence, weak at best, did not show that this kind of cloth masking in population settings actually prevents the spread of influenza.
I heard a number of people say like,
what's the big deal about wearing a mask?
There was also that argument.
It's not the same thing as a vaccine.
It's like, it's a mask.
We could argue over inhaling excess carbon dioxide, over not seeing smiles, over not interacting socially. Listen, I'm just opening this up for the sake of consideration.
So why did the masks become such an issue?
Was it because it was a mandate?
Is that what it's really about?
So the mandate mattered. But I'll say there were real harms, some of which were recognized, some of which were not. So for instance, I heard from parents of, I'm sorry, not autistic kids, hearing-impaired kids, that the mask wearing impaired the ability of those kids to learn to lip read, right?
So it seems illogical.
I heard, but it's also true that if you adopt
and embrace public health messaging
that's self-evidently not rooted
in science, you're going to undermine the public trust
in science and in public health.
I will say based on these voices that I hear from a lot,
that's what they're asking for.
They're asking for the exact message
that you're delivering now, which is,
I'll say it differently.
They wanna hear the scientific community say,
we messed up.
Yeah, and we should, we should absolutely say that.
So for instance, you wear a mask
while you walk into the restaurant,
you sit down to eat and you take your mask off.
And that protects you from getting and spreading COVID how?
Like everyone could see that.
You don't need to be a scientist to see
that that was obviously ridiculous public health messaging.
It was a weird time.
And let's just say, is it, you asked, could this public health messaging be dangerous?
Well, yeah. Imagine someone who's like 80 years old. They're not, they have a lot of
chronic conditions. It's the height of the pandemic, like July 2020 or something, or
June 2020. And they're told, if you wear a cloth mask, you're safe.
They go out in public and take risks that they otherwise would not have taken on the
idea that they're safe wearing a cloth mask and they get COVID.
The recommendation, not rooted in science, actually could end up killing people, and probably did, right?
So none of these things are just, well, low cost. It may be low cost to somebody who's not particularly bothered by mask wearing, but it can still end up causing harm. And I think it did.
Why weren't there panels of scientists
as opposed to one individual, Tony Fauci?
By the way, I invited him on the podcast,
did not get a response.
This was a long time ago.
I thought if I was going to hear about these issues from anybody at that time, it made sense to contact him, and he apparently wasn't interested.
We would have of course done it remotely.
Why wasn't there a panel?
So my feeling is when you have an individual,
it changes the whole discussion.
But when you have a panel that looks
kind of like the United States,
and this isn't for like diversity reasons per se,
this is about just a collection of smart people is way better
than one person, always, in my opinion.
They could come to some sort of consensus or maybe even disagree publicly.
I think panels would have been better.
Well, I think... Let's leave aside Tony Fauci, because I think he was a very important figure
and of course,
was basically a major spokesperson for the public health point of view.
But there was essentially a group think at scale.
It was impossible, until 2024, to organize a panel with the kind of diversity of opinion that was needed. There were a million or more people, I know this from the set of people who signed the Great Barrington Declaration, who shared my point of view about the efficacy of lockdowns. Right? The idea was that we needed to have unanimity of messaging. If you had
prominent professors in Stanford, Harvard, Oxford, or elsewhere saying that the lockdowns
were a bad idea, which they were, right? Then you're going to undermine public compliance with
the orders that were being put out. You know, just a quick diversion: how do I know that the lockdowns were a bad idea? Ask which country had the lowest all-cause excess deaths in all of Europe. All-cause excess deaths, meaning deaths from all causes; excess meaning, given the age structure of the population, how many deaths you would have expected even if there wasn't a pandemic versus how many there actually were.
Which country in Europe had the lowest
all-cause excess deaths?
It turns out it's Sweden, which didn't follow the lockdown.
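A minimal sketch of the calculation he's describing, with made-up counts rather than actual Swedish or European mortality data; the baseline figure stands in for a pre-pandemic, age-adjusted expectation:

```python
# Minimal sketch of an all-cause excess-deaths calculation.
# All numbers are hypothetical placeholders, not real mortality data.

def excess_deaths(observed_by_year, expected_baseline):
    """Observed all-cause deaths minus the deaths expected from pre-pandemic trends."""
    return {year: observed - expected_baseline
            for year, observed in observed_by_year.items()}

# The expected baseline would come from pre-2020 mortality, adjusted for age structure.
observed = {2020: 98_000, 2021: 95_000, 2022: 93_000}  # hypothetical counts
print(excess_deaths(observed, expected_baseline=92_000))
# {2020: 6000, 2021: 3000, 2022: 1000}
```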
So the lockdowns were not a necessary policy in order to protect human life.
And they weren't sufficient to protect human life either, right?
So you had sharply locked-down countries like Peru that had tremendous deaths. The lockdowns were neither necessary nor sufficient, and they caused collateral harm at scale to the poor, to the working class, and to children, harm that we're still paying for, that people are still suffering from in the long tail of the lockdowns.
For years in the United States, in 2020, 2021, 2022, deaths from drug overdoses were around a hundred thousand a year. This past year it was 80,000, and we declared success because we went down 20,000. Before the lockdowns, it was maybe 20,000 deaths a year, and even that was a catastrophic failure.
The problem here is that the scientific community embraced an ethical norm about unanimity
of messaging and then enforced it on fellow scientists and then it cooperated with the
Biden administration to put in place a censorship regime that made it impossible even for legitimate
conversations to happen.
So after the vaccines, COVID vaccines came out,
there were a community of people
who were legitimately vaccine injured.
The Biden administration went to Facebook
and told them, essentially ordered them
that you need to shut down the patient groups
that are discussing the vaccine injuries.
Or else what?
The threat was usually implied,
or else essentially destruction of your company.
President Biden goes on national TV and says, and he has a complete right to do this as president, look, Mark Zuckerberg is killing people.
He did that.
He actually did that.
And then quietly behind the scenes, they pressured
Facebook to censor patient groups that were discussing their vaccine
injuries, even in private groups.
And no one was putting their stuff out on X, then called Twitter?
X did the same thing, right?
So I joined Twitter in August of 2021.
My first thing I posted was the Great Barrington Declaration.
The day I joined Twitter, I was put on a blacklist to suppress the spread of my ideas on Twitter.
Almost certainly-
That's confirmed.
I'm not questioning the validity of what you're saying.
I saw it with my own eyes.
But that was confirmed by the so-called Twitter files?
Yeah.
When Elon bought Twitter, he opened up the databases, invited me to go see them at the
Twitter headquarters.
I saw with my own eyes, I saw my face and it said the word blacklist on it.
Which meant what?
That when you would post, no one would see your post?
Well, it was a shadow ban type of thing. It was a trends blacklist.
Yeah, it was a shadow ban.
I didn't know I was on it. It just made sure that only my existing followers would see the posts and nobody else had any chance of seeing them.
All right.
So shadow ban.
I mean, the whole reason I joined Twitter in the first place was to engage with people that didn't
know my ideas.
The blacklist made sure that my ideas were not seen by those people.
This is part of the reason why I think podcasts like the Joe Rogan podcast became such a lightning
rod for this discussion.
What's interesting is that,
remember they used to put a little tag on podcasts,
you know, it would say, this may contain misinformation.
What they forgot, whoever was imposing that,
because I don't think it was from the podcast houses
themselves, but whoever directed that.
The federal government.
Yeah. They forgot about the '90s, when there were explicit lyrics on albums and they would say, warning: contains explicit lyrics, and everyone goes and clicks on those or listens to those. They sort of forgot human psychology.
That's the beauty of the American people.
We're rebels. We like rebels.
Yeah, exactly.
It's so pinheaded, it's almost unimaginable.
The public health authorities of the country, and the government around them, basically decided that they knew best, that they were going to control the conversations of the public at large, essentially propagandize them.
The real question is why?
And people are probably thinking, ask them about big pharma, ask them about the amount of money that Tony Fauci made.
You hear these theories, right?
But most biomedical scientists running labs at universities
aren't gonna make a dime from pharma.
Most, if you saw their salaries,
most people will be unimpressed by those salaries.
If you look at the salaries relative to their hours worked, you would be even less impressed. So sure, some people
stood to get really rich, but I can't imagine that's the reason. So the question becomes
why? Why all this suppression? Why all this group think? What were people so darn afraid
of?
I think, let's put yourself back in 2020, 2021.
I think that while, again, I'm not naive, I do think monetary factors played a tremendously
important role.
I don't think that they were the central reason.
I agree with you about that.
I think the central reason is that the scientists who supported the censorship efforts, who embraced the sort of omertà around opposing the lockdowns, who supported the vilification of fellow scientists who disagreed with them, were doing it because they thought they were doing good.
They thought they were doing good.
Yes.
I think essentially what happened was that rather than thinking like scientists,
they were thinking like propagandists.
Like, and in this case, they were public health propagandists.
They thought that their job as scientists
was to echo public health propaganda
rather than act like scientists and ask questions
about the messages that the public health authorities
were putting forward.
I'm gonna push back a little bit, in fairness. It's a perfectly valid hypothesis, and you were at the center of this and I wasn't, but many of these people are very, very smart people. I mean, we can talk about universities as these places, but these are places made up of people, and while not everyone at these places is brilliant, some of them are truly brilliant people. And they are, dare I say it, on a sort of left-brained, spectrum-y type of phenotype where they're not pulled into emotional issues the same way that we might think they are.
And so it's hard for me to imagine that really smart people
would join a dialogue that didn't consider all aspects.
And yet that's exactly what happened, Andrew.
Think about that, right?
So I've thought about that quite a bit.
I don't think it had anything to do with being smart or not smart.
I think there were a lot of really smart biologists in the Soviet Union. When Lysenko told Stalin that Mendelian genetics was a capitalist plot and that Lysenkoism was the way forward, a lot of excellent biologists, for fear of being sent to Siberia, kept their heads down and said nothing, even in areas directly in their own field.
So it was fear of being ostracized and shamed by one's community.
It only took a few examples.
I think I mentioned earlier Scott Atlas,
who's a colleague of mine and friend.
In 2020, the Faculty Senate of Stanford voted to censure him.
Stanford has censured only three professors in its entire history.
One was a man named Edward Ross who was a eugenicist in the early part of the 20th century.
He was one of the leading eugenicists in the country and Jane Stanford hated him and worked
to get rid of him from the faculty.
He was fired.
He was.
Or resigned or left.
I'm not sure exactly, but he was let go.
I think he was an assistant professor.
Then Bruce Franklin, who was an English professor at Stanford, I think he worked on science
fiction, but he was an anti-Vietnam War activist and he brought essentially
a terrorist group to campus.
There'd been massive public focus on it, so he was given a chance to defend his points of view. Eventually, he was censured by Stanford.
For being anti-Vietnam War or for bringing...
For bringing the terrorists onto the campus.
Yeah, I mean, bringing terrorists onto campus is bad.
Well, in any case, there was kind of due process around both of those things.
Like they got their say.
Scott, his major sin was he advised President Trump during the pandemic.
And he advocated for keeping schools open, again, consistent with what was happening
in Sweden, and for protecting older people better, because they were at a higher risk
of dying if they got COVID.
That was his sin. He was seen next to President Trump. And that led the Faculty Senate of Stanford, something they haven't taken back, to issue a censure of him that has, if you look at it, religious language. They declared him anathema. They effectively excommunicated him. His family was essentially ostracized by their neighbors; he lives on campus. It was an absolutely disgusting act. And it was aimed not just at Scott, but generally to send a signal to anyone who agreed with Scott to keep their head down. And it succeeded. Not entirely.
He's at Hoover, right?
Yeah, he's at Hoover. But he was formerly at the medical school as a neuro-radiologist. He's a very accomplished scientist and has a leading textbook in neuro-radiology. For a decade, he'd been an advisor to presidential candidates on health policy, so he understood it from a broader point of view. He also comes from a working-class background.
So it was guilt by adjacency.
Yeah.
But it was aimed at silencing opposition to the lockdowns.
And it worked in large part.
I lost count of people from inside Stanford and around the country who would write to
me saying, I'm glad you're speaking up on these issues.
Please keep it up.
I don't want to do this because I don't want to risk my job.
Well, you weren't completely alone.
So Levitt has a Nobel Prize.
And you had some buddies who were pretty smart
and pretty powerful.
I mean, they don't give Nobel Prizes to just anybody.
No, Mike is incredible.
He's a very brilliant man.
But Stanford in that sense was better off, right? We had a sort of underground that opposed the lockdowns, very prominent scientists like John Ioannidis, Mike Levitt, Scott. There were people at places like Harvard and Oxford. At Harvard, there was Martin Kulldorff; at Oxford, there was Sunetra Gupta.
There were folks all over the world.
But institutionally, the universities of the world made it almost impossible.
You had to essentially decide, and this is what I decided in 2020, that I did not care
about my career anymore, that I owed it to the people who were being harmed by the lockdowns
to speak up more than I owed it to myself to preserve my career.
And that's why I continued to speak, even with the death threats, even with the vilification, and even with essentially the failure of my own institution to protect my academic freedom.
I did decide I was willing to give all of that up.
That's why I kept speaking.
So given your experience, and given this thing that I hear, that people want to hear scientists admit that they are at least sometimes wrong, maybe not even about a specific instance in which they were wrong: will the NIH, perhaps you, be making a statement on behalf of scientists? I mean, you have the opportunity to address the entire world.
Here, you're doing some of this, obviously,
but will this be part of the messaging of
the NIH?
Like, we need to revise what we think of when we talk about academic freedom.
We need to revise what we actually do.
And God forbid there's another pandemic, we need to really be ready for the kind of discourse
that is going to unify people as opposed to divide people?
You know, after a patient dies, often in a hospital, there'll be a conference,
where the doctors who manage the patient will bluntly say to each other, often behind closed
doors, what went wrong. And the goal isn't to like actually point fingers, the goal is to figure out what happened so that you don't make the same mistakes.
We haven't really had that conversation as a country or as a world over the pandemic,
and yet the harm from it still persists.
I think what I would love to do as NIH director is reform the scientific community so that the values I thought it had, the values of free discourse and academic curiosity, are central to the way we function going forward. We want to make sure those values are at the center, because you can't
do science if you don't have that, right? So you just think about science in the Soviet
Union under Lysenko, right? There was no real biology going on if you couldn't say Mendelian
genetics was real.
No, I actually can imagine that the small scale example that I'm familiar with of a laboratory meeting where you discuss someone's data
is the perfect microcosm for what we're talking about.
Where you sit back, someone presents their data
and the idea is to challenge the data.
The idea is for everybody to try and punch holes in it,
make helpful suggestions.
And sometimes, sadly, at the end of that meeting, you end up sitting there with a postdoc or graduate student and you're discussing what the next project ought to be because that one is just an utter failure.
Or you're discussing something much more interesting
than you ever thought was possible in the data set
that neither of you could have thought of
because you needed some fresh eyes on it.
But you can't have a culture in a laboratory where people can't oppose the person quote unquote in charge.
I mean, this is so important. If you can't tell the lab head, no, I think you're wrong.
If you can't say that, the lab can't progress.
The culture of American science has gotten away from that ideal.
In fact, it has this ironically weird thing where like on small matters you can have that
kind of discussion, but on large matters you cannot.
And that actually is anathema to science.
That actually means that we cannot as scientists address the most important questions of the
day without fear of essentially getting our heads cut off.
We had this conversation about DEI earlier.
Wasn't it uncomfortable?
I felt myself being uncomfortable saying what I believe is true
because I know that's one of those issues where as a scientist,
if you start talking about it, you better talk a particular way
or else you're going to get your head chopped off.
Yeah, I mean, all these topics are uncomfortable, frankly.
I, you know, in part because I see them
through a lot of different lenses, the audience lens,
my role as a basic scientist, my role as a podcaster,
you know, the quote unquote field of podcasters
completely transformed this kind of discussion
and public health.
It's really healthy.
We can have these conversations openly in public.
I mean, maybe I'll get my head chopped off again,
but like once you've had it once, so-
I think you're safe.
I mean, maybe I have to remind you, you are the director of the NIH. It is an incredible thing if you really think about it, right? Given your position in 2020 and 2021, '22, '23.
You're now at the top of the pyramid.
It is hierarchical and I believe your intentions
are pure and good.
I do.
I think it's important to have checks and balances,
but I really believe that you want to do right by people.
I feel that's a felt thing.
But yeah, it's a remarkable arc
that you're now in the position to make major decisions
for the entire enterprise of science.
What I would love to do is I would like to make the lives of scientists who disagree
with me easier.
I want them to be able to disagree with me.
I want to create a culture of science focused on developing truth rather than obeying the tops of hierarchies.
If I can accomplish that,
that would be a major thing in my view.
Well, I think that's a magnificent vision for the NIH.
I think it's super important that all voices are heard.
It's kind of interesting,
we have these discussions about diversity and inclusion,
but like all voices need to be heard
in the context of analyzing data.
And certainly the revision of the entire structure of the science enterprise, as you point out,
is sociological, it's financial.
There are a lot of different aspects to this.
Vaccines are a very hot button issue these days, in part because Bobby Kennedy has been
associated with the anti-vax movement. I've heard him say in his own words
that he's not anti-vax, but he's suspicious
or very concerned about certain vaccines.
Let's just start with a very basic question.
You're an MD.
Do you believe that there are any vaccines that are useful?
Yes.
Okay.
Well, I think it's just, let's build up from there.
Do you believe that some vaccines save lives?
Yes.
Okay.
Many vaccines save lives.
Do you believe that some vaccines that are given to children save lives?
Yes.
Do you believe that some vaccines are known to be harmful
and yet still given?
Let me name a specific one. The COVID vaccine for children in particular, I don't think, is net beneficial for kids.
But you said not net beneficial.
Does that mean it's harmful?
Net harmful.
You believe that the COVID vaccine is net harmful.
Especially for young men.
Can you define the age cutoff there?
We can argue about this, there's a scientific debate, but I think it's pretty clear that, I don't know, between age 12 and 30 or something, for boys and young men, the COVID vaccine is probably net harmful.
Again, for boys who have no other underlying conditions and all that.
Not obese, no heart condition.
Well, I mean, even obese, you have to look at the numbers. There are lots of debates and fights over this in the scientific literature, so I hesitate to give you a specific age threshold.
I think as a general matter, there exist groups for whom the COVID vaccine
was net harmful, specifically young men.
Do you think there's any reason to think that
the adjuvants, essentially what the vaccines are suspended in,
not the vaccines themselves, are potentially harmful?
I've heard this.
I am personally not aware of any strong evidence for it.
I think these are the kind of things
that ought to be investigated,
but it's very difficult to investigate just because of the political aura around vaccines, where if you really do investigate it and find something that the public authorities don't like, you're going to have trouble. I don't know the answer to that question from a scientific point of view.
Let's start with COVID vaccine
and dig a little further into that.
The COVID vaccine was promoted slash mandated,
certainly was mandated at Stanford,
but was promoted as the best line of defense for avoiding infection, reducing the symptoms of infection, and reducing the probability of death.
That's what I heard.
What is the evidence for or against that statement now, given what we know about who took it,
who didn't take it, and transmission and death rates?
Okay.
So can we go back to December 2020?
Sure.
Because then I'll answer your question, I promise.
Answer all the other questions you have.
So in December of 2020, there were a couple
of really important randomized trials published
regarding the COVID mRNA vaccines.
Can you describe what one of these looks like?
Because I'm not trying to slow your roll here,
but some people get vaccines, some people don't get vaccine
and you look at who gets sick and who lives and who dies.
Yeah, basically.
So the large-scale randomized trials flipped a coin: say 20,000 people, I forget the exact numbers, get the vaccine, and 20,000 people get a placebo or something placebo-like, and then you follow
them for a certain number of months and you ask which group's more likely to get COVID,
have a diagnosed version of COVID, which group's more likely to die, which group's more likely
to be hospitalized.
And if the vaccinated group is less likely to get COVID,
you report that.
If not, you report that too.
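As a rough illustration of how such a trial's headline number is computed, here is a minimal sketch; the counts are hypothetical placeholders, not figures from any actual vaccine trial, and efficacy is taken as one minus the relative risk of symptomatic infection:

```python
# Sketch of the headline efficacy calculation from a two-arm randomized trial.
# The case counts and arm sizes below are hypothetical, not real trial data.

def vaccine_efficacy(cases_vaccine, n_vaccine, cases_placebo, n_placebo):
    """Efficacy = 1 - relative risk of symptomatic infection (vaccine vs. placebo)."""
    risk_vaccine = cases_vaccine / n_vaccine
    risk_placebo = cases_placebo / n_placebo
    return 1.0 - risk_vaccine / risk_placebo

# Example: 20,000 per arm, 10 symptomatic cases in the vaccine arm, 100 in placebo.
print(f"{vaccine_efficacy(10, 20_000, 100, 20_000):.0%}")  # prints 90%
```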
There were randomized trials published then, in November 2020, for several high-profile vaccines that were used during the pandemic: the mRNA vaccines from Moderna and Pfizer, the Johnson & Johnson vaccine, and the AstraZeneca vaccine, probably the four most important ones used in Great Britain, the United States, and Europe.
Okay.
So, what did those studies show?
For the mRNA vaccines, and in fact for all of these studies, these were randomized, high-quality studies with large numbers of patients, but the patients were tracked for only about two months. Right? So you can't say from the randomized trials in December of 2020 what's going to happen after two months, because the trials themselves only tracked patients for about two months.
What they showed was that among patients who had never before had COVID, because people with prior COVID were excluded from the efficacy analysis, the patients who were randomized to the vaccine had lower rates of getting COVID, I'm sorry, symptomatic COVID, in those two months than the people who were randomly assigned to the placebo.
OK.
The mRNA vaccines had more deaths in the treatment arm than in the placebo arm, but the sizes of the samples were such that you couldn't say that was a statistically meaningful result.
Okay.
You couldn't say it, right? And that made sense. The death rate from COVID was something like three or four out of a thousand; you would have had to enroll populations in the hundreds of thousands or millions in order to get a significant result about deaths.
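A back-of-the-envelope way to see why the trials couldn't resolve mortality: over a roughly two-month follow-up, deaths per enrolled participant are far rarer than the per-infection fatality rate, so the required sample balloons. The event rates below are illustrative assumptions only:

```python
# Rough per-arm sample size needed to detect a halving of a rare death rate,
# using the standard two-proportion z-test approximation. Rates are assumptions.
from math import ceil

def n_per_arm(p_control, p_treated, z_alpha=1.96, z_power=0.84):
    """Participants per arm for 80% power at a two-sided alpha of 0.05."""
    variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return ceil((z_alpha + z_power) ** 2 * variance / (p_control - p_treated) ** 2)

# Assume ~1 death per 10,000 participants in the placebo arm over the follow-up,
# and suppose the vaccine halves that rate.
print(n_per_arm(0.0001, 0.00005))  # roughly 470,000 per arm, close to a million total
```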
And age range really matters here.
Yeah.
So the vaccine trials tended to focus more on younger people.
They had some older people in them, but they didn't...
If I had designed the trial, what I would have argued for
is to have the older population more represented,
because that's who was dying from COVID,
and then having the prevention of death or hospitalization
as the primary endpoint.
Instead, the endpoint was prevention of symptomatic COVID for two months.
Okay.
Now, they didn't ask whether you got COVID actually,
because there are people who got COVID
and never had any symptoms.
Right?
So, they didn't ask in the trial
about prevention of transmission.
They could have, right?
So, for instance, the people who were in the placebo arm,
you could ask whether their household members
had COVID at higher rates than the household
members of the people who are in the treatment arm.
Compare the household members and ask.
They didn't ask that.
So what could you infer from the trial?
You can infer that for those two months, people who had the vaccine were much less likely to get symptomatic COVID. That's all you could say. You couldn't say the vaccines reduced death rates, because they didn't in the point estimate, and there was not, again, any statistically significant difference.
In the AstraZeneca and the J&J vaccines,
if you combine those, it turns out
that you actually did get lower death rates
in the vaccinated arm than in the placebo arm.
The J&J vaccine had lower death rates, statistically significant once you combine the trials.
Was the J&J vaccine an mRNA vaccine as well?
No, it was an adenovirus vector vaccine.
And it was the single shot?
Yeah.
And it was like the AstraZeneca vaccine, similar technology, adenovirus vector vaccine.
Okay.
So, but again, those trials were only two months long. And the trials were not statistically powered to find a death rate difference, although one happened to show up in the adenovirus vector vaccines. And for the mRNA vaccines, you couldn't say one way or the other from the randomized trials.
Okay, so that's the information base we had in December 2020.
I wrote an op-ed in December 2020 with Sunetra Gupta where I argued that that was sufficient to recommend that older people get the vaccine, but that we shouldn't necessarily give it to young people. The reason was that young people died at very low rates relative to older people when they got COVID. And so the thing you're protecting them from was less of a risk to them than it was for older people. And so the benefit-harm calculation would tilt this way: if you have something that's a big threat, and you have something that is known to prevent symptomatic infection, then it probably prevents death in the older population.
I can't say that for sure from the trial, but I can extrapolate.
It's extrapolation, right?
It seemed like a reasonable extrapolation in December 2020.
Then it makes sense to give it, even if there are side effects that are not known from the trial. The trial is only tens of thousands of people; if you give it to billions of people, you're going to find out about side effects you didn't know about, right? So there are these unknown side effects, but based on the benefit-harm expectation, it makes more sense to give it to older people. Whereas for younger people, the benefit-harm calculation runs in the opposite direction. There are unknown harms, some of which you actually saw in the trial itself and some you don't know about until you give it to billions. And the benefit is small.
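To make the logic of that tilt concrete, here is a minimal expected-value sketch; every number in it is hypothetical, chosen only to show how the same assumed harm weighs differently against very different baseline fatality risks:

```python
# Expected-value sketch of the benefit-harm argument. All inputs are hypothetical.

def net_benefit(p_infection, infection_fatality_rate, efficacy, p_serious_harm):
    """Expected deaths averted per person vaccinated minus expected serious harms."""
    deaths_averted = p_infection * infection_fatality_rate * efficacy
    return deaths_averted - p_serious_harm

# Same assumed harm for both groups; only the fatality rate differs.
older = net_benefit(p_infection=0.10, infection_fatality_rate=0.05,
                    efficacy=0.9, p_serious_harm=0.0001)
younger = net_benefit(p_infection=0.10, infection_fatality_rate=0.00002,
                      efficacy=0.9, p_serious_harm=0.0001)
print(f"older: {older:+.6f}, younger: {younger:+.6f}")
# Older group: the assumed benefit dwarfs the assumed harm.
# Younger group: the same assumed harm can outweigh the much smaller benefit.
```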
So what I wrote is you should recommend it for older people and then lift the lockdowns.
That's the op-ed I wrote and it was published in the Wall Street Journal.
Instead what public health authorities decided to do was to take the vaccine and say that
we could use it to eradicate COVID.
They implied it.
They didn't exactly say that, but they came very close. They would say things like, well, if 80 or 90 percent of the population gets the vaccine, then we will achieve herd immunity, as if it were some permanent state, rather than a transitory state having to do with the fraction of the population that is currently immune. Herd immunity is a clear mathematical construct in epidemiological models of disease spread. The public health authorities talking about 70%, 80%, 90% were using it as essentially a synonym for disease eradication, which it is not.
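For reference, the construct he's alluding to: in simple SIR-type epidemiological models, the herd immunity threshold is the immune fraction above which each infection causes, on average, fewer than one new infection, so the epidemic recedes; it is a property of current transmission conditions, not a promise of eradication, and it moves as immunity wanes or the virus changes.

```latex
% Herd immunity threshold in a simple SIR-type model with basic reproduction number R_0.
% Crossing the threshold makes the epidemic recede; it does not mean eradication.
\[
  R_{\mathrm{eff}} = R_0\,(1 - p) < 1
  \quad\Longleftrightarrow\quad
  p > p_{\mathrm{crit}} = 1 - \frac{1}{R_0}
\]
```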
Was this message only in the United States or was this message kind of uniform across
the world?
Yeah. Now, just consider...
I don't know if it was uniform.
Like for instance, I don't think Sweden ever mandated the vaccine.
With the exception of Sweden, I just sort of-
In a few other places, yeah.
Because for one public health system to collaborate in this... Let's assume that the public messaging was that they were a bit out over their skis, so to speak. But for Northern Europe to do that
and for Brazil to do that and for Australia to do that,
sounds like there had to have been a collaboration
of kind of massive scale.
It's a little hard to imagine everyone collaborating
in some sort of secret agenda
that extends across
international borders.
Well, just to push back on this: back in November, December of 2020, the news about the vaccine came out, right? There was this sense of joy that we'd been liberated, like the science had delivered us from this deadly plague.
It's definitely exciting.
Yeah.
And there was this sense of hope, right, that large numbers of people around the world,
I think, shared.
Public health authorities shared that sense of hope.
But that, I think, partly led them to extrapolate
far beyond what the data actually showed
and make promises to the public
that were not in the randomized data
that were available at the time.
The companies that made these vaccines, are they American-based companies?
I think AstraZeneca is a UK company.
J&J is an American.
Pfizer, I think, is an American company.
Yeah, Merck.
For some reason, I thought Pfizer was overseas.
Moderna has German roots, I think. I'm not sure.
BioNTech is German. Moderna is American.
I'm not sure exactly.
Because many of the people that are suspicious
about vaccines or skeptical about vaccines
argue that it's all financial incentives.
I mean, was a lot of money generated from the-
Yeah, billionaires were created out of this.
And in fact, the NIH is collecting
patent royalties on the licensing for the
technology that went into the vaccines.
Still now.
Yeah.
But the development of the vaccine, aka Operation Warp Speed, was a Trump program, right?
Yeah.
President Trump authorized the program in order to accelerate the development
and testing of the vaccines.
I remember seeing him getting the injection on the news.
So I think people forget that, because of MAHA and this sort of assumption that vaccines and MAHA are diametrically opposed. In some sense, you know, MAHA and Bobby Kennedy are, to my knowledge, the first time that anyone's forcing a look at vaccines with the kind of level of detail that they are doing it, or going to do it.
People assume that the Trump administration is not aligned with vaccines, but the Trump administration initiated Operation Warp Speed, correct?
Yes.
Yeah.
The idea that Bobby or President Trump is an anti-vaxxer is ridiculous.
This is frankly at odds with what the data actually show.
Okay.
Let's go back to the COVID vaccine because I think the story is really important.
Public authorities on the basis of an extrapolation that they should not have made, decided to essentially promise the public that if they
got the COVID vaccine, they would not ever get COVID again.
That was the implicit public health messaging.
You can become free.
Just take the shot, you become free.
You no longer have to worry about lockdowns and mask mandates or whatnot.
It very quickly became clear that that was not true. Right? So I remember seeing the outbreak of cases in Gibraltar, which was like 95% vaccinated, 90-plus percent vaccinated.
And I looked at that going, why is Gibraltar, which I think was using the AstraZeneca vaccines, why are they seeing this huge spread of COVID?
I saw data from, well, I forget which country, one that was mostly using the Chinese vaccine, Sinopharm, which had a more traditional technology.
Again, with a huge outbreak of cases in like February or March of 2021. Then Israel.
Country after country that had been heavily vaccinated was seeing large outbreaks of cases.
And that meant that the extrapolation was false: the idea that the vaccine was going to stop you, two months out, from getting COVID and spreading COVID was not true. Instead of acknowledging that fact, public health officials decided that the problem was the unvaccinated.
And they embraced the idea that you have to force people to get vaccinated for the public good.
So they doubled down. It was like July, August 2021 that the Biden administration decided to use OSHA, to use
CMS.
OSHA is the Occupational Safety and Health Administration.
And then there's CMS, the Centers for Medicare and Medicaid Services, to mandate the vaccine
for populations that they had control over. And when we talk about mandates, were there criminal charges or civil charges if somebody
didn't get it?
Just lose your job.
Yeah, you just lose your job.
I recall at Stanford, there was an insistence that everyone get vaccinated, but that if people had religious reasons to not get vaccinated, or some special health reason, they could essentially not get it?
Stanford made it difficult to not get vaccinated, but possible.
Like if you had religious exemptions, they made it possible.
Other universities made it much more difficult.
So for instance, my colleague and friend, Martin Kulldorff, who was a tenured faculty member at Harvard University, got fired because he didn't take the COVID vaccine, even though he'd already had COVID and recovered.
He is currently still fired?
Yeah.
So there were consequences for not getting it.
Yes.
Because we hear this word mandates, right?
But I don't recall anyone coming around to my house and insisting. I just recall that if I needed to go certain places, I needed a vaccine card signed.
I mean, essentially, it's a widespread restriction on your basic liberties, civil liberties.
That was a consequence, including potentially your employment.
Other countries were even worse.
So Canada, you couldn't go on public transportation.
You couldn't fly if you weren't vaccinated.
You couldn't go to a restaurant if you weren't vaccinated.
That's true in New York City, by the way.
You had to bring a vaccine card.
Yeah, and if you didn't have one, you couldn't go in.
Essentially, the regime was to ostracize people who decided that they didn't want or need the COVID vaccine.
Even though there was no scientific evidence that demonstrated that if you had the COVID vaccine, you were less of a threat to other people as far as spreading COVID than if you hadn't had it, specifically compared with people who had already had COVID and recovered and weren't vaccinated.
Actually, there was quite good evidence, from studies in Israel especially, that you were less of a threat, for someone who never had COVID and was vaccinated, in the three or four or five months since the vaccine.
Evidence out of Qatar showed a pretty sharp reduction in the efficacy of the vaccine against
getting COVID by four, five, six months after the vaccination. And what if any evidence was there
that the COVID vaccine, any of them,
caused any specific harm in adults?
Right, so in young men specifically,
like adults as old as 35, 40 years old,
there was evidence of heart inflammation, myocarditis.
Transient myocarditis?
Yes, but also more severe myocarditis post-vaccine.
I mean, that was clear, clear evidence.
Why just boys, do we know?
I don't fully understand the biology of that.
A reason to do sex-specific studies.
And I'm in favor of that.
Can't pass up the opportunity.
Interesting, so was there any evidence that the vaccine
had long-term detrimental effects
that we're still looking at now?
You know, you hear this stuff, you see it circulating,
you hear more about long COVID.
We should talk about long COVID,
but is there any evidence that the vaccine
caused long-term issues for people?
I think that likely that there's some people
who have particular immunological responses
or that there's also like evidence
that the production process for some of the vaccines
involved using DNA plasmids,
which may persist in producing some of the products of the vaccine.
I'm not actually, frankly not...
I mean, I've looked at the literature,
and there's a lot of controversy around the literature,
and I have not made up my mind fully on the extent of it.
What I will say is that it's very difficult to ask questions
about long-term effects of a vaccine just generally.
You can't run a randomized trial. That's done, right?
That vaccine trial was effectively terminated when the placebo arm was vaccinated in January of 2021.
And so you're not going to tell from the randomized studies about the long-term effects.
So now you're left with observational studies where you need to like have a real control
group constructed
properly. And it's been difficult to get the public health authorities who are supposed
to do this to actually do this at scale. I've seen some of this. Like I think the FDA put
out a report of babies getting the vaccine having epilepsy or seizures at slightly higher
rates.
It was a report in 2022.
There are claims online I've seen about cancer, but I haven't seen anything where people have done very careful control groups.
I don't know.
I'm not leaving out the possibility.
I'm just saying that the kind of studies that I would like to see done,
rigorous studies that have control groups,
even in observational settings,
it's hard to find them in the literature.
And whenever they are in the literature, they seem to get attacked.
Sometimes for reasons that make sense,
and sometimes for reasons that don't.
It's very difficult to address this
from a purely scientific point of view,
because the literature itself seems like it's poisoned.
Do you believe long COVID is a real thing
or is this something that people have constructed?
No, I think it's real.
I think there's, so I do think that the extent of it
is again, unclear, but it's very clear that there are some.
So for instance, I saw a study, I think it was in 2021, from France, where they looked at kids who previously had COVID and kids who never had COVID.
And then they were measuring subsequent long COVID rates after the infection, comparing matched people who never had COVID versus those who did.
In that study, long COVID was measured as something like, did you have one of some number of symptoms on the WHO list of long COVID symptoms three months after the COVID infection?
For kids, the matched rates were roughly the same.
But for adults, it was higher for the people who had had COVID before.
I mean, I don't know the exact rate, but it's certainly a real phenomenon.
I mean, I've met people who've had it, same thing with vaccine injuries.
Like I've met people who have vaccine injuries who report having had concrete, discrete injuries
after they've been vaccinated.
And I believe them. I mean, I think that I generally tend to believe patients
when they say things about themselves,
and especially when they have no incentive
to dissemble about it.
And yeah, so I think that these are real phenomena
that we need to address with open minds.
Will the NIH and/or CDC be making public statements about some of what you just described, that the messaging around vaccines was, in your view, inaccurate?
Well, I'm still saying this.
I've been saying this.
I think that-
But in your new, I mean, you're saying it here
and we hear you, but in your new role,
like at the level of a country of 300 plus million people,
like, hey folks, we've looked at this
and I wasn't in charge then, but here's the deal.
I mean, in my role, I have to like focus on stuff
going forward more than, like, I mean, the past,
I think is worth addressing,
but it has to be a broader look than just me coming out
saying my opinion about it.
This podcast is fun, but that's not the purpose. So I'll just give you a specific thing. My colleague Marty Makary, who is now the commissioner of the FDA, has issued a new framework for evaluating COVID booster shots.
So rather than just requiring the COVID booster, the new variant COVID booster, whatever it is in the future, to show that it produces antibodies in either lab animals or humans in order to approve the vaccine for use, now going forward the boosters have to show some efficacy against preventing COVID and preventing deaths and hospitalizations.
I see.
In order to get approved.
That's an evidence-based framework to essentially say,
if you're gonna sell the vaccines,
at least show in humans that it actually works
for something we care about.
If you produce antibodies and it doesn't translate
to reduction in morbidity or mortality,
then why recommend it?
Or why approve it? Some people might want to take the vaccine to reduce symptom severity, not just to avoid death.
At this point, if you've already had COVID and recovered, there's no evidence that the boosters would do that.
I mean, again, like, I want to distinguish,
that's why I wanted to start with December 2020.
This was like, you know, we knew about these large-scale studies
from the vaccines that were new.
And we knew, I want to distinguish what we knew
and didn't know.
The boosters are a different vaccine,
and they don't have the same large-scale studies
behind them.
They've been approved on the basis of relatively small-scale studies asking whether they produce antibodies, not things that clinically matter to people.
Is it going to prevent me from getting sick?
Is it going to prevent me from being hospitalized?
Is it going to prevent me from dying?
The boosters don't have that kind of evidence behind them.
And so I think it was just a couple of weeks ago, the FDA decided that it was going to
ask the manufacturers to produce much better evidence for the boosters before it was going
to approve them.
It shouldn't just be a routine thing.
This is not a flu shot.
The framework, the regulatory framework that governs flu shots, is based on decades of experience with flu vaccines.
Are you a fan of the flu shot?
I mean, I've had lots and lots of flu shots in my life.
Really?
Yeah.
Do you get it every year?
Generally, yeah.
And it's designed to guard against most of the most common strains of flu that year?
Yeah.
I mean, sometimes they guess wrong and it doesn't do much, and sometimes they get it right and it does better. But I generally have gotten it. I mean, I don't think I got it last year. Too busy, I guess.
But you don't, it sounds like you don't have any specific safety concerns about the flu
shot for otherwise healthy adults. Is that right?
Yeah. I mean, as a scientist, I want the safety of these vaccines evaluated in a rigorous way, so I'm
not...
I wholeheartedly support that.
And if the data show that there are bad outcomes, then I say that, right?
But as a general matter, the flu shot, the technology used for it is...
I mean, it's a traditional technology that has a long history behind it.
And the regulatory framework, while I do think that the production of antibodies is, I think, actually still the standard for the flu shot, it makes some sense, right?
The flu strain that circulates is a different one every year.
And if you required a long-term clinical trial for the flu strain that's currently circulating, by the time you actually recommended it, it would be useless. Now, you can say that's true for COVID as well,
but we don't have decades-long experience with the safety
profiles and also the efficacy profiles. And the flu shot,
it's hit or miss, right? Sometimes it works and sometimes
it doesn't. What we need is an excellent universal flu
vaccine, which, you know, there's still a lot of research
to try to get.
I think the key thing is, what I want to convey is, if you are in favor of vaccines, you should not be treating this as a religious matter, where if you believe the vaccine is good, you're a good person, and if you believe the vaccine is bad, you're a bad person. You should be treating this the same way
we treat other drugs that we recommend
to the population at large.
Evaluate the benefits, evaluate the harms
in rigorous ways, including randomized studies.
Understand patient nuances.
It might be right for some patients and wrong for others.
If you're going to say something, don't extrapolate beyond what the evidence actually shows,
right? Or else you risk losing the trust of the public, especially the public that would
potentially most benefit from the thing. What I'm arguing for is an actual honest,
evidence-based evaluation of vaccines. And that's essentially what Bobby Kennedy's asking for.
So that's what he's asked me to do,
not for vaccines generally, but for the COVID vaccine,
that's essentially the policy.
Now, the problem that we have in public health
is that, as you asked me earlier about,
do I think there are certain vaccines that are worthwhile,
and the answer is yes, I do think that.
I think that if we have a public health authority
that's gotten it so deeply wrong about this one vaccine,
where people lost their jobs over it,
people got injured and they were silenced over it.
People were essentially made to feel, you remember, like in 2021,
where people would disinvite family members
from Thanksgiving if they weren't vaccinated.
Yeah, or worse.
People were kind of excommunicated
from families and workplaces.
Yeah, essentially we created a class of unclean people as a matter of public policy.
You can understand why people who went through that would say, given that the vaccine didn't
turn out to stop you from getting and spreading COVID, why should I trust you on anything
else?
That's where we currently are.
The way forward isn't to force people to say,
look, you must acknowledge how great science is
on these other things.
The way forward is to be utterly honest
about what we know and don't know,
and treat people as partners rather than as subjects.
So in keeping with that,
there's perhaps no issue more sensitive than the vaccine autism issue.
My understanding of the current literature as it stands is that the Andrew Wakefield data, this British physician who was really the first to popularize the idea that vaccines could, in his words, cause autism, or were highly correlated with autism, those data were essentially retracted by the journals.
He lost his medical license.
And my understanding is there was evidence of fraud, that he either made up data or contorted data.
I've had guests on this podcast,
including a colleague from Stanford,
Karen Parker, who works on autism,
who verified that indeed the frequency of autism
is vastly increased in recent years
in ways that cannot just be attributed
to improved sensitivity of tests, et cetera.
One in 32 births is the current number.
And so you can understand why parents
who love their kids more than anything
and would do anything for their kids
are understandably concerned about any possibility
that vaccines could increase the probability of autism.
My stance as a scientist is,
well, if the data are robust
that vaccines don't cause autism,
then run a proper trial.
The Wakefield data are clearly contaminated, if not by outright fraud, certainly by story and narrative.
I mean, there's just no way
that those data are gonna be resurrected.
And I don't think they should be resurrected, right?
I mean, unless there's something I'm not aware of.
He said too many things that weren't true
and whatever happened is history.
So what is the evidence, if any,
that a vaccine, some specific vaccine causes autism?
And is the NIH and CDC and the new administration
going to take a serious second look at this?
Yeah, so I don't want to comment on the Wakefield situation
because I don't know the ins and outs of it.
Well, all we know is what happened.
He lost his medical license.
I'll just say, like, we're talking about one study, right?
I believe that replication matters.
And so, like, there are, I think on the MMR vaccine, some excellent studies that failed to find a correlation or a causal link between MMR vaccination and autism.
Measles, mumps, rubella.
Measles, mumps, rubella, a vaccine that I think is really important for kids.
Like there's a massive Danish study
that tracks patients who are vaccinated,
kids who are vaccinated, matched with patients,
similar patients who are not,
tracks them for a year
or longer, years, and finds no difference or fails to find a difference in autism rates.
There's people who've...
I mean, there's all kinds of...
If you look online and elsewhere, there's all kinds of fights over that.
But to me, that's pretty good evidence for the MMR vaccine.
For some of the other vaccines, there has been less of a focus on asking whether they correlate-
Such as polio vaccine?
I don't know this literature, so I shouldn't comment, but I don't remember seeing a study
specifically asking whether the polio vaccine is linked to autism.
When I was growing up, every kid got the polio vaccine,
measles, mumps, rubella, and-
I think there was a DPT, yeah.
Yeah, and a couple others.
Like there were probably four or five vaccines,
as I recall.
I think that there's good evidence on the MMR vaccine failing to find a link with autism.
And I don't know the full extent of this literature, so I shouldn't comment too much, but when I've looked, I haven't seen quite the same level of evidence failing to find a link for some of the other vaccines.
Again, they just haven't looked.
As a general matter, I think it's unlikely, just from a biological point of view, to be the main reason why the rise in autism, which is now well documented, as you talked about, has occurred.
So to me, the question then is, thinking about autism, if you want to answer for parents, well, what does cause it?
What has led to the rise in the prevalence of autism?
The honest answer is, I don't know.
We're focused now in this conversation on just one potential cause, vaccines.
To me, it's unlikely that they are the reason for the rise in autism.
But there are many other potential hypotheses for the rise in the prevalence of autism that
I've seen.
You know, alterations of the gut microbiome I've seen.
Retinoids.
There was a paper out of Pasko Rakic's lab at Yale years ago looking at the migration of cells in the cerebral cortex in developing primate fetuses, which is a great model.
And he was exploring the idea that ultrasound
was altering cell migration,
which may lead to changes in circuit connectivity.
Never really got followed up on,
because that would be wild.
It would be wild.
It would be wild.
I'm not suggesting that ultrasound causes autism,
but there were a lot of interesting ideas early on
that I thought ought to be explored.
So the point is that unless you know the etiology,
it's very difficult to talk about the treatment.
Now, of course, autism has a very wide range
of clinical presentations, right?
You have kids who have some social awkwardness, but otherwise are well-adjusted, have no problems.
Think Sheldon from Big Bang Theory or something, right?
Or many of our colleagues.
Maybe me, I don't know.
And then you also have kids who have very severe disabilities, a lot of biologically driven co-occurring conditions, apraxia, difficulty toilet training, who will never live on their own.
So you have a very wide range of outcomes.
It's very possible that biology is very different for folks along the spectrum.
And unless you understand the etiology,
it might be different etiology for kids
in different parts of the spectrum,
then you're never going to have good answers,
both for prevention and also for therapies.
So it's that question that Bobby Kennedy has asked me to answer
or try to get an answer and that President Trump has asked to get an answer.
And I think it's appropriate because if you ask me what is, I mean we just talked about
vaccines as a potential cause, I think it's unlikely to be the cause.
But you can see my mind is open depending on the levels of evidence I've seen.
Now this is not my area.
Right?
I should say this, like I'm saying this as someone
who's now like tried to wade into it some
just to get a sense of it.
But as I waded into it, it's very, very clear that there is not a scientific consensus answering the question of what causes the rise in autism or what is the etiology of autism.
But it seems important to encourage a spirit of open discourse about these other potential causes, right?
And I'm not suggesting by the way
that ultrasound causes autism.
I want to be very clear,
but if you read scientific papers focused on brain wiring
and you make the not so outrageous leap
that autism has something to do with brain wiring,
maybe gut and brain and a bunch of other things,
but you come across a number of very interesting
preclinical model hypotheses
that hopefully will be tested at some point.
Well, there's like environmental exposures
to various kinds of chemicals,
tens of thousands of chemicals in the environment.
There are events that happen in utero potentially, there are nutritional issues potentially, there's, I mean, you name the hypothesis, I've seen it. I've been trying to wade into this literature somewhat from the outside, and it's just bewildering. I can't even imagine what it would be like for a parent looking at this.
Oh, it's gotta be devastating.
And to me, when there is no scientific answer to an important question that actually impacts health, the answer is, let's do excellent science on it.
Now, I've seen a lot of excellent science
about how to manage autism.
Lots of fights over, is psychotherapy the right approach,
behavioral modification, there's lots of fights over that.
Do we address the co-occurring biological conditions?
How do we address that?
Is it different?
I mean, I've seen lots of literature around that,
which strikes me as more advanced and sort of closer to the answers, although again there are lots of controversies even there.
On the etiology of autism, it strikes me that the literature is not all that far advanced,
that there's lots and lots of competing hypotheses.
The data are conflicting on many of them.
I could give you my most promising one,
but that would mean nothing really.
The right thing to do in that setting is to have an open-minded
investigation to try to address this problem.
And the question is why haven't we had that so far?
And I'll tell you, I think the reason we have not had
the kind of open-minded deep investigation by the scientific community at large on the etiology that parents deserve, the kids deserve,
is because it's dangerous to ask that question if you're a scientist.
All of a sudden you're going to be accused, often incorrectly, of being an anti-vaxxer,
and that's the end of your scientific career.
That kind of sort of suppression of scientific curiosity means that we won't have an answer
to this question.
So what I've done is I've organized an initiative inside the NIH to address this question of
the etiology of autism.
Not limited to vaccines.
No, wide ranging.
It includes basic science work.
It includes epidemiological work.
It includes environmental exposure work.
It includes all, and we'll bring together data sets
that we'll make available to the researchers.
We'll have a competition among scientists, just like the normal NIH way, with peer review
panels to ask who should get the awards.
We'll have a dozen or more scientific teams asking the question, what is the etiology
of autism?
We'll have that.
I think that normally it takes a year or longer to set up a thing like this. Well, by September we'll have like an open competition for these scientific projects.
And you know, we can't rush science, but I'm hoping within a relatively short period of
time, you know, who knows how long exactly, it depends on how science works, we'll have
a much better understanding of the etiology of autism than we have at this current moment.
Fantastic. I mean, just fantastic. I mean, regardless of where one sits on the vaccine
discussion.
On vaccines, can I say one thing? Now, as the NIH director, I don't want to put my thumb on the scale on any part of these potential etiologies, right?
As I already said, I'm not particularly an expert in this area. And so, you know, if I were to put my thumb on the scale, it would not be from the point of view of expertise.
It would just be from the point of view of, I just happened to read the literature and I was impressed by X, Y, or Z.
But if I were to put my thumb on the scale, I think it would make it more difficult, A,
for scientists to ask the question honestly because they want to impress the NIH director
or something, and then B, for the public to trust the result at the end.
I want an open-minded, so this is why, like I was asked,
well, if you don't believe that these vaccines cause autism,
why would you allow people to ask that
as a part of the research agenda?
My answer is that a lot of people, especially in the public, and even some scientists, disagree with me.
And I want them to have their say.
I want an honest conversation.
I think that if you have an honest evaluation, you're not going to find that the vaccines are the primary cause of the rise in autism.
It's going to be something much more fundamental and complicated.
But I don't want the results to be disbelieved because I put my thumb on the scale.
I eagerly await the results of the unbiased studies.
Yeah.
I really do.
And thank you for spending that time explaining what that initiative is going to look like.
And I'm delighted to hear that it's not emphasizing one particular hypothesis.
The other thing about the initiative, it's very important to understand, is that we're working with parents of autistic kids, we're working with the autism community, right?
A lot of times, when scientists study things, we put ourselves above, like we're examining amoebas or something on a slide.
When you do population research, you have to work with
the communities that you're actually trying to help. And that's exactly the spirit of
this. We're going to work with communities of autistic kids and parents, and we're going
to apply rigorous research methods with control groups, and just the normal sort of high quality, the term of art nowadays is gold standard science.
We're going to try to apply gold standard science to this and subject it to the same kind of replicability standards I want all science subjected to.
Can we expect that the National Institutes of Health, which indeed is a plural, Institutes, NIMH for mental health, the National Eye Institute, et cetera, will be restructured in some way, in part to reflect the MAHA movement, Make America Healthy Again?
And by the way, no one told me to ask that question.
I'm asking out of genuine curiosity.
There are these theories. I'm, politically, a free agent. Because the budget is limited, it's not an infinite budget.
Depending on how the IDC thing goes,
there may be more or less money to devote directly
to the laboratories around the country.
And given that fixed amount of money,
you can't do everything. I love the way you're encouraging innovative exploratory science that's rigorous with open
discourse.
But can we expect that the National Institutes of Health will take on some new names, maybe
a new institute starting to emerge? I mean, it's really Congress that determines that.
There's a process.
The administration has put forward its suggestion for a reorganization, I think it's down to
eight institutes from 27, or institutes and centers.
Congress over the past decades has had several suggestions for how to do this.
One of these things is, like, I could focus my efforts on things that I think are going to make big, big changes, or I could focus my efforts on, like, reorganization efforts.
I'll do what Congress and the administration ask of me.
But from my point of view, we'll let that fight happen as it happens and we'll respond to it as it happens, rather than it being where I'm active.
I think the key thing is not the structure of the institutes to me.
The key thing is the content of the research and the standards we hold ourselves to in
the research.
Those are the things I want restructured.
That's really the fundamental question for me
as an NIH director.
If I can accomplish some of the things we talked about
during this podcast, having replicability be the core
of deciding what scientific truth is, refocusing the portfolio
so that we enable early career scientists
to test their ideas out,
that we aim big, and that we address the key health problems that Americans face.
If we can do those things, then I'll consider myself a success.
Well, Dr. Bhattacharya, you have a tall task
and you're clearly ready for it.
I want to thank you for taking time out
of your extremely busy schedule.
And those aren't just words,
you are extremely busy to come here and have this discussion
and to tackle head-on questions
that were not all easy questions.
Some of them quite difficult actually,
because there's a lot of nuance,
a lot of different lenses one can look through.
It's clear to me that you're a data guy.
You love data. And it's also clear to me that you're a data guy. You love data.
And it's also clear to me that you like dissent,
maybe because you've been in the position of-
That's been always true, right?
Okay, well, yeah, it sounds like it's in your nature.
I didn't know the younger you,
but I love that you encouraged dissent.
I do believe that great science emerges from discourse that sometimes includes even outright arguments, provided it doesn't get physical or cruel, arguments that are aimed at getting at the truth, if it's possible to get at the truth.
And it's also very clear that you care about exploration.
And I am, I must say, especially warmed by your enthusiasm for protecting and promoting the science of young investigators, meaning those in the first 10 years of having their labs, as well as trainees. I'm not trying to speak in nomenclature.
This is so important.
It's vital for the future.
It's so important.
And yes, there are some older labs doing some wonderful work,
but even they will eventually retire and die.
We all do.
And the younger generation of scientists in this country,
it's so key.
And so I just really appreciate you coming here to share.
I do want to check back with you in a year or two,
see how things are going.
And science and public health really need you to really get behind discovery and the mission statement of the NIH.
So thank you for coming here today.
You didn't have to do it.
And I look forward to more discussion.
Andrew, thank you so much for having me.
Really a pleasure.
Thank you for joining me for today's discussion
with Dr. Jay Bhattacharya.
To learn more about Jay's previous work and to find links to his current post at the NIH,
please see the show note captions.
If you're learning from and or enjoying this podcast,
please subscribe to our YouTube channel.
That's a terrific zero cost way to support us.
In addition, please follow the podcast
by clicking the follow button on both Spotify and Apple.
And on both Spotify and Apple, you can leave us up to a five-star review, and you can now leave us comments at both Spotify and Apple.
Please also check out the sponsors mentioned
at the beginning and throughout today's episode.
That's the best way to support this podcast.
If you have questions for me or comments about the podcast
or guests or topics that you'd like me to consider
for the Huberman Lab podcast,
please put those in the comment section on YouTube.
I do read all the comments.
For those of you that haven't heard, I have a new book coming out. It's my very first book.
It's entitled Protocols, an operating manual for the human body. This is a book that I've
been working on for more than five years, and that's based on more than 30 years of research
and experience. And it covers protocols for everything from sleep to exercise, to stress
control, protocols related to focus and motivation.
And of course I provide the scientific substantiation
for the protocols that are included.
The book is now available by presale at protocolsbook.com.
There you can find links to various vendors.
You can pick the one that you like best.
Again, the book is called Protocols,
an operating manual for the human body.
And if you're not already following me on social media,
I am Huberman Lab on all social media platforms.
So that's Instagram, X, threads, Facebook, and LinkedIn.
And on all those platforms,
I discuss science and science related tools,
some of which overlaps with the content
of the Huberman Lab podcast,
but much of which is distinct from the information
on the Huberman Lab podcast.
Again, it's Huberman Lab on all social media platforms.
And if you haven't already subscribed
to our Neural Network newsletter,
the Neural Network newsletter
is a zero cost monthly newsletter
that includes podcast summaries
as well as what we call protocols
in the form of one to three page PDFs
that cover everything from how to optimize your sleep,
how to optimize dopamine, deliberate cold exposure.
We have a foundational fitness protocol
that covers cardiovascular training
and resistance training.
All of that is available completely zero cost.
You simply go to hubermanlab.com,
go to the menu tab in the top right corner,
scroll down to newsletter and enter your email.
And I should emphasize that we do not share your email
with anybody.
Thank you once again for joining me
for today's discussion with Dr. Jay Bhattacharya.
And last but certainly not least,
thank you for your interest in science.