a16z Podcast - America's Autism Crisis and How AI Can Fix Science with NIH Director Jay Bhattacharya
Episode Date: September 23, 2025

Dr. Jay Bhattacharya is one of the country's top medical experts and a 24-year professor of medicine at Stanford. After being censored and deplatformed during COVID for his role in opposing harsh lockdowns, he was appointed Director of the National Institutes of Health by President Trump in 2025. a16z General Partners Erik Torenberg, Vineeta Agarwala, and Jorge Conde join Dr. Bhattacharya to discuss the administration's role in tackling the autism crisis, how to restore public trust in health authorities, how to make the NIH more dynamic and efficient, and how to streamline publishing and restore academic freedom.

Timecodes:
0:00 Introduction
1:30 Autism Initiative & New Research
2:45 Drug Discoveries: Leucovorin & Tylenol Caution
4:35 Preterm Birth & Broader Health Initiatives
5:45 The Replication Crisis in Science
8:50 Reforming NIH Funding & Scientific Culture
14:00 Allocation vs. Execution at NIH
17:30 Political & Scientific Decision-Making
22:30 Addressing Life Expectancy & Chronic Disease
27:00 Supporting Early Career Investigators
34:50 Academic Freedom & Open Science
37:30 Rebuilding Public Trust in Public Health
41:00 Communicating Science Amid Uncertainty
47:50 NIH Priorities: Nutrition, Chronic Disease, AI
50:00 The Future of AI in Science & Medicine
53:30 Advice for Rising Scientists
55:00 The Role and Limits of AI in Science

Resources:
Find Dr. Bhattacharya on X: https://x.com/DrJBhattacharya and https://x.com/NIHDirector_Jay
Find Erik on X: https://x.com/eriktorenberg
Find Jorge on X: https://x.com/JorgeCondeBio
Find Vineeta on X: https://x.com/vintweeta
Learn more about the NIH: https://www.nih.gov/

Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Podcast on Spotify
Listen to the a16z Podcast on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
The American people are not stupid.
In fact, they're quite smart.
And when we talk to them in ways where we show respect
for their intelligence with data,
allow people to disagree, but then have the evidence
right there in front of people.
I think people will respond with trust where the evidence actually leads.
We need kind of that Silicon Valley spirit.
We should stop punishing scientists who fail.
If they fail productively, let them publish in a journal to explain what they learned from it.
That Silicon Valley spirit, I think, needs to come to science a little bit more.
Autism funding, old drugs with new promise, and a reset on American science.
Today, we're joined by Dr. Jay Bhattacharya, Director of the NIH, with a16z Health and Bio general partners Vineeta Agarwala and Jorge Conde.
We cover the NIH's new $50 million autism initiative, leucovorin's potential, and fresh scrutiny of Tylenol in pregnancy.
We also dig into the replication crisis, bold funding models, rebuilding public trust, and
how AI can transform health care from drug discovery to clinical care. Let's get into it.
Well, Dr. Bhattacharya, thank you so much for coming on the podcast. We're stoked to have you.
I'm delighted to be here. So good to talk with you. I'm a little jealous I'm not in Menlo Park to be there with you for this. Exactly. And we're talking Monday, September 22nd.
There's big news coming out today. The Times piece on you just came out, and we'd love to hear what you reflect on that as well. But maybe you could share with us the big news and why it's so impactful. Sure.
So roughly six months ago when I first started this job,
Secretary Kennedy challenged me to help get answers for families with autistic kids.
I mean, the prevalence has been rising for decades, like one in 31 kids,
I think is the CDC's latest numbers on this.
That's an incredible number.
I mean, we don't have answers.
A lot of times families, they have these behavioral therapies that don't really work very well for a lot of their kids.
We don't know the cause, so we don't know how to prevent it.
And so I worked really hard to launch this new initiative, 50 million new dollars. 250 teams applied for large research grants, and we're going to announce today that 13 teams are going to be granted these grants for this autism data science initiative.
There are two other things that are going to get announced today that sort of came out of this process of working with Mehmet Oz at the Centers for Medicare and Medicaid Services and Marty Makary and Secretary Kennedy. Marty Makary is the FDA commissioner.
One is a drug, a very common old drug called leucovorin. It's basically folinic acid, but it serves almost like a way to deliver folate to the brain for kids who have trouble processing folate. Folate is, you know, something you get in vegetables, but some kids have difficulty processing it. It turns out that a lot of doctors have experience using folinic acid, leucovorin, in treating autistic kids who have this folate deficiency in their brains, and it actually works. In 20% of the kids, I think, it restores speech, and up to 60% of the kids get much better. Now, not every autistic kid is going to get better with this. You have to have this specific thing happening in your brain. But making it more widely available, I think, is a really good thing.
The other one is a sort of caution on Tylenol, acetaminophen. That is, you know, obviously a very common pain reliever. It's the only pain reliever and fever reducer recommended during pregnancy. But there's new evidence that's emerged, actually highlighted by a new study put out by the dean of the Harvard School of Public Health just recently, that suggests that use in pregnancy can correlate with subsequent autism diagnoses
later on for the kids. Now, I think there's a lot of controversy still over that in the scientific
literature, but it's enough, I think, to say to moms, look, just be careful. I mean, you know,
you don't use it all the time, use it only really when you really need it for high fevers,
just to think prudently about it. I don't want to panic anybody. It's not the kind of result
that should panic anybody. It's just a reminder that you should use any medicine carefully.
especially during pregnancy.
Will there be any revised guidelines around the use of acetaminophen in pregnancy
to help moms and parents
sort of make a decision or have a judgment call
on what they should do?
There will be, yeah.
So that's something that Dr. Makary, the FDA commissioner, is working on. And there'll also be, you know, changes in how CMS, Medicare and Medicaid, pays for leucovorin.
So it's a cross-agency collaboration for all of that.
So both the guidelines for parents, as well as sort of payment for drugs.
And then I've got the most boring part. I just get to launch vast, interesting science projects that hopefully will produce answers over the next few years.
And you're also paying attention to preterm birth,
and you've launched a really fascinating initiative there to, again, you know,
launch not only fascinating science projects, hopefully,
but also science projects which lead to clinical insight
into why that's happening to moms across America.
And so, you know, that's another really interesting adjacency,
if you will, to some of the announcements
that you just made today.
Yeah, I mean, the preterm birth thing is,
it's really interesting.
Like, we have worse outcomes in the United States than Europe does.
And, you know, we don't really have great answers for why.
I mean, there's lots of contributors to preterm birth.
Of course, prenatal care is so important during pregnancy.
Making sure you have access to that is really important.
So that's part of it, but it's not the whole answer.
And we need to get answers to families.
on all these things that concern us.
I've heard from so many people around the country asking me to answer these hard questions with excellent science, and that's my job, to make sure that we have rigorous, excellent science to address these questions.
It's hard because, you know, like, science is difficult, right?
You get an answer you think is right,
and then, you know, eggs were bad for me
when I was 18, it turns out.
But then, like, later, it turns out eggs are great for you.
And I was fearful of eating eggs forever, because the science in 1985 told me that eggs were bad for you.
And, of course, now eggs are good for you.
Just, you know, it's one of those things where, like,
science is difficult, but we have to hold ourselves to higher standards.
When we talk to people about science, it has to be rigorous and reproducible. That's something I've been focused on really sharply in my time as NIH director: making sure that we invest in replication. The standard for truth in science ought to be replication by independent teams.
You don't believe me just because I say something is true. Other people, independently looking at the same thing, should arrive at the same answer. Then we have more confidence that it's true, rather than just, you know, a high authority says so.
For the lay person listening to this, what's sort of been the cause for the loss, I'll say the loss of rigor in science, or the challenges around being able to replicate science? What is the underlying cause for this trend?
I mean, the underlying problem is just that science is hard. I mean, that's really the bottom line. And then the secondary cause is that there's
just a lot of it, a lot more than there was. Like once upon a time, you know, you go back to 1900 or
something, every scientist knew each other or basically knew almost every other scientist and everyone
was checking each other. That was just a normal course. Now you have vast fields where it's very
specialized and it's hard to get people to check other people's work. There's no return for it.
If I spend my career checking other people's work, I'm not going to get a professorship at a fancy university. And science is hard, right? It's very easy for a scientist to latch onto an idea and say, this is right. I know this is right. But it may not be right. And so what
matters is other people looking at it find the same thing. But often when other people look at it,
they don't find the same thing, but we don't learn about that, right? Over the last two decades, there's been a replication crisis in science, with an increasing realization that the standards we hold ourselves to in science for determining truth are too low. Basically, you can get a paper published in a peer-reviewed journal. You know, I've had 180 of them myself, for which I apologize to everyone. But the thing is, the fact that it's published in a journal doesn't mean it's right.
It doesn't mean it's true. It's useful. That's my expression of my belief about that scientific
idea. I think most of my things are true. But every scientist thinks that everything they publish is
true. That's not enough. You have to have replication. You have to have other people checking each other's
work because it's so easy to convince yourself in science that you're right. And so,
It's really those two things.
The volume of science means that people are so specialized and there's no returns,
there's no incentives to check each other's work as much as we ought to.
And then, because science is so hard, the publication standards are not high enough, really.
That's really the reason for the replication crisis.
Well, first, I just want to comment.
There was a joke going around yesterday, sort of a quote tweet on Twitter in response to any potential reduction in autism, where someone said this is a direct attack on Silicon Valley startup productivity.
And what will this mean for startups?
But yeah, my goodness.
Exciting news there.
Say more just in terms of maybe we could zoom out.
You mentioned, you know, you took over six months ago. What are your reflections so far in terms of your activity and achievements to date, and then what do you hope to achieve going forward?
Well, I mean, we've done a lot.
So like one of the first things I did was we looked at, you know, the way we fund foreign collaborations, right? So it turns out that we fund foreign collaborations, but it's very difficult for the NIH to check that the money is going to the right thing. We couldn't audit, for example, the Wuhan lab. The NIH had sent money to the Wuhan lab, but we couldn't audit it.
So we put in a new system. I think foreign collaborations are really important for science, but we need to do it in a way where I can look the American people in the eye and say,
look, we're actually tracking the money,
we're checking to make sure things are going
in the right place, doing the right thing.
I put in a new system.
The frustrating thing about that is we put that in, and all of a sudden I'm seeing reports that I want to end all foreign collaborations. I mean, it couldn't be further from the truth. I just want to make sure that we do it in a way that's auditable. I can go in front of Congress and say, yeah, I know we sent money to a lab, and here are the lab notebooks that they worked on, which we couldn't do under the old system.
We've changed the way that we evaluate grants.
So at the NIH, we have a great way of evaluating grants called the Center for Scientific Review. It's the world's best peer review organization. It turns out a bunch of the 27 institutes had their own parallel review systems. So we centralized that, made it so that everything is reviewed the same way.
The other thing, actually, this is related to Silicon Valley.
It's something we're working on right now.
Okay, you guys are going to tell me that I don't know anything about Silicon Valley, because I didn't work for a16z, but I'll just tell you, my view of this is, the reason why you all are so successful is that if you, as a16z, have a portfolio of 50 projects, and you fund all 50 of them, and 49 of them fail, and the 50th is, you know, Google or something, you view that portfolio as a tremendous success. And the people at those 49 companies,
they're going to get a second chance,
especially if their failure was productive.
You don't punish failure that much.
You're willing to have a portfolio where you think big, right?
I think that spirit needs to come to science.
I did publish work before the pandemic asking,
essentially, is the NIH willing to think big?
And too often the answer in recent decades has been no.
If you look back in the 1980s and 1990s,
The NIH was funding ideas that were like zero, one, two years old. The typical scientific project funded by the NIH in the early 2000s was like six, seven, eight years old. We just became too scared of trying new ideas out.
We need kind of that Silicon Valley spirit, and we should stop punishing scientists who fail. If they fail productively, let them publish in a journal to explain what they learned from it. That Silicon Valley spirit, I think, needs to come to science a little bit more.
And do you think that the mechanism for reviewing the research grants, say, at the NIH, became overly cautious, or did the scientists themselves become overly cautious?
Well, I mean, those are closely linked.
It's a peer review organization.
I mean, I sat on those scientific review panels for a decade, two decades, and I watched what happens, right?
So suppose a new idea comes in front of me, right?
Well, I'm really good at methods, especially methods related to the old idea, and this new idea is now competing with my idea, right? And so I look at the new idea and I go, there's no way it can work. And I say that to the peer review panel,
and everyone says, yeah, there's no way it can work.
So easy to do, right?
I'm sure you face the temptation too at a16z,
so you get a thing, you look at the thing,
you're like, this guy's obviously a genius,
but he has an idea that couldn't possibly work.
I mean, that temptation is very strong.
And too often in science, in scientific funding, we say, yeah, we don't want to try it out.
And yeah, most new ideas are going to fail.
That's just normal.
You expect that to happen.
But if you don't leave room for people to try them out,
you're never going to make big advances.
I think that's what happened to the culture of biomedical science
the last few decades.
It's too focused on incremental progress, not enough on enormous advances.
Now, of course, there have been big improvements,
big scientific discoveries, right?
I don't want to downplay that.
That's true.
But we spend a lot of money, and a whole bunch of economists who have looked at this, and the science of science folks who have looked at this, say that we are getting too few advances per dollar that we spend.
That's because the culture is too conservative.
Yeah, it's interesting.
It's sort of why many great venture partnerships, ourselves included, are not consensus-driven.
You can't drive, you can't require unanimous consent
to fund a big, bold idea,
because someone's going to say, hey, no way that's going to work.
And someone has to be willing to take that bet.
I'm curious, and correct me if this is not how you think about the NIH structurally, but it occurs to me, as an outside observer of the organization, you know, again, for listeners, the NIH is our country's and the world's largest federal funder of biomedical research, across 27 different institutes, with over 35 billion in funding. It's a massive organization funding, essentially, across multiple disease categories, the most important research that we believe will advance our health as a population.
And it seems to me that there are two big categories
in which the NIH has to get decision-making right.
One is allocation and sort of how you decide
how much should go to immunology versus infectious disease
versus maternal health and, you know, versus autism
and behavioral health.
And, you know, there's kind of this fundamental values-based,
you know, population input-based, you know,
citizenship input-based, whatever it might be.
There's some risk-return-based methods that you have to do to decide
how do you allocate funds across these different areas.
And then there's an execution challenge.
Okay, once you've decided you're going to allocate this quantum of capital
in research funding to this area, how do you pick the right investigators,
how do you keep them honest, how do you drive data return,
how do you measure productivity on an ongoing basis,
how do you incentivize ongoing risk-taking
in a multiple-year project?
How do you get your agreements straight with an international research partner?
These are sort of all in the bucket of execution.
Is that a reasonable way for people to think about the NIH?
Like, you've got to nail allocation and then nail execution, and you're in it to reform both?
Okay, first of all, you're very well trained as an economist.
That's very clear to me,
because that's exactly how an economist would talk about this, right? But no, I mean, that's exactly right.
So first, there's a decision about which diseases we should focus on. That's not only a scientific problem. It's also a political problem. And it ought to be a political problem, for the reasons you just articulated, right? The things that we focus on should reflect the real needs of the people that fund us. If we're just doing science for science's sake
and we're just wandering around
without producing answers or improvements for people's lives,
well, the question is, why should they fund us?
And it's actually Congress that decides this.
Congress and the president together, in the budget, decide where the money goes, how much to infectious diseases, how much to heart disease, how much to cancer, how much to pediatric conditions.
There's a whole allocation that reflects
the political will of the people as well as the scientific
opportunities, right?
So it's a mix of the two that decides that.
And I think it's so completely appropriate that that be the case.
So let me push back on that.
Why? Do people know enough about science and our ability to make progress in important disease areas?
They may not even know the names of the diseases.
They may not know anything about the true prevalence.
We've enabled them to be productive in careers entirely outside biomedical science, expressly so that the experts can weigh in on where science is going to improve their health on an ongoing basis. And so you may say, oh, that's, you know, that's an overly
paternalistic view. Or you could say, well, that's what people decided they wanted. They didn't
want to have to worry about exactly what research needed to be done. They decided to offload that
cognitive load to you at the NIH. And they may not want a voice in that. You know, at least that's
kind of one argument I'd make in response to the idea
that allocation should be political.
How would you respond to that?
Well, I think, so let me get back
to the second half of your characterization,
because that's where the scientific sort of expertise comes in, right?
So within each area, it is absolutely vital
that scientists have their say, right?
That they can say, well, this idea for addressing Alzheimer's
is promising, this idea for addressing,
you know, autism is promising.
And then scientists can check themselves and say, well, is this actually promising, right?
So it's, and the NIH's role is to mediate that,
take that scientific input and make portfolio decisions
that will actually advance health in those areas, right?
That's, that's basically my job.
And so, so that, I think the scientists have their say.
But in the question of where should the money go, right?
So let me just go back to the HIV epidemic,
just to give us some sense of what can go wrong, right?
So the early rise in HIV was not met with a sufficient response by the NIH.
We're talking very early in the early 80s of money going to research on this vital topic.
And it was the political movement of HIV patients coming together saying,
look, it's really important that we address this, that led to the NIH actually taking that
real public health threat seriously, right?
If you leave it to scientists themselves, or I should say ourselves, I'll say two things. One is we don't reflect the will of the people. We're not good at mediating between different population groups. And it's not right, right? There's no philosopher king that can decide, well, this much money should go to HIV, this much money should go to cancer, this much money should go to pediatric conditions. It's the will of the people. And so really, I don't see any other way to do it.
You know, like Winston Churchill said, democracy is the worst system of government on earth except for all the others. I mean, we don't have a philosopher king. Leaving it to
scientists is not an answer. Like the people really should have some say in where, where that
allocation happens, I think. The other part of it is that, frankly, and this is related to what we just talked about before, scientists, if you ask us, we're not actually good at predicting the future in terms of, will this investment result in productivity? I mean, actually, frankly, neither is Silicon Valley,
right? You can't say, you can't promise me that every single project you pick is going to work
for your portfolio. You cannot, right? And so scientists play a vital role in deciding what
scientific opportunities there are, letting us know and then we can make decisions. But the portfolio
decision, that's not exactly a scientific decision. That's an economic, microeconomic, small-e kind of decision. And then the macroeconomic decision is which areas we should go into. It really shouldn't just be scientists that decide that. Of course, there's an interplay, right? So if there's
a scientific opportunity in a particular area, I want to be able to reflect back to Congress and say,
well, this is a great area. You should fund this right now because, you know, there's huge
advances in cell-based therapy for sickle cell disease. We definitely need to fund that.
right? And then Congress can move based on that scientific opportunity.
But that's an exchange between, you know, the people and the scientists,
not just a one-way street.
I like that. That's insightful.
It's awesome.
Yeah. I mean, it seems like a more interdisciplinary approach to allocation and execution.
That includes an understanding of how much we're spending, how much it costs on a go-forward basis,
what the economic impacts might be of getting the research right.
No, thanks for sharing that view. I think it's important for people to understand that you're trying to bring more voices to the allocation question and more rigor to the execution question, and both are not as straightforward as they may seem.
Yeah, this is a weirdly complicated job.
I thought being a professor was complicated, but this turns out this is a little more complicated than that.
Are there certain areas you feel are under-allocated or over-allocated, if you could, you know, just wave a wand?
Oh, we're all, every area is under-allocated, of course.
I mean, I think the thing about the under-allocation is, I don't know if it's a question of money. But if you look at the trends in public health over the last decade and a half, the United States has seen no increase in life expectancy. We have an enormous overhang of patients, people with heart disease. Actually, for cancer, we've seen big improvements in life expectancy, or sort of life expectancy after getting cancer, but huge increases in the incidence of cancer.
Type 1, type 2 diabetes, autism.
We've talked about a whole host of other chronic conditions.
I mean, and we've made big advances in other places, right?
So the question is like, how can we address the biggest health needs of the country?
Right.
It seems like we're really good at, and we should be good at, some conditions that have lower prevalence.
Like, we've made tremendous advances in HIV.
It's a huge cause for celebration, right?
We still have some way to go.
40,000 people got HIV last year.
We can end the HIV epidemic.
We should still invest in that.
But at the same time, what about all the people that die of heart attacks? What about all the people who have, you know, type 2 diabetes and are suffering from blindness because they have bleeding in their eyes, in their retinas? What about the people with kidney failure, where the prevalence is rising?
We have to look at the practical health needs of the country
where people are suffering
and make sure that we address our science
to those things.
I don't think we've done that as much as we ought to.
And just look at the macroeconomics.
You don't have any increase in life expectancy
in this country over a decade.
Science isn't the only reason why.
The NIH contributes to that, but it's not the only answer. Obviously, it's very complicated. But the NIH ought to contribute to that. The science we do should translate over to better health for people. And so really those areas where people are suffering the most, that's where I would say we're under-allocated.
I love this idea of comparing or analogizing the NIH to almost like a portfolio manager, right? Similar to what we do as venture capitalists in Silicon Valley. And if I really wanted to abuse your analogy, which I will if you'll allow me for a second, you know, the people are almost like your limited partners, the ones that tell you these are the theses and the fund areas we want you to go after. And you all are the investors, the venture capital investors, that have to do the portfolio management and picking and all of that.
You said a few minutes ago that a lot of the grants in the NIH
are going to older ideas.
And there's lots of data that shows they're also going to, you know, more established, older scientists at the very highly regarded institutions. The equivalent of that would be if we only funded 30-year executives that came out of, you know, large, established companies and ignored the young up-and-comers coming right out of university or dropping out of school or whatever.
You've talked a little bit about that question. Like, how do you reform the process, the execution, to use Vineeta's phrasing, on selecting for the innovation that, if you will, bubbles up from the bottom?
And it's a hard question, actually. That's something that's at the top of my mind. And actually, what you just described is exactly what we've been doing in science for a long time.
So the data out of the NIH is that in the 1980s, if you were 35, you actually had a chance
of getting a large NIH grant.
Like that was the median age of the first large NIH grant, you were 35 years old.
Now you're in your mid-40s before you get your first one. That's what we tell young investigators.
Which, by the way, super young.
To be clear.
Mid-40s, super young.
I just want to be clear about that.
I mean, I'm 57, so, like, I don't know.
I mean, they all seem like babies to me.
But the thing is, you have, just like as in Silicon Valley,
the new ideas come from younger investigators, right?
So I did a study a few years back where I looked at this, and it turns out that the age of the ideas in your published work
ages by one year for every year of chronological age.
So my ideas get one year older every year that I age.
The very best scientists fight like crazy
to stop that. So every two years of chronological age for Nobel Prize winners, their ideas and
their papers age by a year. If you want the newest ideas, you have to let the young people have
a try. And we're just bad at that. Like, young people, we fund them, and then they drop out and they leave for other places. That wasn't true back in the 70s and 80s. The culture of biomedicine says you have to have one, two, three postdocs before you have a shot at an assistant professor job.
And as a result, the ideas that we support are just, they're just older.
I mean, not necessarily a bad thing.
I mean, of course, you should in the portfolio have some support for older ideas that are still promising.
But if you don't also fund some of the newer ideas, the portfolio is going to produce fewer
advances as a whole than if you do, right?
You have to have a, you have to diversify in that sense.
To solve that problem is hard.
So the NIH has been trying to solve this now for two decades, and we've made no progress. So first, let me just give you some sense of where we've gone backwards. You know, we used to have a system of peer review where, in order to be a peer reviewer, you had to have a large grant.
Now, think about that.
I got a large grant.
I'm in my 50s, and I see an idea that challenges my 30 years of work.
and I'm a reviewer on a panel.
It's really hard to open your mind and say,
well, I might have been wrong.
That system, now that got changed,
so that we don't longer have that rule.
But like, it's the mindset.
You have to allow it, so what I've done is I've asked the institute directors, I've given them the authority, essentially, to expand what they can do in terms of the portfolio. I'm not going to judge them, just like within Silicon Valley, on whether every single grant succeeds. I'm going to judge them on the portfolio as a whole. Does it translate into better health for the people, for the disease or diseases that they're trying to address?
Does it result in big advances in biological knowledge, right?
I'm going to assess the portfolio as a whole.
And then the other thing is that does it match the strategic vision of the institute?
The institutes, they have these fantastic strategic plans. You go look at them and your eyes will get big with the science that they're proposing. And yet, what they actually end up funding based on their peer review panels is often 10 great proposals on one part of the strategic plan and nothing on another part of the strategic plan. And so I'm going to encourage them to be able to pick the portfolio so that it matches the strategic plan.
I'm going to reward them for rewarding and empowering early career investigators more, right?
So I'm going to build incentives into the decision-making by the institute directors so that they have incentives to solve these longstanding problems.
We have to solve the new investigator problem.
And I'm going to start to evaluate long-established investigators, because I do believe that they still play a pretty fundamental role, on how well they advance the careers of the early career investigators that work with them, right? So if they're good at that kind of mentorship and career advancement, I'm going to reward them in their grants. I'm going to start evaluating the grants for that too.
Because the grant portfolio has to be sustainable in the long run at producing new ideas. If we don't have the early-career investigators getting the support they need, we're going to start to stagnate.
I love to hear the interest in advancing early career investigators, but we can't have that conversation without talking about the universities they tend to come from. And so, you know, I was a product of NIH MSTP funding. I did my MD-PhD with the generous support of the NIH, and my peers and colleagues in my class, and those coming up in the decades behind, get trained on those grants today. How can you work with the administration to ensure continuity for the training grants that, you know, the NIH does believe are going to fuel the pipeline of early career investigators who, as you say, are perhaps most likely to bring change, big ideas, and take big swings?
Yeah, I mean, as you know, you were biophysics, right? So we have a range of ways that we support early career investigators. There are these awards for pre-docs, pre-docs meaning undergrads. And that's really important. We want to make sure that the very talented undergraduates who are interested in biomedicine and biomedical research have the support to do this.
There's also support for postdocs, right? For people who have gotten their PhD and then do postdocs. It's going to be hard, but we have to structure things so that the range of investments we make actually translates over to people wanting to stay in biomedicine.
But I think the main problem isn't the support for the early stage. I think our portfolio is pretty good on that. We could do better, but it's pretty good.
The problem is, after you've had this training in biomedicine, do you have support to make the next leap into an assistant professor job? And too often, it's too hard to do that. You can't get the support you need to do that. There are these K awards that we have, and it's really difficult to get them.
I think we have to do better at that.
And we have to reward universities that are better at that.
There's problems all across the system.
But I think that missing link is really, you know, you finish your MD and your PhD, and then can you get that assistant professor job? Or are you going to be asked to do 17 different postdocs before you have a chance? Right now, that system is set up to make it difficult.
You mentioned earlier that we're not making advancements in life expectancy.
Why are we lagging?
Why are some European countries doing better?
And what are the highest leverage points you think to get back to improving there?
Well, I think the key thing is, you know, this replication question about our science that we talked about earlier is very important. We have to solve that. That will help a lot. And then this portfolio thing. I think both of those things actually will address the sort of scientific rigor problem
and the sort of conservatism problem.
As far as like addressing life expectancy,
that really needs to be,
it's in a sense, not just a scientific problem.
Like we have to essentially get a message from the people
that they want scientists to address those problems.
Like that's just as we talked about earlier,
the political nature of that kind of allocation decision.
But you know, that's exactly what the MAHA movement represents. The MAHA movement is basically a cry for help from the American people saying,
look, all these chronic disease problems, all these problems with our kids and we're sick,
we're doing much worse than folks in Europe in terms of our health.
And that essentially is a call for the NIH to reform itself to address those problems.
And to me, it's a tremendous opportunity.
And, you know, this is why I agreed to take this job.
I mean, I was perfectly happy being a professor. But it's, you know, a once-in-a-lifetime opportunity to make the NIH really work for the American people.
You know, and I think having that political movement behind us
is really important for that.
Last week, you announced some really interesting initiatives
around academic freedom,
and many folks know your voice kind of reached the national stage
in part because of your ardent desire
to see academic freedom respected, protected across the country.
And, you know, it sounds like you're looking for ways to improve publishing fundamentally
so that people feel freedom at all levels, including early career investigators,
to share their view on science that they think might be interesting.
And we need to figure out, to your point earlier, how to make the point that anything published
is not necessarily fact, but it's one opinion backed by one set of data and one set of analysis
and one set of perspectives.
And you'd like more of those to flourish in the public arena. Say more about the role that you want NIH to play in protecting academic freedom.
Of course. At the NIH, I found out that a lot of the internal investigators at the NIH, in order to publish their work, had to seek permission from their supervisors. I changed that. No more permission. If you're an NIH researcher and you have a scientific paper, you don't have to get permission from me. People are going to publish research that I don't agree with. That's wonderful. They should be able to do that.
Also, the universities, I think, need to be absolutely committed to academic freedom for excellent science to happen.
And, you know, like, there's been a lot of, like, angst over the administration's actions with the universities over the last few months regarding holding them to high standards regarding, you know, anti-Semitism and so on.
But there's also been a message that we really do want academic freedom at the universities.
Scientists need to really be able to say what they think and explore where they will, or else those are not good environments for research.
As far as journals, that's a complicated question. But the problem right now with the scientific journals is that there's essentially a duopoly. A very few number of for-profit companies control a very large number of journals, and they charge up to $10,000 per article for science that they didn't do, that the American people paid for. They actually had a sort of policy where, if a regular person wanted to go find a scientific article, there was a paywall where they'd pay like $50, $100. We got rid of that paywall for NIH-funded research.
There's still a lot to do in this area.
We need more academic freedom.
We need more openness and scientific publishing.
And I'm working on policies to do that.
So Jay, you know, one of the key questions for the American public: they're looking for better outcomes, better health. One of the big avenues that this country uses, and it's really been a gold standard in the past, is having this extraordinary public health infrastructure. But I think what's also true is that over the course of the last several years, there's a lot of mistrust now in terms of public health. How do you sort of rebuild that trust for the public? Because obviously, you know, if there's no trust,
the message can only be so effective.
And so how do we build those bridges back
to the extent that you think they need rebuilding?
You know, I think the problem with public health
and the lack of trust in it, you have to point to the pandemic.
You have no choice, right?
If you look at, you think back to the pandemic,
and you remember the plexiglass that was everywhere.
There's still, every time there's see a plexiglass,
it fills me with rage, but that's another story.
And there was no science mark behind that, right?
There was like the, you wear a mask
when you walk into a restaurant,
and you take it off when you sit down.
You know, again, no science behind it.
A whole host of like things,
and especially, they were really damaging things,
like closing schools, where, again, the science was so weak that it,
and now kids are like years behind in their education as a result,
and they'll be paying the Christ for that for years.
And so a lot of the American people have lost trust in public health
for reasons I can completely understand.
And so the question then is, what can we do about it?
And to me, the key thing is there's two things that have to happen, like two very broad things.
Like one, I think we have to restore gold standard science.
Like, that presidential EO on gold standard science is so important because it articulates things that we thought all of science already knew or was committed to.
Like replication is really important.
Unbiased peer review, humility in how we talk about the limitations of our scientific findings. There's a whole host of things where you read it and go, wow, I thought science already did that.
And so if we actually do that, I think that's a major part of this.
The second thing is, just like we talked about earlier about the role of the people and politics in deciding what areas of science to fund, and then scientists deciding what priorities within those areas to fund, and the portfolio analysis, we have to convey to people that we are their partners in scientific investigation and in public health.
Public health, folks in public health are servants of the people.
And too often during the pandemic, it came across like we were sitting above people, right, telling you what to do, telling you, if you don't take this vaccine, you can't go to work, you can't get a job. I mean, it was heartbreaking to watch, because I believe very fundamentally that when science works as a partner with people, and has this almost servant attitude toward people, you can do a lot of good. You can do a lot of good. But I think really that kind of humility and the return to sort of gold standard science, that's the way to solve the problem of trust. It's going to take a long time, though. Because, I mean, I've talked to so many people around the country, and we're nowhere near solving that public trust problem.
And I think it's an especially challenging thing as you look forward. I'd love to hear your thoughts on, you know, how do you convey recommendations and guidance in the face of uncertainty and incomplete information, right?
Because going back to your point, like in an ideal world,
you're always resting on top of gold standard science,
you know, but, you know, a lot of times science,
you know, there's a lot of unknowns in the science.
Science is hard, going back to what you were saying earlier.
And so how do you communicate to, you know, a population, a nervous populace, a sense of a recommendation or even guidance in a world where you yourself don't have complete information?
I think you just have to be honest, right? So if I'm asked a question about, I mean, God forbid there's another pandemic during my watch, and I'm asked, okay, how should we manage it, is it right to wear masks or something, right? And if there's no good scientific evidence, I'm going to just say that.
You know, I was a medical student once. I have an MD, so I can tell you this from firsthand experience. The first two years of school you do a bunch of class work, and the third year you finally get to see patients, right? So you walk into a patient room, and you're wearing a white coat, and you know nothing, or very little. I mean, you're filled with knowledge about biochemistry; you can write chemical equations until your fingers get tired. But what you can't do is understand what a patient really needs. And so you sit down in front of the patient, they tell you their stories, it's wonderful, they put their trust in you. And you are tempted to tell them things, to answer the questions that they're asking you.
But you don't know the answer.
You just don't because you're a third-year-med student.
Of course you don't know the answer.
And because you're wearing the white coat, and because someone is looking at you, wanting the answer, putting their trust in you, you can feel this urge to say things you don't know. You start freelancing. And that's just a terrible mistake, right? As a third-year med student, you learn that you should just say, I don't know. I'm going to look it up. I'll find the answer for you.
I'll get back to you.
I'll consult with people who know more than I do.
You have to be humble.
And especially in the face of new things,
you know, new pandemic or genuine scientific uncertainty,
we in public health have to be humble and say,
look, we're not sure, but here's how we're working
to try to get an answer.
And we have to convey that uncertainty.
And we can't blame the public.
I've gone around and talked to lots of folks in public health and science, and they're like, well, what we have to do is teach the public more about science and make sure they understand that science isn't always perfect and science moves. You know, eggs are great one day and eggs are terrible another day, and that's because we have new science.
To me, that's like blaming the public.
It's not that the public doesn't understand that science is hard. They understand it fundamentally. This is not a complicated thing. Everyone within the public knows that science is hard.
The problem is that scientists conveyed certainty
about things that they had no business conveying certainty about
and then changed people's lives for the worse as a result of it during the pandemic.
I acknowledge that the pandemic was a particular challenge
with respect to both communication
and certainty in the midst of uncertainty.
But how do we acknowledge that challenge and not lose trust in some of the bedrocks of public health advancement that we've made over the last several decades, whether that's newborn vaccinations? You know, HHS held a listening tour and an advisory update on hep B vaccination in babies, and it's great that we're looking at all of the data holistically there. But in some of those cases, some folks would argue there is substantially less uncertainty in the context of something like hep B than there was in the wake of a new pandemic, with a new virus, with no data, with completely new infections.
So how do we, and please don't feel you need to respond to that specific vaccine example, but how do we not make it so that, even when you do have relative certainty and you come out and say, hey, this is not perfect, but we're pretty darn sure this is a good idea, how do you then make it so that people don't say, well, you know, last time you said you didn't know, so I don't know?
Right.
So I think I don't know is a good answer when you don't know.
When you have a little more evidence, a lot more evidence, like just take the MMR vaccine.
Like, I mean, if you want to prevent measles, take the MMR vaccine.
I mean, it's the best way to prevent measles.
And measles can be a deadly disease.
Like I vaccinated my kids with the MMR.
I was really happy I did.
Me too.
And I think that that kind of certainty, you know, it's science, right? So nothing is known for certain; tomorrow someone might come along and overturn, you know, Newtonian physics, and all of a sudden you're talking about relativity or something, right?
You'll always leave open that possibility.
But some things we do know with like much more certainty.
I'm not saying that we should all have false humility.
I think we should have humility for the things we just should actually have humility about.
Right. But at the same time, when we have an area of more scientific certainty, we have to leave open room for academic freedom so that people who think differently can have their say. We don't cancel them. We reason with them. And we say, look, you say X, Y, Z, but look at all this other evidence. MMR is a good example. Look at the other evidence that shows you differently. And then we just have a public discussion. It's okay. I mean, it's okay to have that contradiction.
And then I think what will come across is the actual excellent science, replicated. Maybe I'm naive, but I don't think so. I think that wins scientific debates.
And you can look, there's evidence for this, right?
So the uptake of the MMR vaccine in this country: like 95% of American parents vaccinate their kids with MMR. And I think it's like 13% of American parents vaccinate their kids with the COVID vaccine. I think that reflects the scientific evidence regarding the relative merits
of those vaccines. The American people are not stupid. In fact, they're quite smart. And when we
talk to them in ways where we show respect for their intelligence with data, allow people to
disagree, but then have the evidence right there in front of people, I think people will respond
with trust where the evidence actually leads. I mean, I just, yeah, maybe that's just a matter
of faith for me, but I don't see any other way forward.
You mentioned that the three priorities you have at the NIH are nutrition, chronic disease, and integrating AI. Maybe you can flesh out a little bit on the last two, what you see as most promising in terms of reducing the disease burden,
and then also in terms of integrating AI.
I've seen some fantastic new ideas regarding Alzheimer's disease, for instance. A colleague of mine at Stanford has these fantastic papers he published using an old shingles vaccine called Zostavax. He found, in excellent observational studies, that if you had Zostavax, it reduces the likelihood of developing cognitive decline from Alzheimer's disease by up to 20%, 30%. I mean, it's pretty substantial for a pretty innocuous, safe vaccine that's no longer used, actually, because it didn't work for shingles.
I mean, imagine
if you had a very simple, cheap way
to prevent 30% of Alzheimer's cases
or delay Alzheimer's for years,
there's all these, like, huge
advances I've seen that, you know, just need a little bit of scientific love.
I think we just need to focus on those, make our portfolios focused on those, be willing
to take risks in terms of like on things that look like their new ideas, and we're going
to make a lot of progress.
And AI, by the way, I think is going to play a tremendous role in that. Everyone knows about protein folding and AlphaFold. That has done an amazing job in turbocharging biomedical drug development, because now you don't need to sit there and wait; you can just do your computations, figure out how the protein folds, what the target sites will look like, and then ask which of these drug products are more likely to actually work, without having to do very expensive lab work. You still do the lab work, but you focus the lab work in more promising ways. And in the way that we deliver medicine,
you can have AIs help radiologists do a better job at making sure they catch things, that they catch everything, even simple things.
Like, you know, you go to your doctor, the doctor sits there looking at the computer the entire time
rather than you because they're like filling out their electronic health records.
Have an AI assistant listen to the conversation, fill out the form for the doctor.
So they're just checking afterwards, taking them a couple of minutes, and they're spending all
their attention on you, right?
All of this needs research, by the way. I mean, is this going to help patients?
We have to ask those questions.
But to me, that's a tremendous promise.
Like, those simple things can transform biomedical research and how patients are treated.
So that's why AI is so important to me as a potential tool.
It does need research, though. I mean, we can't have an AI hallucinating on us and then treating patients based on hallucinations.
But, you know, that's a matter of research to fix those kind of problems.
We heard that HHS rolled out, agency-wide, an enterprise-secure version of ChatGPT, which seems like a terrific achievement from the perspective of internal HHS and even NIH operations, right, to be able to look up internally how new an idea is. Simple queries and data fluidity of that kind seem important. What's the future? Is an AI going to write the institutes' strategic roadmaps, and an AI submit a grant, and an AI review panel review the grant? And, you know, where are we going to play a role as scientists?
I mean, okay, so to that last question, the answer is no.
Yeah, I mean, I think AIs are really good at summarizing existing knowledge. The training data you give it helps it. It's fantastic at that kind of thing. But really developing brand new ideas that challenge existing paradigms? I don't know what your experience with AIs is, but they're not quite as good at that.
We just put a new policy in place where I'm limiting the number of new applications you can have, to, you know, six per cycle or something. We had people writing 60 applications that were very clearly AI generated. What that does is overwhelm the system with noise.
Yeah.
Yeah, so I think AI is really important. But as I said, we have to do research to understand how it can be used to help people. And I think scientists still have a tremendously important role. The new AI system rollout at HHS is just exciting. We've actually been working on a new system also specific to the NIH, again, in ways that protect patient privacy and all that. And it rolled out across the NIH so that people can interact with it in ways that help on NIH-specific tasks as well.
So, I mean, I think that's all very exciting.
But it's an augmentation of capacity rather than a substitution of capacity.
It'll make people way more productive.
It'll help us address some of the key problems.
But scientists are still going to, I mean, we still have work to do as scientists.
We do.
If I could just end on one last question: if you had one message for the rising star scientist contemplating a career in science, where they can bring the best of their abilities to making science better, smarter, faster.
You know, a scientist embarking on a new PhD
in a brave new field.
A scientist thinking about starting a new company
to advance the work that they're doing.
A scientist at the NIH running a lab.
What is your one message to the individual scientist who's out there, you know, hoping to make the biggest impact they can?
I mean, science is incredible. It has almost limitless capacity to advance human well-being. And it's the individual scientist who believes in their idea and keeps knocking on the door, even when the door is closed, over and over again until it opens.
That's who really makes a big difference in this world.
I would say, please, stay in science. Keep knocking on that door and change the world with it, because that's the only way scientists can do that. I love this story of Max Perutz. I don't know if you've heard of him. He was a University of Cambridge researcher in, I think, the 50s, and he had this idea that he could figure out the structure of myoglobin. It sounds like a very geeky kind of thing, but back then there was no protein folding field, really. He was a student, and all his professors kept telling him, pick an easier problem, Max, this is crazy, why are you spending all your time on this, you're never going to finish. And for a decade at the University of Cambridge he wandered around. Everyone knew he was a genius, but he got nowhere. He just kept working at it until finally he figured it out, and it's transformed a whole host of things in biomedicine, and, you know, he eventually won the Nobel Prize. It's the kind of thing where I ask myself, do we have a scientific infrastructure today that would allow a Max Perutz to do what he did back then? And I would love to make that happen through the power of the NIH,
to allow the experts of the world, the new ones,
who are now sitting there with great ideas,
to be able to try them out and change the world with them.
Fantastic.
So maybe on that note, just looking to the future,
if we end where we started, where you talked about
the NIH's highest ambition is to improve the health of the American people, whether that's measured in life expectancy or the rate of chronic disease that Americans suffer from,
if you had to guess where we're going to see
the biggest and best gains,
is that going to come from, you know,
how we manage patients, so the management of disease,
you know, new molecules for treating disease
or modifications in terms of how we all live?
Yes.
Yes, yes, yes.
Yes to all of the above.
I mean, you know, I, I am a big believer in portfolios when I have uncertainty.
So I don't know how to answer your question because I see promising advances in all three of those
topics. And I think we have to invest in all of the above in order to see where the most promising
things go. Like, who would have predicted the GLP-1s? You know, we actually saw a reduction in average body weight in this country for the first time in, you know, decades last year, because of a Gila monster molecule that somehow turns out to work, you know, if you just do the right biology.
There was a scientist knocking on some kind of door
to make that happen, right?
Yeah, I mean, that's the only sad thing about science.
It's hard to predict where the best things are going to happen.
So you have to, like, have a portfolio.
But all of those areas, to me, sound, look like they're very promising.
And as I've gone around the country, talk to people,
I'm excited about all of it.
So I can't wait to see what we produce.
Do either of you have a prediction to that question, or is it also... Well, this is the debate we have every week in terms of where we want to invest. You know, our answer is yes, yes, yes too. Correct, all of the above. Well, it's a great place to close. Dr. Bhattacharya, thanks so much for coming on the podcast. Thank you, thank you so much. Thanks for having me. Have a great day.
Thanks for listening to the a16z Podcast. If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com/a16z. We've got more great conversations coming your way. See you next time. As a reminder, the content here is for informational purposes only. It should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.