The Peter Attia Drive - #143 - John Ioannidis, M.D., D.Sc.: Why most biomedical research is flawed, and how to improve it
Episode Date: January 4, 2021. John Ioannidis is a physician, scientist, writer, and a Stanford University professor who studies scientific research itself, a process known as meta-research. In this episode, John discusses his staggering finding that the majority of published research is actually incorrect. Using nutritional epidemiology as the poster child for irreproducible findings, John describes at length the factors that play into these false positive results and offers numerous insights into how science can course correct.  We discuss: John’s background, and the synergy of mathematics, science, and medicine (2:40); Why most published research findings are false (10:00); The bending of data to reach ‘statistical significance,’ and how bias impacts results (19:30); The problem of power: How over- and under-powered studies lead to false positives (26:00); Contrasting nutritional epidemiology with genetics research (31:00); How to improve nutritional epidemiology and get more answers on efficacy (38:45); How pre-existing beliefs impact science (52:30); The antidote to questionable research practices infected with bias and bad incentive structures (1:03:45); The different roles of public, private, and philanthropic sectors in funding high-risk research that asks the important questions (1:12:00); Case studies demonstrating the challenge of epidemiology and how even the best studies can have major flaws (1:21:30); Results of John’s study looking at the seroprevalence of SARS-CoV-2, and the resulting vitriol revealing the challenge of doing science in a hyper-politicized environment (1:31:00); John’s excitement about the future (1:47:45); and More. Learn more: https://peterattiamd.com/ Show notes page for this episode: https://peterattiamd.com/JohnIoannidis Subscribe to receive exclusive subscriber-only content: https://peterattiamd.com/subscribe/ Sign up to receive Peter's email newsletter: https://peterattiamd.com/newsletter/ Connect with Peter on Facebook | Twitter | Instagram.
Transcript
Hey everyone, welcome to the Drive Podcast.
I'm your host, Peter Attia.
This podcast, my website, and my weekly newsletter all focus on the goal of translating
the science of longevity into something accessible for everyone.
Our goal is to provide the best content in health and wellness, full stop, and we've assembled a great team of analysts to make this happen.
If you enjoy this podcast, we've created a membership program that brings you far more
in-depth content if you want to take your knowledge of this space to the next level.
At the end of this episode, I'll explain what those benefits are, or if you want to learn
more now, head over to peterattiamd dot com forward slash subscribe.
Now without further delay, here's today's episode.
My guest this week is John Ioannidis. John is by all estimates a polymath. He's a physician
scientist, a writer, and a Stanford University professor. He has extensive training in mathematics,
medicine, and epidemiology. He's just generally
one of the smartest people I've ever met, and I've had the luxury of knowing John for probably
about nine years, and anytime I get to interact with him, whether it's over a meal or more formally
through various research collaborations, it's just always an incredible pleasure.
John studies scientific research itself, a process known as meta-research, primarily in clinical medicine,
but also somewhat in the social sciences. He's one of the world's foremost experts on the credibility of medical research.
He's the co-director of the meta-research
innovation center at Stanford. In this episode we talk about a lot of things. We talk about his journey from Greece to the United States, but we talk a lot about some
of his seminal papers.
You're going to see me reference a number of papers beginning with, I think, one of the
most famous papers he's written, although by citation, it turns out to not be the most
famous.
There are actually papers that even exceed it. It's an amazing paper in which he describes,
through a mathematical model, why most published research in the biomedical field is incorrect, which is obviously out of the gate a staggering statement.
We go on to discuss a number of his other seminal papers and then really kind of tackle some of the hard issues in medical research, including my favorite topic, nutritional epidemiology. As always, John is candid and full of insight. So I'm just going to leave it
at that and hope that you trust me and make time to listen to this one. So please, without further
delay, enjoy my discussion with John Ioannidis. John, this is really exciting for me to be as close
to sitting down with you as I can
be during this time.
I've been wanting to interview you for as long as I've had a podcast.
Obviously, we've known each other for probably close to 10 years now.
Of course, you first came on my radar in 2005 with a paper that we're going to spend a
lot of time discussing today.
Before we get to that, how would you describe
yourself to people because you have such a unique background?
I think that it's very difficult to know yourself and I've been struggling on that front
for a long time. So I'm trying to be a scientist. I think that this is not an easy job. It means
that you need to reinvent yourself all the time.
You need to search for new frontiers, for new questions, for new ways to correct errors
and to correct your previous self in some way.
So under that denominator of a scientist in the works would probably be a good place to put my whereabouts.
Now your background is also in mathematics and I think that's part of my
appreciation for you is the rigor with which you bring mathematics to the study
of science and in particular we're going to discuss some of your work and how
you use mathematical models as tools to create frameworks around this. Now, you were born in the US, but grew up in Greece, is that correct?
Indeed, I was born in New York, in New York City, but I grew up in Athens.
And I always loved mathematics.
I think that mathematics are the foundation of so many things
and they can really transform our approach to questions that without mathematics, it would
be very difficult to make much progress.
How did you navigate your studies?
Because you were obviously very prolific in mathematics.
If I recall reading somewhere in one of your bios, you even won the highest honor that a
graduating college student could win in mathematics in Greece at the time. How did you decide to also pursue something in the biological sciences in parallel as opposed
to staying purely in the natural or philosophical sciences of mathematics?
Medicine had the attraction of being a profession where you can save lives.
I think that intellectual curiosity is very interesting, but the ability to make a difference for human beings
and to save lives, to improve their quality of life
seemed to be at least in my eyes as a young person,
something that was worthwhile pursuing.
I had a very hard time to choose what pieces of mathematics
and science and medicine I could combine in what I wanted to do.
I think that I have tried my hands in very different things. I have probably failed in all of them.
But in some ways, I saw that these were complementary.
So I believe that medicine is amazing in terms of its possibilities to help people.
You need, however, very rigorous science.
You need very rigorous scientific method
to be applied if you want to get reliable evidence. Then you also need quantitative approaches.
You need quantitative tools to be able to do that. So, I think that none of them
can be dispensed with without really losing the whole and losing the opportunity to do something
that really matters eventually.
And your parents were physicians as well, is that correct?
Indeed. Both of them were physicians, actually physician-scientists.
So I did have an early exposure to an environment where I could hear their stories
of clinical exposure. At the same time, I could see them working on their research. I remember
these big tables with scientific papers spread all over them. And with what were the early
versions of computerized research. I think that I had the chance to be exposed to software
and computers in an early phase, because my father and my parents were interested in doing research.
So you finished medical school and your postgraduate training also in Greece or did you do part
of that in the United States?
I finished medical school in Greece in Athens in the National University of Athens and
then I went to Harvard for residency training and then Tufts-New England Medical Center for training
in infectious diseases. At the same time, I was also doing joint training in healthcare research.
So those were very interesting and fascinating years, learning from great people.
And who were some of the people that you think back as having kind of shaped your thinking
during those years?
In the medical school, I had some great teachers.
One of them was the professor of epidemiology,
Dimitrios Trichopoulos, who was also chair of epidemiology
at Harvard.
And he had some really great statisticians in his team.
So from the first year at medical school,
I went to meet them and tried to use every textbook
that they could give me and every resource
that I could play with.
In my residency training, I was very fortunate to meet great physician scientists, especially
in infectious diseases, actually.
Bob Moellering was the physician-in-chief and professor at Harvard, professor of medical
research as well, and he was really an amazing
personality in terms of his clinical acumen and his approach to patients.
Also his very temperate mode of dealing with very serious problems and dissecting through
the evidence in trying to make decisions and, of course, starting with making a diagnosis. At the end of my residency training,
I had the pleasure to meet the late Tom Chalmers, along with Joe Lau. They were at Tufts at
that time, and my meeting with them was really a revelation because they were the ones who
were advancing the frontiers of evidence-based medicine.
Evidence-based medicine had just been coined as a term, pretty much by the McMaster team,
David Sackett and Gordon Guyatt, and Tom Chalmers was the first person in the US to design
a randomized trial.
He was also one of the first to perform meta-analyses that had a major impact in medical science.
At the time that I met them, they had just published an influential paper on cumulative
meta-analyses in the New England Journal of Medicine.
And it was a revelation for me because somehow what they were proposing was mixing mathematics,
rigorous methods, evidence, and medicine in one coherent whole, which seemed to be a
forlorn hope for me until then.
I was just seeing lots of clinical situations where there was very little evidence to guide
us.
There was no data or very poor data and a lot of expert based opinion guiding everything
that was being done.
And so, just to place this temporally, I mean,
Chalmers died in the mid-90s.
So this was, what, the early 90s when you were fortunate enough
to meet him?
Yes, I met him in 1992, and he died about five years later.
I was grateful that I had the opportunity to work with him
and also with Joseph Lau, who was at that time
at Tufts University Medical Center,
where I eventually went to do my fellowship training. Because there are so many things I want to talk
about, John, and we don't have the luxury of spending 12 hours together. I'm going to fast
forward about a decade. I'm going to fast forward to 2005 to that paper that I alluded to at the
outset, which was the first time your work came onto my radar,
which is not to say anything other than that's just the first time I became aware of
sort of the gravity of your thinking.
Can you talk a little bit about that? It was in PLOS ONE. Was that correct?
Yes, it was in PLOS Medicine.
PLOS Medicine. Okay, so this is basically an open source journal that I think another Stanford professor actually was one of the
guys behind this journal, if I recall. Pat Brown was one of the forces behind PLOS,
correct? Well, it was a transformative move at that time, trying to create a new standard
for medical journals. I think that now this has become very widespread in a way.
But I think back then it was something new, something that was a new frontier in a sense.
So you wrote a paper that on the surface seems, I mean, highly provocative, right?
The title of the paper is something to the effect of why most published clinical research is untrue. I mean, that's the gist of it. Can you walk
people through the methodology of this? It's a theoretical paper, but explain to
people who maybe don't have the understanding of mathematics that you do, how
you were able to come to such a stark conclusion, which I want to point out one thing.
I'll tell you why I had an easy time believing the results of your paper: my mentor had
shared with me a statistic when I was, you know, sort of doing my postdoctoral training,
which I found hard to believe.
But when I realized it was true, it became the bookend to your claim.
And that was at the time, something
to the tune of 70% of published papers were never cited again outside of auto citation,
meaning outside of the author citing his or her own work. And if you think about that for
a moment, if 70% of work can't even be cited by one additional person down the line, that tells you it's either irrelevant
or wrong. So again, that's not the same thing that you said, but it at least primed me to kind of
listen to the message you were talking about. So talk a little bit about that paper.
That paper, as you say, it's a mathematical model that is trying to match empirical data that had
accumulated over time,
both in my work and also in the work of many other scientists who were interested to understand
the validity of different pieces of research that was being produced.
I think that many of us had been disillusioned that when evidence-based medicine started,
we thought that now we have some tool
to be able to get very reliable evidence
for decision-making, and very quickly,
we realized that biases and results
that could not be replicated and results
that were overturned and results that were unreliable
were the vast majority.
It was not something uncommon.
It was the rule that we had either unreliable evidence,
or actually perhaps even more commonly, no evidence. So it's an effort, that paper, to put a
mathematical construct together that would try to explain what is going on and would also try
to predict in some ways what might happen if some of the circumstances would change in
terms of how we do research.
So the model makes for a framework that is trying to calculate what is the chance that
if you come up with a eureka, a statistically significant result where you claim: I have found
something, I have found some effect that is not null. There is some treatment effect here, there is something non-zero that I'm talking about.
What are the chances that this is indeed a non null effect that we're not seeing just
a red herring? And in order to calculate the chances that this is not just a red herring,
you need to take into account what are your
prior chances that you might be finding something in the field that you're working in.
There are some fields that probably have a higher chance of making discoveries compared
to others.
If you're unlucky enough to work in a field where there's nothing to be discovered, you may be wasting
your time and publishing one million papers, but
you know, there's nothing to be discovered. So it's going to be one million papers that
end up with nothing. Conversely, there may be other fields that may be more rich in
discovery, both the field and the tools, the methods and the designs of the studies that
we throw at trying to answer these questions can be informative. The second component is in
what environment of power are we operating, meaning is the study large enough to
be able to detect non-null effects of some size of interest, or maybe there are
true effects out there, but our studies are very small and therefore they're
not able to detect these effects.
And in my experience, until that time, I had seen, again and again, lots of very small studies floating around
with results that were very questionable, that could not be matched with other efforts; especially when we were doing larger studies, most of them seemed to go away. And power is important not only because if
you don't have enough power, you cannot detect things that exist. What is equally bad
or probably worse is that if you operate in an environment of low power, when you do
get something detected, it is likely to be false. And here comes the other factor that is compounding the situation, bias,
which means that you have some results that for whatever reason,
bias makes them seem statistically significant while they should not be.
And bias could take zillions of forms. I think that
throughout my career I feel like I'm struggling with bias with my own biases
and with biases that I see in in the literature. But bias means that you could
have conscious, unconscious, or subconscious reasons why a result that should
have been null somehow is transformed into a significant signal.
It could be publication bias, it could be selective reporting bias, it could be
multiple types of confounding bias, it could be information bias, it could be many, many other
things that turn null results into seemingly significant results while they are not.
Then you have to take into account the universe of the scientific workforce. We're not talking about a single scientist running all the studies.
It's not just a single scientist or a single team.
We have currently about 35 million people who have co-authored at least one scientific paper. We have many, many
scientists who might be trying to attack the same scientific question. And each one of them is
contributing to that evidence. However, there's an interplay of all these biases with all of these
scientists. So if you take into account that multi-scientist environment, multi-effort
environment, you need to account for that in your calculations. Because if you, for example, say,
what are the chances that at least one of these scientists will find some significant signal,
this is a very different situation compared to just having one person taking a shot and just
taking a single shot.
So, this is pretty much what the model tried to take into account putting these factors together
and then trying to see what you get under realistic circumstances for these factors.
These factors would vary from one field to another. They would be different, for example, if we're talking about exploratory research
with observational data versus small randomized trials,
versus very large phase three or even mega trials.
It would be different if we're talking about massive testing,
like what we do in genetics versus highly focused testing
of just one highly specified,
pre-registered hypothesis that is being attacked. Running the calculations, the model shows that
in most circumstances where biomedical research, but I would say also most other fields of research,
are operating, if you get a nominally statistically significant signal with a traditional p-value of slightly less than 0.05,
then the chances that you have a red herring, that this is not true, that it is a false positive,
are higher than 50%.
There is a huge gradient, and in some cases it may be much lower. The
false positive rate may be much, much lower, but in others it would be much higher. But
in most circumstances, the chances that you got it wrong are pretty high. They're very
high.
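To make the framework John just described concrete, here is a minimal Python sketch of that kind of calculation. It is an illustration in the spirit of the 2005 paper rather than a reproduction of it: the function takes the pre-study probability that an effect is real, the study's power, the significance threshold alpha, and a bias term (the share of analyses that end up reported as significant even though they should not be), and returns the probability that a claimed "significant" finding is actually true. All the numbers plugged in below are hypothetical.

```python
def ppv(prior_true, power, alpha, bias=0.0):
    """Post-study probability that a 'significant' claim reflects a real effect.

    prior_true: pre-study probability that the tested effect is real
    power:      probability of declaring significance when the effect is real
    alpha:      probability of declaring significance when the effect is null
    bias:       fraction of analyses that would not have been significant
                but get reported as significant anyway (selective reporting,
                flexible analysis choices, and so on)
    """
    # Bias converts a slice of the non-significant results into "significant" ones.
    p_sig_if_true = power + bias * (1.0 - power)
    p_sig_if_null = alpha + bias * (1.0 - alpha)
    return (p_sig_if_true * prior_true) / (
        p_sig_if_true * prior_true + p_sig_if_null * (1.0 - prior_true)
    )

# Hypothetical scenarios:
# a well-powered confirmatory trial of a plausible hypothesis, no bias...
print(round(ppv(prior_true=0.5, power=0.80, alpha=0.05), 2))               # ~0.94
# ...versus an underpowered exploratory study of a long-shot hypothesis with modest bias.
print(round(ppv(prior_true=0.05, power=0.20, alpha=0.05, bias=0.10), 2))   # ~0.09

# With many teams independently probing the same null question, the chance that
# at least one of them reports a nominally significant (false) result grows fast.
n_teams = 20
print(round(1 - (1 - 0.05) ** n_teams, 2))                                 # ~0.64
```

The last line captures the multiple-teams point: when many groups independently chase a null question at alpha = 0.05, the chance that at least one of them lands a spurious "discovery" becomes large.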
That's actually a very elegant description of that paper. I want to go back and unpack
a few things for people who maybe don't have some of the acumen down. So let's go a bit deeper into what a P
value is. Everybody hears about it. And everybody hears the term statistically significant. So maybe
explain what a P value is, explain statistical significance, and explain why it's not necessarily the same
as clinical significance and why we shouldn't confuse them.
I think that there's major misconceptions around significance.
What we care about in medicine is clinical significance, meaning if I do something or if I don't do
something, would that make a difference to my patient or it could be in public health
to the community, to cohorts of people,
to healthy people who want to have preventive measures, and so forth.
Do I make a difference?
Does it matter?
Is it big enough that it's worthwhile?
The cost, the potential harms, the implementation effort, perhaps other alternatives that I have,
how does that compare to these alternatives? Maybe
they're better or cheaper or easier to implement or have fewer harms. So this is
really what we want to answer, but unfortunately most of the time we are stuck
with trying to answer a very plain frequentist-approach question, which boils down to statistical significance. Typically,
this boils down to a p-value threshold of 0.05 for most scientific fields. Over the years,
there's many scientific fields that have diversified, and they have asked for more stringent
levels of statistical significance. A couple of years ago, along with many other people,
we suggested that fields that have not diversified and they do not adjust their levels of
statistical significance to more stringency by default, they should be using a
more stringent threshold, for example, use a threshold of 0.005 instead of 0.05.
However, most scientists are trained with statistics-lite to use some statistical test that gives
you some statistic that eventually translates to a p-value.
And what that p-value means, it needs to be interpreted as: what are the chances that, if
the null hypothesis were true and I had an infinite number of studies like this one, I would get a result that would be
as extreme or more extreme. And even that is not a complete definition, because it does not take
into account bias, because maybe you would get a result that is as extreme, but largely because of bias. For example, there are many, many fields where you can easily get p-values that are astronomical.
They're not just less than 0.005,
but they may be 10 to the minus 100 with some of the large databases that we have.
We can easily get to astronomically small p-values, but this doesn't mean much.
It could just mean that you have bias, and this is why you get all these astronomically
low p-values, but they don't really mean that the chance of getting such an extreme
result is extremely implausible and that there is something there.
It just means that certainly there is bias, no more than that.
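As a small illustration of the distinction John is drawing (simulated data with made-up parameters, not any real study), the snippet below compares two very large groups whose true difference is clinically negligible. With big-data sample sizes the p-value still comes out astronomically small, which is exactly why a tiny p-value by itself says nothing about whether a result matters.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500_000  # half a million people per group, big-data territory

# Two groups whose true means differ by a trivial 0.02 standard deviations.
group_a = rng.normal(loc=0.00, scale=1.0, size=n)
group_b = rng.normal(loc=0.02, scale=1.0, size=n)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
effect_in_sd = (group_b.mean() - group_a.mean()) / pooled_sd

print(f"p-value: {p_value:.1e}")              # astronomically small, far below 0.005
print(f"effect size: {effect_in_sd:.3f} SD")  # ~0.02 SD, clinically negligible
```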
There has been what I call the statistics wars
over the last several decades.
People have tried to diminish the emphasis
on statistical significance.
I think I have been in the camp of those who have argued
that we should diminish emphasis or at least try
to improve the understanding of what that means
for people who use and interpret these p-values.
In the last few years, this has become probably more aggressive. Many great methodologists have suggested that we should
completely abandon statistical significance, that we should just ban the term, never use it again,
and just focus on effect sizes, focus on how much uncertainty we have about effect sizes, focus perhaps on
Bayesian interpretation of research.
I have been a little bit reluctant about adopting the ban statistical significance approach, because
I'm afraid that we have all these millions of scientists who are probably not very properly trained to understand
statistical significance, but they're completely not trained at all to understand anything
else that would replace it.
So in some ways, for some types of designs, though, I would argue that if you pre-specify
and if you are very careful in registering your hypothesis and you have a protocol
that you deposit, for example, what is happening or should be happening with randomized trials,
and you have worked through this that it makes sense that your hypothesis is clinically important,
that the effect size that you're trying to pick is clinically meaningful, it is clinically
significant, then I would argue that
statistical significance and using a p-value threshold, whatever that is,
depending on how you design the study, makes perfect sense. It's actually a very
transparent way of having some rules of the game that then you try to see
whether you manage to succeed or not. So if you remove these rules of the game after the fact in these situations,
it may make things worse because you will have a situation where people will just
get some results and then they will be completely open to interpret them as they wish.
And we see that they interpret them as they wish even now without any rules in the game
or at least by removing
those rules post hoc. But if we could have some rules for some types of research, I think
that this is useful. For other types of research, I'm willing to promote better ways of interpreting
results, but this is not going to happen overnight. We have to take for granted that most scientists are not really well trained in statistics
and they will misuse and misinterpret and misapply statistics, unfortunately. So we need to find ways
that we will minimize the harm, we will minimize the error and maximize in medicine the clinically
significant pieces and in other sciences, the true components of the research enterprise.
Now, at the other side of that statistical field is power, right? So we go from alpha to beta, and
you alluded to it earlier. I want to come back to it because you actually said something very interesting.
I think most people who dabble enough in the literature understand that if you underpower
a study, so if you have two few samples, two few subjects, whatever the case might be,
and you fail to reach statistical significance, it's not clear that you failed to reach statistical
significance because you should be rejecting the null hypothesis or because you didn't have
a large enough sample size. So that's always the fear, right? The fear is that you get a false
negative. But you said something else that I thought was very interesting. If I heard you
correctly, which was, no, you actually run the risk of a false
positive as well, if you're underpowered. Can you say more about that? Indeed. In an underpowered
environment, you run the risk of having higher rates of false positives if you take the performance
of the field at large. If you take hundreds and thousands of studies that are done in an underpowered
environment, even if you manage to detect the real signals, you know, signals that do exist,
if these signals are detected in an underpowered environment, their estimates will be exaggerated
compared to what the true magnitude is. And in many situations, both in medicine and in other sciences,
it's not important so much to find whether there's
some signal at all, which is what a null hypothesis is
trying to work around, but how big is the signal?
I mean, if a treatment has a minuscule benefit,
then I wouldn't care about it.
I wouldn't use it because the cost and the harms and everything
on the other side of the balance is not making it worth it. So most scientific fields
have been operating in underpowered environments and there's many reasons for that
and it varies a little bit from one field to another but there's some common denominators.
Number one, we have a very large number of scientists. Scientists
are competitive, there's very limited resources for science. It means that each one of us
can get a very thin slice of resources. We need to prove that we can get significant results
so as to continue to be funded and to be able to advance in our career. This means that we are stuck
in a situation where we need to promote
seemingly statistically significant results even if they're not. We need to do very small studies
with these limited resources and then do even more small studies rather than aim to do a more
definitive large study. There's even a disincentive towards refuting results that are not correct
because that means
that you feel that you're back to square zero, you cannot make a claim for continuing your
funding. All the incentives, at least until recently, have been aligned towards performing
small studies in very selectively reported circumstances and with flexibility in the way
that results are analyzed and presented.
And I think that this leads to very high rates of results that are either completely false positives
or they may be pointing to some real signal, but the estimate of the magnitude of the signal is grossly exaggerated.
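A quick simulation (again with purely illustrative parameters) shows both sides of the power problem John describes: when many small studies probe a modest but real effect, only a minority reach significance, and the ones that do overstate the true effect size, which is the exaggeration he is referring to.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.2   # a modest real benefit, in standard-deviation units
n_per_arm = 30      # a typical small, underpowered study
n_studies = 5_000   # many teams running the same kind of study

winning_estimates = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        winning_estimates.append(treated.mean() - control.mean())

print(f"share of studies reaching p < 0.05: {len(winning_estimates) / n_studies:.2f}")
print(f"average estimate among the 'winners': {np.mean(winning_estimates):.2f}")
# The true effect is 0.2, but the studies that cross the significance
# threshold report something substantially larger, on average.
```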
In recent years, we have started seeing the opposite phenomenon as well.
We start seeing some fields that have overpowered studies.
Instead of just having very small studies, in some fields, we have big data, which means
that you can access records, medical records from electronic health records on millions of
people, or you may have genetic information
that is highly granular and gives you tons of information.
And big data are creating an opposite problem.
It means that you're overpowered
and you can get statistically significant results
that have no clinical meaning that have no meaning really.
And even with a tiny little bit of bias,
you may get all these signals just because bias is there.
So you're just measuring bias.
You're just getting a big scale assessment
of the distribution of bias in your data sets.
That's becoming more of a problem in some specific fields.
I think that the growth of this type of problem will be faster compared to the growth of the
problem of small underpowered studies.
I think in most fields, it's a more common problem though, until now, that we have very small
studies rather than very large studies.
Now, you've commented on GWAS studies.
Do you want to talk a little bit about that here? It sort of fits into this a little bit,
doesn't it? Genetics was something that I was very interested in from my
early years of doing research because it was a new frontier for quantitative
approaches. Lots of very interesting methodology was being developed in genetics.
Many of the questions of evidence that had been stagnating in other biomedical fields, they had a new opportunity to give us some
new insights with much larger scale evidence in genetics compared to what we had in the
past when we were trying to measure things one at a time. Genetics especially was a
fire hose of evidence in some way.
So I found it very exciting and for many years I did a lot of genetic research, I still
do some.
And very early on we realized through genetics that the approach that we had been following
in most traditional epidemiology, like looking at one risk factor at a time and trying to
see whether it is associated with some disease outcome,
was not getting very far.
We could see in the genetic epidemiology of candidate genes
that most of these papers that were looking at one or a few genes at a time,
at their association with some outcome,
just trying to cross the threshold of statistical significance
and then claiming success, were just false positives.
We saw that pretty early, it took some time for people to be convinced, but then they
were convinced, and genetics took some steps to remedy this.
They decided to do very large studies to start with.
They also decided to look at the entire genome, look at all the factors rather than one at
a time.
And they also decided to join forces, not have each scientist try to publish their results
alone, but share everything, have a common protocol, put all the data together to maximize
power, to maximize standardization, to maximize transparency also, and then report the cumulative
results from the combined data from all the
teams that had contributed to these large meta-analyses of primary data.
So this is a recipe that I think should be followed by many other fields, especially
fields that work with observational data in epidemiology, and some fields have started
moving that direction as well, but
not necessarily as much as the revolution that happened in genetics and population genomics.
So I was going to actually ask you exactly that question. I was going to save it for a bit later,
but let's do it now. Why did the field of genetics basically have the ability to self-police and
undergo this cultural shift in a way that let's just put
every card on the table here.
Nutritional epidemiology has not.
Nutritional epidemiology, which we're going to spend a lot of time talking about, is the
antithesis of that.
And it continues to propagate subpar information, which is probably the kindest thing I could
say about it.
So what is it culturally about these two fields that has produced such stark
contrasts in the response to a crisis? There's multiple factors. One reason is that genetics
managed to have better tools for measurement compared to nutritional epidemiology. We managed
to decode the human genome, so we developed platforms that could measure the entire
variability more or less in the human genome with pretty high accuracy. If you have genotyping
platforms that have less than 0.01% error rate, this means that you have very accurate measurement.
As opposed to nutrition, where the traditional tools have been questionnaires or survey tools that have very high biases, very high recall bias, very low accuracy, and they do not really capture the diversity of nutritional factors with equal granularity as we can capture the genetics in their totality of the human genome. The second reason was that I believe in genetics, there were no strong priors, no strong beliefs,
no strong opinions, no strong experts who would fight with their lives for one gene variant
versus another.
We had some, you know, I think that some of us probably might have published about one
gene and then we would fiercely defend it,
because obviously if you publish a paper,
you don't want to be proven wrong.
I think it's very human.
But it was nothing compared to the scale that you see
in nutrition research where you have a very strong
expert opinion base, people who have created careers.
And they feel very strongly that this type
of diet is saving lives and it should have policy implications, it should change the world,
it should change our guidelines, it should change everything.
Many of these beliefs are interspersed with religious or cultural or, you know, non-scientific
beliefs in shaping what we think is good diet.
And as you realize, none of that really exists for genetics.
Polymorphism RS249214 is unlikely to be endorsed by any religious, cultural, political,
or dietary proponents.
It's a very different beast, and I think that you can be more neutral
with genetics research because of this objectivity as opposed to nutrition, where there's a lot
of heavy beliefs interspersed. Methodologically also, genetics advanced faster. Nutrition has
been stuck mostly in the era of using p-values of 0.05 thresholds and using those
thresholds in mostly post-hoc research, research that is not registered, that is selectively presented,
people are trained in a way that they need to play with the data, they need to torture the data,
they need to try to unearth interesting associations.
And in some cases, of course, this becomes extreme, like what we have seen in the Cornell case,
which pretty much goes into the situation where you have fraud. I mean, it's not just poorly done
research, it's fraudulent research. But fraudulent research aside, even research that is not fraudulent in nutrition
has some standards of methods that are pretty suboptimal compared to what genetics has adopted. There,
they decided that we have such a huge multiplicity that we need to account for it. So, you know,
we're not going to claim success for a p value of 0.05, we will claim success for a p value of 10 to the minus 8.
And if it's not that low, then forget it.
It's not really a finding.
We need to get more data before we can say whether we have a finding or not.
Or they decided that they will share data, that they will create large coalitions of researchers
who would all share their data.
They would standardize their data.
They will standardize the analysis.
They would perform analysis in a very specific way.
And they would also sometimes, actually I think this is becoming the norm, have two or
three analyst teams analyze the same data and make sure that they get the same results.
These principles and these practices have started being used in fields like nutrition,
but to a much lesser extent.
And I think that gradually we will see more of that, but it's going to take some time.
So there's multiple scientific and behavioral and cultural and statistical and methodological
reasons why these fields have not progressed at the same pace of revolutionizing their research practices.
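For reference, the stringent genetics threshold John alludes to comes from exactly this multiplicity accounting: dividing the familywise error budget by the roughly one million independent common variants tested gives the conventional genome-wide significance level of about 5 x 10^-8. The sketch below walks through that arithmetic and, for contrast, the false-positive risk of running many nominal 0.05 tests with no correction; the specific counts are illustrative.

```python
# Bonferroni-style reasoning behind the genome-wide significance threshold.
family_wise_alpha = 0.05          # overall tolerance for any false positive
independent_tests = 1_000_000     # rough count of independent common variants

per_test_alpha = family_wise_alpha / independent_tests
print(f"per-test threshold: {per_test_alpha:.0e}")    # 5e-08

# By contrast, testing many exposures one at a time at p < 0.05 with no
# correction makes at least one spurious "finding" almost guaranteed.
tests_run = 100
p_any_false_positive = 1 - (1 - 0.05) ** tests_run
print(f"chance of at least one false positive across {tests_run} tests: "
      f"{p_any_false_positive:.2f}")                  # ~0.99
```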
Let's talk a little bit about Austin Bradford Hill. I'm guessing you didn't have a chance to
meet him. He died in 91. Would you have crossed paths with him at all? No, I didn't have that fortune
unfortunately. Do you think he would be rolling around in his grave right now if he saw what was
being employed based on the
criteria he set forth, which I also want to talk about your thoughts around the revision
of these. But even if you just take his 10 criteria, which we'll go through for a moment
as a bit of a background on epidemiology, do you think that what he had in mind is what we're
doing today? I think that Austin Bradford Hill was very thoughtful.
He was one of the fathers of epidemiology, and of course, he didn't have the measurement
tools and the capacity to run research at such large scale as we do today, but he was
spot on in coming up with good questions and asking the right questions, asking the important
question. So, his criteria, I don't think that he thought of them
as criteria and I don't think that he ever believed
that they should be applied as a hard rule to arbitrate
that we have found something that is causal
versus something that is not causal.
If you read through the paper, it's a classic, it's very
obvious that he has a very temperate approach, he has a very cautious approach. Basically,
he says none of these items is really bulletproof. I can always come up with an example where
it doesn't work. And I think that this is really telling what the great scientist he was,
because indeed in science
there's hardly anything that is bulletproof.
I don't know, the laws of gravity might be bulletproof, but even those as you realize
they're just a...
Only down to atomic levels, yeah.
Exactly.
In the theory of relativity, they would start failing.
He was very cautious.
I think that paper had tremendous impact. I think that we have not been
very cautious in moving forward with many of our observational associations and the claims that we
have made about them. I don't want to give any nihilistic perspective and I don't want to give,
let's say, a very negative perspective of epidemiology because we run the risk of
entering the other side where you will have some science deniers saying, so you're not certain and therefore we can
have more air pollution, you know, we can have more pesticides, we can have more. That's
clearly not the case.
I mean, we have very solid evidence for many observational associations.
There's not the slightest doubt that tobacco is killing right
and left. It's likely to kill one billion people over this century.
Let's go through tobacco as the poster child for Bradford Hills criteria. So I'm going
to rattle off the quote unquote criteria and just use tobacco as a way to explain it. So let's start with strength.
How does the association between tobacco and lung cancer
fit in terms of causality vis-a-vis this criterion of strength?
It is huge.
I mean, we do not see odds ratios of 10, 20, and 30,
as we see with tobacco with many types of cancer
and with other outcomes like cardiovascular
disease. And I think that that really stands out. And we see that again and again and again.
We see very strong signal. We see signals that are highly replicable. And that's the exception.
In most of what we do nowadays in epidemiology, we don't see odds ratios of
20. If I see an odds ratio of 20 in my calculations, I'm almost certain that I have something wrong.
I always go back and check and I find an error that I have done.
Yeah, you're probably off by a log if you're getting a 20 nowadays.
Probably two logs. I think in genetics, we are dealing actually
with odds ratios of 1.01 at a time.
So 1.01 may still be real.
And of course, you know, then the question is,
is it clinically relevant?
Yeah.
It's unlikely to be clinically relevant,
but you know, how much certainty can you get
even for its presence?
So the strength is huge.
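For readers who want the arithmetic behind odds ratios of that size, here is a toy 2x2 calculation with invented counts (not data from any actual smoking study), just to show how an odds ratio around 20 arises.

```python
# Hypothetical 2x2 table for an exposure (smoking) and an outcome (lung cancer).
cases_exposed      = 90    # smokers who developed cancer
noncases_exposed   = 910   # smokers who did not
cases_unexposed    = 5     # non-smokers who developed cancer
noncases_unexposed = 995   # non-smokers who did not

odds_exposed   = cases_exposed / noncases_exposed
odds_unexposed = cases_unexposed / noncases_unexposed
odds_ratio = odds_exposed / odds_unexposed

print(f"odds ratio: {odds_ratio:.1f}")  # ~19.7 with these invented counts
```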
You really essentially covered the next one, which is consistency.
If you look at all of the studies in the 1950s and the 1960s,
they were all really moving in the same direction.
And that's whether you looked at physicians who were smokers, non-physicians who were smokers,
whichever series of data you looked at, you basically saw this 10x multiplier in smoking. And I think on average, it worked
out to be about 14x. There was about a 14 times higher chance. I mean, that's a staggering
number. What about specificity? What does specificity refer to here?
I think that if you have such strength and such consistency, I would probably not worry that much about the rest of the criteria.
I think that criteria like specificity or analogy, they're far more soft in terms of what they would convey.
And also, we just don't know the nature of nature
in how it operates.
Many phenomena may be very specific,
but it doesn't have to be so.
We should not take it for granted
that we should see perfect specificity
or low specificity.
We see many situations where you have multi-dimensional
situations of causality, you have multiple factors
affecting some outcome or you have one factor affecting multiple outcomes. The density
of the webs of causality can be highly unpredictable. So I would not worry that much about other criteria if you have some, like strength and consistency, being so
impressive in these cases. Now in most cases we don't have that, right? We'll get
an odds ratio of 1.14, which of course is a 14% relative increase as opposed to,
you know, 14x. So in those situations when strength and consistency are out the window, which is
essentially true of everything in nutritional epidemiology, I can't really think of examples
in nutritional epi where you have strength and consistency. Well, major deficiencies, I think,
would belong to the category of very clear signals, major nutritional deficiencies, you know,
if you have like, yeah, yeah, very, very, for example.
Sure, sure.
Yeah.
You're, you know, thiamine deficiency where you're out to lunch.
But do you then look at, I mean, even biological gradient gets very difficult with the tools of
nutritional epi.
Do you start to look at experiment?
Plausibility has always struck me as a very dangerous one because, I don't know,
it just seems a bit of hand-waving. I mean, where do you then look?
I think the first question is whether you can get experimental evidence. To me, that's the priority,
and I realize that in some circumstances when you know that you're dealing with highly likely
harmful factors, you cannot really have equipoise to do randomized trials.
But for most situations in nutrition, to take nutrition as the example that we have been
discussing, you can do randomized trials.
And actually, we have done randomized trials.
It's not that we're not doing randomized trials.
We have done many thousands of randomized trials.
Most of them, unfortunately, are pretty small and underpowered.
And they suffer from all the problems that we discussed earlier
with underpowered studies that are selectively reported
with no pre-registration and with kind of
haphazardly done analysis and reporting.
I mean, they're not necessarily better than observational data
that suffer from the same problems.
But we also have a substantial number of very large randomized trials in nutrition.
We have over 200 large randomized trials.
Most of those focus on specific nutrients or supplementations.
Some are looking at diets like Mediterranean diet.
And with very few exceptions,
they do not really show the benefits
that were suspected or were proposed
in the observational data.
There are exceptions, but they're not that many.
That, to me, suggests that most likely,
the interpretation that most of the observational signals
are false positives or substantially exaggerated
is likely to be true.
We shouldn't be throwing out the baby with the bathwater.
There may be some that are worth pursuing and that may be true.
And I think that this means that we need to do more trials.
The counter argument would be that well in a randomized trial, especially a large one,
especially with long-term follow-up, people will not adhere to what you tell them to do
with their diet or nutrient intake or supplementation.
My response to this is that when it comes to getting evidence about what people should eat,
that lack of adherence is part of the game. It's part of real life. So if a specific diet
is in theory better than another, but people cannot adhere to it,
it's not really better, because people cannot use it.
So I get the answer to the question that I'm interested in, which is, is that something
that will make a difference?
Of course, it does not prove that biochemically, or in a perfect system, or in the perfect
human who is eating like a robot, that would not be helpful. But I don't care
about treating robots. I care about managing and helping real people.
I agree with that completely, John. I would throw in one wrench to that, which is, in a world
of so much ambiguity and misinformation, I do think it's important to separate efficacy
from effectiveness. What you're saying, of course, is that in the real world, only effectiveness matters.
So real-world scenarios with real-world people.
But I still think there is a time and a place for efficacy
where we do have to know what is the optimal treatment under perfect circumstances
if we want to have any chance at, for example, informing policy.
I'll give you an example.
Food stamps, should food stamps preferentially target the use of certain foods over others?
Well, again, if you had really efficacious data saying this type of food is worse than that type of food,
you could steer people towards healthier foods.
It could impact the way we subsidize certain foods.
In other words, it's really all about changing the food environment. So it is very hard to follow,
I think, any diet that is not the standard American diet. So any time you opt out of the
standard American diet, whether it be into a Mediterranean diet or a vegetarian diet or a low
carbohydrate diet, or basically anything that's not the crap
that we're surrounded by requires an enormous effort.
And I think a big part of that is because
there is still so much ambiguity around
what the optimal nutritional strategies are.
We haven't answered the efficacy question
because I think we keep trying to answer
the effectiveness question.
I agree. And I think I would not abandon efforts to get some insights on efficacy,
but we're not really getting these insights the way that we have been doing things. I think that
if you want to get answers on efficacy, there are options. One is through the experimental
approach. So you can still do randomized trials, but you can do them under very controlled, supervised circumstances,
where people are in a physiology or metabolism clinic,
where they're being followed very stringently
on what they eat and what happens to them.
And you can measure very carefully these biochemical
and physiological responses.
I think that a second approach, in the observational world or between
the observational and the randomized, is Mendelian randomization studies. With the advent of genetics,
we have lots of genetic instruments that may be used to create designs that are fairly
equivalent to a randomized design. So you can get some estimates that are not perfect because Mendelian randomization has its own assumptions
and sometimes these are violated.
But at least, I think that they go a step forward
in terms of the credibility of the signals that you get.
And then you have the pure observational evidence
which I don't want us to discard it completely.
I think that these are data which you need to use them.
We just need to interrupt them very cautiously.
If we use some of the machinery that we have learned to deploy in other fields, for example,
one approach is what I call the environment wide or exposure wide association testing,
instead of testing and reporting on one nutrient at a time.
You just run an analysis of all the nutrients
that you have collected information on,
and you can also do it for all the outcomes
that you have collected information on.
So that would be an exposure outcome-wide association study,
and then you report the results
taking into account the multiplicity
and also the correlation structure
between all these different exposures and outcomes.
You get a far more transparent and complete picture,
and if you get signals that seem to be recurrent and replicable across multiple datasets,
multiple cohorts where you run these analyses, you start having higher chances of these signals
reflecting some reality.
Still, it's not going to be perfect because of all the problems that we mentioned, but it
is better compared to what we do now where we just go after finding yet one more association
at a time and coming up with yet another paper that is likely to be of very low credibility.
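Here is a rough sketch of the exposure-wide approach John describes, on hypothetical data (all variable names and numbers are invented, and real analyses would also model the correlation structure among exposures, which this toy version ignores): test every collected nutrient against the outcome in one pass, then correct all the p-values together, for example with a Benjamini-Hochberg false discovery rate adjustment, instead of reporting one nominally significant nutrient at a time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_people, n_nutrients = 2_000, 50

# Hypothetical cohort: 50 nutrient intakes and one continuous health outcome.
nutrients = rng.normal(size=(n_people, n_nutrients))
outcome = rng.normal(size=n_people)   # pure noise here: no nutrient truly matters

# Test every exposure against the outcome in a single pass.
p_values = np.array([stats.pearsonr(nutrients[:, j], outcome)[1]
                     for j in range(n_nutrients)])

# Benjamini-Hochberg step-up procedure across all tests (FDR at 5%).
order = np.argsort(p_values)
ranks = np.arange(1, n_nutrients + 1)
passing = p_values[order] <= 0.05 * ranks / n_nutrients
n_discoveries = int(passing.nonzero()[0].max() + 1) if passing.any() else 0

print(f"nominal p < 0.05 'findings': {(p_values < 0.05).sum()}")  # a few, all spurious
print(f"findings surviving FDR correction: {n_discoveries}")      # typically zero
```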
John, your 2005 paper on the frequency with which we were going to come across
valid scientific publications is arguably the one that's... is that your most cited paper? No,
it's not the most highly cited. It's received, I think, close to 10,000 citations, but for example,
the PRISMA statement for meta-analyses has received far more.
Okay. Well, if that, I was gonna assume
that the 2005 paper was the most cited,
but I was gonna say the most entertaining
is your 2012 paper, which is the systematic cookbook review.
And again, this is just one of those things
where I remember the moment this paper came out
and just the absolute belly laughing that I had reading this.
And frankly, the sadness I had reading this because it is a sarcastic commentary in a way
on a problem that I think plagues this entire field.
So in this paper, you basically, I don't know if it was randomly, but you selected
basically 50 common ingredients from a cookbook, right? Was there any method behind how you did
this or was it purely random?
Well, we used the Boston cookbook that has been published since the 19th century, and we randomly chose ingredients by selecting
pages, and then within those the recipes and the ingredients that were in these recipes.
So, yes, it is 50 ingredients, a random choice thereof, and trying to map how many of those
have had published studies in the scientific literature in terms of their association with cancer
risk. And not surprisingly, almost all of them had some published studies associating them with
cancer risk. Even the exceptions were probably exceptions because of the way that we searched,
for example, we didn't find any study on vanilla, but there were studies on vanillin. So if we had
changed that, if we had screened with the names of the biochemical constituents
of these ingredients, probably, I guess,
all of them might have had some studies
associating them with cancer risk.
How was this paper received
by the nutritional epidemiology community?
I think it created lots of enemies and lots of friends.
And I'm grateful for the enemies
who some of them have pushed back with constructive comments.
I think that most people realize that we have a problem.
I think that even people who disagree with me on nutrition, I have great respect for them,
and I'm sure that they're well-intentioned. I think that at the bottom of their heart,
it's not that they want to do harm. They want to save lives, they want to improve nutrition, they want to improve
our world. So I think that it should be feasible to reach some synthesis of these different
approaches and these different trends. And I do see that even people who have used traditional
methods do start using some of the methods that we have proposed.
For example, these exposure-wide approaches
or trying to come up with large consortia
and meta-analysis of multiple cohorts
to strengthen the results and the standardization
of the results.
I worry a little bit about some of the transparency
of these efforts.
To give you one example, I have always argued
that if you can have large-scale meta-analysis of multiple teams, ideally all the teams joining forces
and publishing a common analysis with common standards, and ideally these would be the best
standards and the best statistical
tools thrown at the analysis.
This is much better than having fragmented publications.
So in some questions of nutrition, I have seen that happen, but here's what goes wrong.
The invitation goes to other investigators who have already found results that square with the beliefs of the inviting
investigator. So there may be 3000 teams out there and the invitation goes to the 100
teams that have claimed and believed that there is that association. And then these data are
cleaned, combined and analyzed in the way that has found the
significant association already, and you have a conclusion with an astronomically low
p-value that here it is.
We have concluded that our claim for a significant association is indeed true, and here's a large
meta-analysis. Now this is equally misleading or even more misleading than the single studies, because you have cherry-picked
studies based on what you already know to be the case.
And putting them together, you just magnify the cherry picking, you just solidify the cherry
picking.
So one has to be very cautious.
Magnitude and amount of evidence alone does not make things better. It actually
can make things worse. You need to ask what is the foundational construct of how
that evidence has been generated and identified and synthesized. And in some
cases it may be worse than the single small studies that are fragmented because
some of them may
not be affected by the same biases.
There also seem to be sort of institutional issues around this, right?
I mean, your alma mater has a very strong point of view on nutritional epidemiology, right?
I think this is unavoidable.
There are schools of thought in any scientific field and Harvard has an amazing team of nutritional
epidemiologists. I have
great respect for them, even though probably we do not agree on many issues. I think that
we should look beyond, let's say, the personal differences or opinion differences. I think
that my opinion has less weight than anyone else's in that regard.
If I want to be true to my standards, I'm not trying to promote something because it is
an opinion.
What I'm arguing is for better data, for better evidence, for better synthesis, and more
unbiased steps in generating the evidence, synthesizing the evidence, and interpreting it.
And I'm willing to see whatever result emerges by that process.
I'm not committed to any particular result.
I would be extremely happy if we do these steps and we come up with a conclusion that
oh, 99% of the nutritional associations that were proposed were actually correct.
I have absolutely no problem with that if we do it the right way.
What I'm worried about is resistance to doing it the right way.
I think your point earlier though about the difference between say how the genetics community
and the nutrition community were able to sort of approach this problem.
I don't think you can forget your second point, right, which is it's very difficult to overcome
prior beliefs. And when an individual has made an entire career of a set of beliefs, I think it
requires a very special person to be able to say, you know, that may have been incorrect. And that is independent
of what that belief is, by the way; that can be a belief that may be correct or may be fundamentally
incorrect. You know, it's funny. I recently saw this thing on Netflix. It was a kind of
documentary about this DB Cooper case. Do you remember, do you know, this DB Cooper case. Do you remember this DB Cooper case? It's the only
unsolved act of US aviation crime that's never been solved. So do you know this case, John,
the guy who hijacked an airplane and then jumped out the back in 1971?
Oh, I may have heard of it somewhere, but yeah, I don't recall it very well.
Well, it's interesting in that this guy hijacks an airplane with a bomb and requests that the plane be landed while they pick up $200,000 and four parachutes; he then gets the plane to take back off and jumps out the back with the money.
And he's never been found.
Nine years later, they found a little bit of the money; that's the only real clue.
And this documentary focused on four suspects, four of many suspects. And
you basically hear the story of each of the four suspects and each of the people who today
are making the case for why it was their uncle or their husband or whatever. And my wife
and I are watching this and we're thinking it's interesting. And at the end, I just said to her, I said, you know, this is a great sort of example
of human nature, which is I believe every one of those people truly believes that it was
their relative or friend or whomever who was DB Cooper.
And yet I think all of them are wrong.
I think each of those four suspects is categorically not the person, and yet for each of them, I am convinced by their sincerity. And I think that's the problem. I don't think science should be able to be that way. That's the problem I think I have with epidemiology: I guess I'm just not convinced it's a science in the way that we talk about science. Well, we have to be cautious, because we are human and scientists have beliefs, and I think that there's nothing wrong with having beliefs. I think the issue is, can we map these beliefs, can we be transparent, can we be as restrained as possible about how these beliefs are influencing the conduct of our research and the way that we interpret our findings?
It will never be perfect.
We are not perfect.
And I think that aiming to be perfect is not tenable.
But at a minimum, we should try to impose as many safeguards in the process as possible to minimize the chances that we will fool ourselves, you know, not fool others, but fool ourselves to start with, as Feynman would say.
This is not easy in fields that have a very deeply entrenched belief system, and I think nutrition
is one such. Again, there's no bad intention here. People are well-intentioned. They want to do good. I will open a parenthesis: of course, there are some bad intentions.
There's big food, there's industry who wants to promote their products and sell whatever
they produce.
And that's a different story.
And it is another huge confounder, both in nutrition and in other fields, where we have a very high penetrance of financial conflicts.
But I think that non-financial conflicts can also be important.
And at a minimum, we should try to be transparent about them,
try to communicate both to the external world, but also to our own selves,
what might be our non-financial conflicts and beliefs in starting to go down a specific path
of investigation and a specific interpretation of results. You referred to it very, very briefly earlier. What were the exact details of the case of Brian Wansink at Cornell? That was a lot of to-do, and it seemed that went one step further. That seemed like there was something quite deliberate going on.
Well, in that case, it was revealed based on the communication
of that professor with his students
that practically he was urging them to cut corners
and to torture the data until they would get
some nice looking result.
And practically he was packaging nice looking results as soon as they would
become available based on that data torturing process.
So the data torturing was the central force in generating these dozens of papers that
were creating a lot of interest, and probably they were very influential, many of them in
terms of decision making, but if you create results and significance
in that fashion, obviously the chances that these would be reproducible results are very,
very limited.
Yeah.
And, of course, he was a very prominent person in the field.
It makes you wonder how often this is going on with someone maybe less prominent, who is part of those 35 million people out there authoring the, what is it, roughly 100,000 papers a month that make their way onto PubMed?
I mean, it's an avalanche, right?
We have a huge production of scientific papers, as you say.
And if you look across all sciences, probably we're talking about easily five million papers added every year and the number is accelerating every single year.
Of course, very few of them are both valid and useful.
And it's very difficult to sort through all that mountain of published information.
I think that research practices are substandard in most scientific fields for most of the research being done.
There's a number of surveys that have been conducted
asking whether fraud is happening and whether
suboptimal research practices are being applied.
The results are different depending on whether you ask the person being interviewed
on whether they are doing this or whether people in their immediate environment are doing this.
So fraud, I think, is uncommon.
I don't think that fraud is a common thing in science.
It does happen now and then, but I don't think that it is a major threat in terms of the
frequency.
It is a threat in terms of the visibility that it gets and the damage that it does to the reputation of science as an enterprise, but it's not common.
What is extremely common is questionable research practices or harmful research practices,
which means cutting corners in different ways. And depending on how exactly you define that,
the percentage of people who might be cutting corners at some point is extremely high,
maybe approaching even 100%, if you define it very broadly,
and if you include situations where people are not really
cognizant about the damage that they do
or the suboptimal character of the approach that they're taking
and how it subverts the results and/or the conclusions of the study.
Now, how do you deal with that? Do you deal with that by putting people in jail, or making them lose their jobs, or making them pay $1 million fines? I don't think that would work, because you would probably need to fire the vast majority of the scientific workforce, and all of these are good people. They're not there because they're frauds. But you need to work through training,
through sensitizing the community, having a grassroots movement,
about realizing what the problems are, how you can avoid these traps, and how you can use better
methods, how you can use better inference tools and how you can enhance
the credibility of your field at large. Not only your own research, but the whole field needs to move to a higher level. And I think that no scientific field is perfect. Different fields are at different stages of maturity, at different stages of engagement with better methods. And this is happening on a continuous basis.
It's an evolution process.
So it's not at one time that we did one thing and then science is going to be clean and
perfect from now on.
It is a continuous struggle.
And every day you can do things better or you can do things worse.
Of those 35 million people who are out there publishing science today, how many of them
do you think are really fit to be principal investigators and be the ones that are making
the decisions about where the resources go, what the questions are that should be asked,
and what the real and final interpretation is.
I mean, that has to be a relatively small fraction
of that large number, right? Well, 35 million is the number of author IDs in Scopus, and even that
one is a biased estimate, like any estimate, it could be that you have a much, much smaller number
of people who are what we call principal investigators. The vast majority of people who have authored at least one scientific paper have just authored a single scientific paper, and they have just been co-authors. So they may be students or staff or supporting staff in larger enterprises, and they never assume the role of leading research or designing research or being the key players in doing research.
There's a much smaller core of people who I would call principal investigators.
We're talking probably at a global level, if you take all sciences into account, probably
they're less than one million.
But still, this is a huge number, of course.
Their level of training, their level of familiarity with the best methods, their beliefs and priors and biases, it's very difficult to fathom.
Some people argue that we need less research, that probably we should cut back and really
be more demanding and asking for credentials and for training and for
methodological rigor for people to be able to lead research teams. I'm a bit
skeptical about any approach that starts with the claim that we need to cut back on research, because I think that research and science, ultimately, are the best thing that has happened to humans.
And I think that if we say we need to cut back on research because research is suboptimal, we may end up in a situation where you create an even worse environment,
where you have even more limited resources and you still have all these millions of people struggling to get these even more limited resources, which means that they have even more incentives
to cut corners.
They have even more incentives to come up with striking, splashy results.
And then you have an even more unreliable literature.
So less is not necessarily the solution.
Actually, it may be problematic. Improved standards, improved circumstances of doing research,
an improved environment of doing research is probably what we should struggle for, creating
the background where someone who's really a great scientist and knows what he or she is doing will get support and will be allowed to thrive, and also allowed to look at things that have a high risk of failing.
I think that if we continue incentivizing people to get significant results,
no matter how that is defined, we are incentivizing people to do the wrong thing.
We should incentivize them to try really interesting ideas and to have a high chance of
failing.
This is perfectly fine.
I think if you don't fail, you're not going to succeed.
So we need to be very careful with interventions that happen at a science-wide level or even
discipline-wide level.
We do not want to destroy science; we want to improve science, and some of the solutions run the risk of doing harm sometimes.
Based on your comment about the sort of the risk appetite that belongs in science, to
me it suggests an important role for philanthropy because industry obviously has a very clear risk appetite that is going to be driven by a financial
return. By definition, everybody involved in that is a fiduciary, whether it be to a private or
public shareholder. And therefore, it's not the time to take risk for the sake of discovery.
Conversely, at the other end of that spectrum, it might seem like the government in the pure public sector
should be funding risk, but given the legislative process by which that money is given out and the
lack of scientific training among the people who are ultimately the decision makers for that money,
it also seems like a suboptimal place to generate risk.
That seems to be the place where you actually want to demonstrate a continued winning career,
even if you're not advancing knowledge in the most insightful way.
And so what that leaves is an enormous gap for risk, which I think has to be filled with philanthropic work. Do you agree with that?
I agree that philanthropy is very important. No-strings-attached philanthropy can really be catalytic in generating science that would be very difficult to fund otherwise.
Of course, public funding is also essential, and I think that we should do our best to make a convincing case that public funding should increase.
And you know, not decrease, as I said, decreasing public funding makes things far, far worse
for many reasons.
I think that we need to realign some of our priorities on what is being funded with each
one of these mechanisms.
Currently, a lot of public funding is given to generate translational products that are
then exploited immediately by companies who make money out of them.
And conversely, the testing of these products is paid by the industry.
I find that very problematic because the industry is financing and controlling the studies, primarily randomized trials or other types of evaluation research,
that are judging whether these products that they're making money off of are going to be promoted, used, become blockbusters, and so forth, which inherently has a tremendous conflict. I would argue that the industry should really pay more
for the translational research,
for developing products through the early phases.
And then public funding should go to testing
whether these products are really worth it,
whether they are beneficial, whether they have benefits,
whether they have no harms or very limited harms.
That research needs to be done with unconflicted funding and unconflicted investigators, ideally through public funds.
Of course, philanthropy can also contribute to that.
Philanthropy, I think, can play a major role
in allowing people to pursue high risk ideas
and things that probably other funders would have
a hard time funding.
I think that public funds should also go to high risk ideas.
The public should be informed that science
is a very high risk enterprise.
If you try to create a narrative, and I think that this is the traditional narrative, that money from taxpayers is used only for paying research grants, each one of which is delivering some important deliverables, I think this is a false narrative. Most grants, if they really look at interesting questions, will deliver nothing, or at least, you know, they will deliver: sorry, we tried. We spent so much time, we spent so much effort, but we didn't really find something that is interesting.
We'll try again. We did our best. We had the best tools, we had the best scientists.
We applied the best methods. But we didn't find the new laws of physics.
We didn't find a new drug, we didn't find a new diagnostic test.
We found nothing. That should be a very valid conclusion.
If you do it the right way with the right
tools, with the right methods, with the best scientists being involved, putting down legitimate
effort, we should be able to say we found nothing. But out of one thousand grants, we have five that found something. And that's what makes the difference. It's not that each one of them made a huge contribution. It is these five out of 1,000, in some fields, and in other fields, obviously, it may be a higher yield, that eventually transformed the world.
I mean, this seems like a bit of a communications problem, because that's clearly the venture capital model, which seems to work very well: on any given fund, your fund is made back by one company or one bet.
It's not an average.
It's a very asymmetric bet.
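A rough sketch of that asymmetric-payoff arithmetic, using made-up numbers rather than anything cited in the episode:

```python
# Hypothetical illustration only: a portfolio of 1,000 bets where nearly all
# return nothing and a handful pay off big ("five out of 1,000").
n_bets = 1000           # grants funded, or companies backed
cost_per_bet = 1.0      # normalized cost of each bet
n_hits = 5              # rare successes
payoff_per_hit = 400.0  # value created by each success, same units as cost

total_cost = n_bets * cost_per_bet
total_value = n_hits * payoff_per_hit
failure_rate = 1 - n_hits / n_bets

print(f"portfolio return: {total_value / total_cost:.1f}x "
      f"even though {failure_rate:.1%} of bets returned nothing")
```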
Similarly, when you look at other landmark public high-risk funding efforts, the Manhattan Project, the space program, these were enormously high-risk projects.
Yet, I don't get the sense that the public wasn't standing behind those. So it almost seems like there's a disconnect in the way scientists communicate their work
to the public versus the way NASA did. I mean, NASA was a PR machine. And obviously, in
the case of the Manhattan Project, I think you're under the duress of war. But we can't
lose sight of the fact that the scientific community was the one that stood up. The physicists
of the day are the ones that said to Roosevelt, like this has to be done.
I mean Einstein took a stand.
So I don't know.
I guess it all comes back to scientists needing to lead a bit and learn to be better communicators
with the public, right?
Science communication is a very difficult business.
And I think that especially in environments that are polarized, that have lots of conflicts,
inherent conflicts, lots of stakeholders in the community are trying to achieve the most
for themselves and for their own benefits, it can be very tricky.
Scientists have a voice, but that voice is often drowned in the middle of all the
screams and the Twitter and social media and media and the agendas and lobbies and everything.
How do we strengthen that?
I think that there's two paths here.
One is to use the same tricks as lobbies do, and the other is to stick to our guns and
behave as scientists.
We are scientists, we should behave as scientists.
I cannot prove that one is better than the other.
I think that both myself and many others feel very uneasy
when we are told to really cross the borders of science
and try to become communicators that are lobbying
even for science.
It's not easy. You want to avoid
exaggeration. You want to say that I don't know. I'm doing research because I don't
know. I'm an expert, but I don't know. And this is why I believe that we need to
know because these are questions that could make a difference for you. How do you
tell people that most likely I will fail? That most likely 100 people like me will fail, but maybe one will succeed.
We need to keep our honesty.
We need to make communication clear cut.
We need to also fight against people who are not scientists and who are promising much more.
And they would say that, oh, you need to do this because it will be clearly a success.
And they're not scientists, but they're very good lobbyists. It's very difficult. It's difficult times for science. It's difficult times to defend science.
I think that we need to defend our method. We need to defend our principles. We need to defend
the honesty of science in trying to communicate it, rather than build exaggerated promises or narratives that are not realistic.
Then even if we do get the funds, we have just told people lies.
I completely agree.
I don't think what you and I are saying is mutually exclusive.
I think that's the point, right?
I mean, you said it a moment ago, right? Feynman's famous line that, you know, the most important rule in science is not to fool anyone, and that starts with yourself. You're the easiest person to fool. And once you've fooled yourself, the game is over. And I think the
humility that you talk about communicating with the public is the necessary step. I think
people, I mean, I guess for me, just having my daughter who's now just starting to understand or ask questions
about science is so much fun to be able to talk about this process of discovery and to remind
ourselves that it's not innate, right?
This is not an innate skill.
This is something new; this methodology didn't exist 500 years ago. So for all but 0.001% of our genetic lineage, we didn't even have
this concept. So that gives us a little bit of empathy for people who have no training
because if you weren't trained in something, there's no chance you're going to understand it without this explanation. But I feel strongly that there
can't be a vacuum, right?
Because the vacuum always gets filled.
And if the good scientists aren't the ones speaking, then it's either going to be the bad ones and/or the charlatans who will.
And before we leave epi, there's one thing I want to go back to
that I think is another really interesting paper of yours.
This is one from two years ago.
This is the challenge of reforming nutritional
epidemiologic research.
And this is the one where you looked at the single foods
and the claims that emerged in terms of epidemiology.
I mean, some of these things were simply absurd.
Do you remember this paper that I'm talking about, John?
You've written a couple along these lines, but this is the one where you found a publication suggesting that eating 12 hazelnuts per day extended life by 12 years, which was about the same as drinking three cups of coffee, that eating one mandarin orange per day could extend lifespan by five years, whereas consuming one egg per day would shorten it by six years, and two strips of bacon would shorten life by a decade, which, by the way, was more than smoking.
How do you explain these results?
And more importantly, what does it tell us again about this process?
Well, these estimates obviously are tongue-in-cheek. They're not real estimates.
They're a very crude translation of what the average person in the community would get if they
see the numbers that are reported, typically with relative risks in the communication of these
findings. They're not epidemiologically sound. The true translation to change in life expectancy
would be much smaller, but even then,
they would probably be too big
compared to what the real benefits might be
or the real harms might be with these nutrients.
I think it just shows the magnitude of the problem
that if you have a system that is so complicated, with such inaccurate measurements, with such convoluted and correlated variables, with selective reporting and biases superimposed, you get a situation pretty much like what we described with the nutrients and cancer risk,
where you get an implausible big picture, where you're talking about huge
effects that are unlikely to be true. So it goes back to what we have been
discussing about how you remedy that situation, how you bring better methods and better training and better inference to that land of irreproducible results. Now, in, gosh, it might have been 2013 or 2014, a very interesting study was published called PREDIMED,
which we'll spend a minute on.
And it was interesting in that it was a clinical trial.
It had three arms and it relied on hard outcomes.
Hard outcomes, meaning mortality or morbidity of some sort
rather than just
soft outcomes like a biomarker.
If you had told me before the results came out, this is the study, you're going to have
a low fat arm and two Mediterranean arms that are going to be split this way and this way,
and we're going to be looking at primary prevention. I would have said the
likelihood you'll see a difference in these three groups is quite low because it just
didn't strike me as a very robust design. But I guess to the author's credit, they had
selected people that were sick enough that within, you know, I think they had planned
to go as long as seven or so years, but under five years, they ended up stopping this study,
given that the two Mediterranean arms, one that was randomized to receive olive oil, the other, I believe, to receive nuts, performed significantly better than the low-fat arm.
And that's sort of how the story went
until a couple of years later.
What happened then?
So here you have a situation where I have to disclose my own bias, that I love the Mediterranean diet.
And I have been a believer that this should be a great diet to use.
I mean, I grew up in Athens, and obviously it is something that I enjoy personally a lot.
And I would be very happy to see huge
benefits with it.
For many years I was touting these results as here you go.
You have a large trial that can show you big benefits on a clinical outcome and actually
this is Mediterranean diet, which is the diet that I prefer personally even better.
And just to make the point, it was both statistically and clinically very significant.
Indeed. Beautiful result, very nice looking and I was very, very happy with that. I would use
it as an argument that here, here's how you can do it the right way, and with such clinically relevant
results. But then it was realized that unfortunately this trial was not really a randomized trial.
The randomization had been subverted, that a number of people had not actually been randomized,
because of problems in the way that they were recruited.
And therefore, the data were problematic.
You had a design where some of the trial was randomized, and some of the trial was actually observational.
So the New England Journal of Medicine retracted and republished the study, with lots of additional analyses
that tried to take care of that subversion of randomization
in different ways, excluding these people
from the calculations and also using approaches
to try to correct for the imposed observational nature
of some of the data.
The results did not change much, but it creates, of course, a very uneasy feeling that if really the crème de la crème trial,
the one that I adored and admired, had such a major problem, you know, such a major basic, unbelievably simple problem in its very fundamental structure
of how it was run, how much trust can you put on other aspects of the trial that require
even more sophistication and even more care, you know, for example, adjudication of outcomes or how you count outcomes.
As you say, this is a trial that originally was reported with limited follow-up compared
to the original intention.
It was stopped at an interim analysis.
The trial has had lengthier follow-up.
It has published a very large number of papers as secondary analysis, but still we lack
what I would like to see as a credible result.
I mean, it's a tenuous, partly randomized trial, and
unfortunately, doesn't have the same credibility now compared to what I thought when it was
a truly randomized trial, and there was one outcome that was reported, and that seemed
to be very nice. Now, it's a partly randomized, partly subverted trial with, I don't know, 200, 300 publications floating
around with very different claims each time.
Most of them looking very nice, but fragmented into that space of secondary analysis.
It doesn't mean that Mediterranean diet does not work, and I still like to eat things
that fit to a Mediterranean diet, and this is my bias.
But it just gives one example of how things can go wrong, even when you have good intentions.
I don't think that people really wanted to do it wrong, but one has to be very cautious.
Yeah, I mean, I think for me the takeaway, if I remember some of the details, which I might not, I mean, one of the big issues was the randomization around intra-household subjects, right? They wanted it such that you couldn't have people in the same house eating different diets, which is a totally reasonable thought. It just strikes me as sloppiness that it wasn't done correctly in the first place.
You know, the cost of doing a study, the cost and duration of doing a study like that is so significant
that it's just a shame that on the first go, it's not nailed.
Because it could be seven years on $100 million to do that again.
This is true, but one has to take into account that in such an experiment,
you have a very large number of people who are involved, and their level of methodological training
and their ability to understand what needs to be done may vary quite a bit. So it's very difficult
to secure that everyone involved in all the sites involved in the trial would do the right thing.
And I think that this is an issue also for other randomized trials that are multi-center.
Very often now we realize that, because of the funding structure, since, as we said, there's very little funding from public agencies, most of the multi-center trials are done by the industry.
They try to impose some rigor and some standards,
but they also have to recruit patients from a very large number of sites, sometimes from countries and from teams that have no expertise in clinical research.
And then you can have situations where a lot of the data may not necessarily be fraudulent, but they're collected by people who are not trained, who have no expertise, who don't know what they're doing, and sometimes depending on the study design,
especially with unmasked trials,
or trials that lack allocation concealment, or both,
you can have severe bias interfere,
even in studies that seemingly appear to be
like the crème de la crème of large-scale experimental research.
Yeah.
John, let's move on to one last topic, at least for now, which is the events of 2020.
In early April, I had this idea talking with someone on my team, which was: boy, the seroprevalence of this thing might be far higher than the confirmed cases of this thing.
And if that were true, it would mean that the mortality from this virus is significantly lower than what we believe. This was at a time when I think there was still a widespread belief that five to 10% of people infected
with this virus would be killed.
And there were basically a non-stop barrage of models suggesting two to three million
Americans would die of this by the end of the year.
The first person I reached out to was David Allison and I said, hey, David, what do you think
about doing an assessment of seropositivity in New York City?
And he said, let's call John Ioannidis.
So we gave you a call that afternoon.
It was a Saturday afternoon.
We all hopped on a Zoom and you said, well, guess what?
I'm doing this right now in Santa Clara.
And I don't think it had been
published yet, right? I mean, I think you had just basically got the data, right?
I believe that was about that time. Yes. Tell me a little bit about that study and what
did it show? Because it was certainly one of the first studies to suggest
that basically the seropositivity was much higher than the confirmed cases.
This is a pair of two studies actually.
One was done in Santa Clara and the other was done in LA County.
And both of them, the design aimed to collect a substantial number of participants and tried
to see how many of them had antibodies to the virus.
Which means that they had been infected perhaps at least a couple of weeks earlier.
And they were studies that Eran Bendavid and Jay Bhattacharya led, and we also had colleagues from the University of Southern California leading the study in LA County.
They were studies that I thought were very important to do.
I was just one of many co-investigators but I feel very proud to have worked with that
team.
They were very devoted and they really put together in the field an amazing amount of effort
and very readily could get some results that would be very useful to tell us more about
how widely spread the virus was.
The results, I'm not sure whether you would call them surprising, shocking, anticipated,
it depends on what your prior would be.
Personally I was open to the possibilities of any result.
I had no clue how widely spread the virus would be, and this is why I thought these studies
were so essential.
I had already published, more than a month before that time, that we just don't know, we just don't know whether we're talking about a disease that
is very widely spread or very limited in its spread, which also translates in an inverse mode
to its infection fatality rate. If it's very widely spread, the infection fatality rate per person
is much lower. If it is very limited in its spread, it means that fewer people are affected, but the infection fatality rate would be very high.
So whatever the answer would be, it would be an interesting answer.
And the result was that the virus was very widely spread, far more common compared to what we thought based on the number of tests that we were doing and the number of PCR-documented cases. At that time, in the early months of the pandemic, we were doing actually very few tests, so it's not surprising at all that the under-ascertainment would be huge. I think that once we started doing more tests, and/or in countries that did more testing, the under-ascertainment was different compared to places that were not doing much testing or were doing close to no testing at all.
I think that the result was amazing.
I felt that that was a very unique moment seeing these results when I first saw that that's
what we got, that it was about 50 times more common than we thought based on the documented
cases.
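A back-of-the-envelope sketch of how under-ascertainment maps onto the infection fatality rate, with hypothetical placeholder numbers rather than the actual Santa Clara or LA County data:

```python
# Illustrative arithmetic only: how a 50x under-ascertainment factor changes
# the inferred fatality rate. None of these numbers are from the studies.
confirmed_cases = 1_000        # PCR-documented cases in some region
deaths = 10                    # deaths attributed to the virus there
ascertainment_factor = 50      # "about 50 times more common than we thought"

true_infections = confirmed_cases * ascertainment_factor

case_fatality_rate = deaths / confirmed_cases       # naive: deaths / confirmed cases
infection_fatality_rate = deaths / true_infections  # deaths / estimated true infections

print(f"case fatality rate:      {case_fatality_rate:.2%}")      # 1.00%
print(f"infection fatality rate: {infection_fatality_rate:.3%}") # 0.020%
# The wider the true spread, the lower the per-infection fatality rate:
# the inverse relationship John describes.
```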
But it obviously generated a lot of attention and a lot
of animosity because people had very strong priors.
I think it was very unfortunate that all that happened in a situation of a highly polarized,
toxic political environment.
Somehow people were aligned with different political beliefs, as if, you know, political beliefs should also be aligned with scientific facts.
It was just completely horrible. So it created massive social media and media
attention, both good and bad. And I think that we were bombarded with comments
both good and bad and criticism. I'm really grateful for the criticism because
obviously these were very delicate
results that we had to be sure that we had the strongest documentation for what we were
saying. And we went through a number of iterations to try to address these criticisms in the best
possible way. In the long term, several months down the road, with hindsight, we see that these results are practically completely validated. We now have a very large number of seroprevalence studies that have been done in very different places around the world. We see that those studies that were done in the early days had, as I said, the worst under-ascertainment; we had tremendous under-ascertainment in several places around the world.
Even in Santa Clara, there's another data set that was included in a national survey that was published in the Lancet about a month ago on hemodialysis patients. And the infection rate, and that was a couple of months after our study, if you translate it to an infection fatality rate, it's exactly identical to what we had observed in early April.
So the study has been validated.
It has proven that the virus is a very rapidly and very widely spreading virus, and you need
to deal with it based on that profile.
It is a virus that can infect huge numbers of people. My estimate is as of early December, probably we may have
close to one billion people who have already been infected, you know, more or less around the world.
And there's a very steep risk gradient. There's lots of people who have practically no risk or
minimal risk of having a bad outcome. And there are some people who have tremendous risk of being devastated.
We have, for example, people in nursing homes who have a 25% infection fatality rate. You know, one out of four of these people, if they're infected, will die.
So it was one of the most interesting experiences in my career, both of the fascination about
seeing these results and also the fascination and some of the intimidation of some of the
reaction to these results in a very toxic environment, unfortunately.
I don't necessarily mean by name, but what forces were the most critical?
Presumably, these would be entities or individuals
that wanted to continue to promote the idea that the risks here warranted greater shutdowns and slowdowns. Help me understand a little bit more where some of the vitriol came from.
I think that there were many scientists
who made useful comments.
And as I said, I'm very grateful for these comments
because they helped improve the paper.
And then there were many people in social media.
That includes some scientists who actually,
however, were not epidemiologists.
Unfortunately, in the middle of this pandemic, we have seen lots of scientists who have no relationship to epidemiology become kind of Twitter or Facebook epidemiologists all of a sudden and, you know, have very vocal opinions about how things should be done.
I remember a scientist, who was probably working in physics or so, who was sending emails every two hours to the principal investigator, and I was cc'd on them, saying, you have not corrected
the paper yet.
And every two hours, you know, you have not corrected the paper yet.
I mean, his comment was wrong to start with, but as we were working on revisions, as you
realize, we did that with ultra speed, responding within record time to create a revised version and to post it, but even
posting it takes five days or so, more or less. But what do you think was at the root of this
anger directed towards you and the team? Unfortunately, I think that the main reasons were not
scientific. I think that most of the animosity was related to the toxic political environment at the moment.
And personally, I feel that it is extremely important to completely dissociate science
from politics.
Science should be free to say what has been found with all the limitations and all the
caveats, but be precise and accurate. I would never want to think about what a politician is saying at a given
time or given circumstances and then modify my findings based on what one politician or
another politician is saying.
So I think that one of the attacks that I received was that I have a conservative ideology, which is about the most far-fetched claim that I can think of, you know, looking at my
track record and how much I have written
about climate change and climate
urgency and emergency and the problem
with gun sales and actually, you know,
gun sales becoming worse in the
environment of the pandemic, and the need to promote science, and the need to diminish injustice, and the need to provide good health to all people and to decrease poverty, you know, claiming that I'm a supporter of conservative ideology is completely weird. And then smearing of all sorts, that the owner of an airline company had
given $5,000 to Stanford, which I was not even aware of. The funding of the trial, of which I was not even the PI, was through a crowdsourcing mechanism going to the Stanford Development Office, and I never heard who the people were who had funded that. And of course,
none of that money came to me
or to all the other investigators
who completely volunteered our time.
We have received zero dollars for our research,
but tons of smearing.
Sorry, just to clarify, John,
you're saying the accusation was that because an airline had contributed $5,000 to Stanford, of which you saw none, your assessment was really a way to tell everybody that the airlines should be back to flying. Yes, and I only heard about it when BuzzFeed reported it. Yeah, of course. No, I get it.
So it's very weird.
And because of all the attacks that we received, I received tons of emails that were hate mail, and some of them were threatening to me and my family.
My mother, she's 86 years old and there was a hoax circulated in social media that she
had died of coronavirus.
And her friends started calling at home to ask when the funeral would be, and when she heard that from multiple friends, she had a life-threatening hypertensive crisis. So these people really had a very toxic response that did a lot of damage to me and to my family and to others as well. And I think that it was very unfortunate.
I asked Stanford to try to find out what was going on.
And there was a fact-finding process to try to realize, you know, why is that happening?
And of course, it concluded that there was absolutely no conflict of interest and nothing
that had gone wrong in terms of any potential conflict of interest.
But this doesn't really solve the more major problem.
For me, the most major problem is how do we protect scientists?
It's not about me.
It is about other scientists, some of them even more prominently attacked.
I think one example is Tony Fauci.
He was my supervisor.
I have tremendous respect for him. He was my supervisor. I have tremendous respect for him.
He was my supervisor when I was at NIAD, at NIH. He's a brilliant scientist. He has been
ferociously attacked. There's other scientists who are much younger. They're not, let's say,
as powerful. They will be very afraid to disseminate their scientific findings objectively,
if they have to ponder what the environment is at the moment, and what different politicians are saying, and how their results will be seen. We need to protect those people, we need to protect people who would be very much afraid to talk and would be silenced if they see examples like, you know, did you see what happened to John Ioannidis or what happened to Tony Fauci? If I were to say something, I would be completely devastated.
So I think that we need to be tolerant, we need to give science an opportunity to do
its job, to find useful information, to correct mistakes or improve on methods.
I mean, this is part of the scientific process, but not really throw all that smearing and all
that vicious vitriol to scientists. It's very dangerous regardless of whether it comes from people
in one or another political party or one in another ideology, it ends up being the same.
It ends up being populist attacks of the worst possible sort, regardless of whether they come
from the left or right or middle or whatever part of the political spectrum.
Well, I'm very sorry to hear that you had to go through that, especially at the level of your
family. I knew that you had been attacked a little bit. I was not aware that it had
spread to the extent that you described. What do we do going forward here? I mean, outside of science, changing your mind still seems to be viewed largely as a weakness.
I mean, in science, that's a hallmark of a great thinker, right?
Someone who can change their mind in the presence of new information.
That's a core competency of doing good science.
In fact, much of what
we've spoken about today is the toxicity of not being able to update your priors and change
your mind in the face of new information. But yet somehow in politics, that is considered
the biggest liability of all time. Somehow in politics, anytime you change your mind,
it's wishy-washy and you're weak and you don't know your ideology,
there seems to be an incompatibility here. And a crisis moment like this, and this certainly was a crisis, seems to bring these things to the fore, right?
It is true, and I don't want to see that in a negative light necessarily because
somehow the coronavirus
crisis has brought science to the limelight in some positive ways. I think that people do discuss science more. It has become a topic of great interest. People see that their lives depend on science. They feel that their world depends on science. What will happen
in the immediate future and mid-range future depends on science
and how we interpret science and how we use science.
So in a way, suddenly we have had hundreds of millions, if not billions of people, become
interested in science acutely.
But obviously, most of those, unfortunately, given our horrible science education, they
have no science education. And they use the tools of their traditional societal discourse, which is largely political and partisan,
to try to deal with scientific questions.
And this is an explosive mix.
I think it creates a great opportunity to communicate
more science and better science.
At the same time, it makes science a hostage
of all these lobbying forces and all of this turmoil
that is happening in the community.
Well, John, what are you most optimistic about?
I mean, you have lots of time left in your career.
You're going to go on and do many more great things.
You're going to be a provocateur.
What are you most excited and optimistic about in terms of the future of science and the type of work that you're looking to advance?
Well, I'm very excited to make sure, and it does happen, that there are so many things that I don't know, and every day I realize that there are even more things that I don't know. I think that if that continues happening,
and every day I can find out about more things
that I don't know, things that I thought were so,
but actually they were wrong,
and I need to correct them and find ways to correct them,
then I really look forward to a good future for science
and a good future for humans.
I think that we are just at the beginning.
We are just at the beginning of knowledge. And I feel like a little kid who just wants to learn a little bit more,
a little bit more each time. Well, John, the last time we were together in person, we were in
Palo Alto and we had a Mediterranean dinner. So I hope that sometime in 2021,
that'll bring us another chance for another flaky white fish
and some lemon potatoes and whatever
other yummy things we had that evening.
That would be wonderful.
And I hope that it does increase life expectancy as well,
although even if it doesn't, I think it's worth it.
John, thanks so much for your time today.
Thank you, Peter.
Thank you for listening to this week's episode of The Drive.
If you're interested in diving deeper into any topics we discuss, we've created a membership
program that allows us to bring you more in-depth, exclusive content without relying on paid
ads.
It's our goal to ensure members get back much more than the price of the subscription.
Now, to that end, membership benefits include a bunch of things. One, totally kick-ass comprehensive podcast show notes that detail every topic, paper, person, and thing we discuss in each episode.
The word on the street is nobody's show notes rival these.
Monthly AMA episodes, or ask-me-anything episodes, where you can hear these episodes completely.
Access to our private podcast feed that allows you to hear everything
without having to listen to spiels like this. The Qualies, which are a super short podcast that we release
every Tuesday through Friday, highlighting the best questions, topics, and tactics
discussed on previous episodes of the drive. This is a great way to catch up on
previous episodes without having to go back and necessarily listen to every one.
Steep discounts on products that I believe in, but for which I'm not getting paid to
endorse.
And a whole bunch of other benefits that we continue to trickle in as time goes on.
If you want to learn more and access these member-only benefits, you can head over to
peteratiamd.com forward slash subscribe.
You can find me on Twitter, Instagram, and Facebook, all with the ID PeterAttiaMD.
You can also leave us a review on Apple podcasts
or whatever podcast player you listen on.
This podcast is for general informational purposes only.
It does not constitute the practice of medicine, nursing
or other professional healthcare services,
including the giving of medical advice.
No doctor-patient relationship is formed.
The use of this information and the materials
linked to this podcast is at the user's own
risk.
The content on this podcast is not intended to be a substitute for professional medical
advice, diagnosis, or treatment.
Users should not disregard or delay in obtaining medical advice for any medical condition they
have, and they should seek the assistance of their healthcare professionals for any such conditions.
Finally, I take conflicts of interest very seriously.
For all of my disclosures in the companies I invest in
or advise, please visit peteratiamd.com forward slash
about where I keep an up to date and active list
of such companies.