The Peter Attia Drive - #389 - Thinking scientifically: why it's hard, why it matters, and a practical toolkit
Episode Date: April 27, 2026

In this episode, Peter explores one of the most foundational topics underlying nearly everything discussed on the podcast: how to think scientifically. Framed as an introspective deep dive, he examines why scientific thinking is inherently difficult for humans, the cognitive biases and tendencies that make it challenging to separate belief from evidence, and why these challenges are even more consequential in today's environment saturated with misinformation. He also offers a framework for improving our ability to evaluate claims, question assumptions, and identify a personal panel of experts, providing listeners with practical tools to become more disciplined and effective thinkers.

We discuss:
Topics to be covered and goals for this episode [2:00];
Scientific thinking: hypotheses, uncertainty, and the process of ruling out explanations [3:45];
How scientific knowledge differs from mathematical proof: useful approximations, evolving evidence, and acting under uncertainty [8:00];
Why scientific thinking is difficult: evolution, social instincts, and the need for deliberate practice [13:30];
Systems and tools designed to correct human bias [18:15];
How to think scientifically pt. 1: Notice when you're feeling certain [20:30];
How to think scientifically pt. 2: Judge the process, not just the conclusion [23:00];
How to think scientifically pt. 3: Notice when identity is shaping your beliefs [28:15];
How to think scientifically pt. 4: Don't confuse criticism with understanding [33:45];
How to think scientifically pt. 5: Outsource your thinking carefully [36:15];
Evaluating who to trust: incentives, consensus, and red flags in scientific credibility [45:15];
Science as a self-correcting system: why updating with evidence is a strength, not a weakness [49:00];
The key principles of scientific thinking, and a practical framework for evaluating claims and improving judgment [50:45]; and
More.
Transcript
Hey everyone, welcome to the Drive podcast.
I'm your host, Peter Attia.
This podcast, my website, and my weekly newsletter all focus on the goal of translating the science of longevity into something accessible for everyone.
Our goal is to provide the best content in health and wellness, and we've established a great team of analysts to make this happen.
It is extremely important to me to provide all of this content without relying on paid ads.
To do this, our work is made entirely possible by our members, and in return,
we offer exclusive member-only content and benefits above and beyond what is available for free.
If you want to take your knowledge of this space to the next level, it's our goal to ensure
members get back much more than the price of the subscription.
If you want to learn more about the benefits of our premium membership, head over to
peterattiamd.com forward slash subscribe.
Welcome to a special episode of The Drive.
In this episode, I step back to share how my thinking is evolving around a topic that actually sits well upstream of almost everything else we discuss on this podcast, which is how to think scientifically. Now, I get asked this question all the time, and frankly, I don't think that, until this episode, I really had a comprehensive way to approach this important topic. So this is really an introspective episode about why scientific thinking is
so difficult for us as a species, why it matters more than ever in an environment flooded with
misinformation, and what each of us can do to get better at separating what we want to be true
from what the evidence actually suggests. So without further delay, I hope you'll enjoy this
episode of The Drive.
Today I want to talk about a skill that sits upstream of nearly every
decision you make about health, policy, risk, and even how to evaluate other people in this space.
I want to talk about how to think scientifically. By that, I don't mean how to run a lab or memorize
statistics. I mean how to evaluate claims, how to update your beliefs when the evidence changes,
and how to figure out who to trust when you can't do the analysis yourself, which, as we're going to come to appreciate, is often the case. If you get good at that, you put yourself in the position to make
better decisions than somebody who simply knows more facts but doesn't know how to weigh them.
We're going to cover four things here today. First, what scientific thinking actually is beyond,
you know, spending time in a lab. Second, why it's so hard for us, which has less to do with
intelligence than you might expect. Third, what you can do as an individual to get
better at it. And fourth, how to find people you can trust when you can't do the analysis yourself,
which, as I said a second ago, is going to be most of the time for most people. One idea is going to
thread through this entire episode. And I want to put it on the table right now. The goal of thinking
scientifically is not simply to be right. It's to be less wrong over time. Science is a process
built around that principle. And what I want to do today is help you engage with it more skillfully.
This is a topic I get asked about very often. And I think, honestly, until now, I haven't had a
great consolidated approach for laying it out. So let's start with what we actually mean when we say
think scientifically. There's a common misconception that scientific thinking is something
scientists do in labs and the rest of us just receive in the form of results. But that's not what I'm
talking about at all. Thinking scientifically is a way of engaging with claims about the world, any claims,
not just ones that come with a citation attached. At its core, it means generating hypotheses,
possible explanations for why something might be the way it is, or how something works. It means
testing those hypotheses against experimental evidence. It means updating your beliefs when the evidence
changes, and it means tolerating uncertainty throughout this entire process. It means separating
what you want to be true from what the evidence suggests is true, and recognizing, really recognizing
how often those two things are in tension. As Richard Feynman, someone we're going to refer to a few times today and one of the greatest scientific thinkers in history, once said: the first principle is that you must not fool yourself, and you are the easiest person to fool. Scientific thinking means
being more invested in the process that produced a conclusion than in the conclusion itself.
I want you to say that again with me, because that is not intuitive. Scientific thinking
means being more invested in the process that produced a conclusion than in the conclusion itself.
Most of us evaluate claims by asking, is it true? A scientific thinker asks a different set of
questions first. How did they arrive at this? What's the evidence? How strong is it? What are the
alternative hypotheses or explanations? And scientific thinking means understanding that "I don't know" and "it depends" are often the most honest available answers.
This idea, "I don't know," is critically important, and I don't think we discuss it often enough.
In many ways, "I don't know" can always be the first answer to any scientific question.
The second answer is then our best understanding based on the available evidence.
But out of ease and out of confidence and out of trying to avoid sounding like a broken record,
we often just skip straight to the second answer.
We drop the uncertainty.
And when we do that, we lose something essential.
We lose the thing that makes scientific thinking scientific.
Because here's the thing.
One of the most useful ways to think about science, especially in medicine, is by focusing
on two of its core functions.
The first is ruling things out.
And the second, getting less wrong over time.
There's a famous saying within scientific research, often attributed to George Box, but honestly I've seen it attributed to 20 other people.
All models are wrong, but some are useful.
More often than not, we are not proving a claim in some final absolute sense.
We are comparing explanations, testing predictions, and gradually gaining confidence in the ones that survive contact with data.
We rule things out one by one until we're left with the explanation we can't rule out.
And then we make a logical leap.
We say we've eliminated every other possibility we can think of,
so we have growing confidence that this explanation is correct.
But notice the qualifier here.
Every other possibility we can think of.
It's not a proof.
Hard proof only exists in formal logic and mathematics, where we can demonstrate, within a set of defined rules, that something must be true under those same rules.
The rest of science relies on experimentation, trying to discover what the rules are, doing our best
to accept or reject rules based on available evidence, and deducing the best possible explanations
within the landscapes we've identified.
This is fundamentally different from a derived proof.
Now, I started my, you know, academic career in mathematics, and we spent a lot of time working on proofs.
And it was very difficult for me
when I transitioned from mathematics to medicine,
which was so fundamentally messy.
Instead, we rely on our best approximations of reality,
but true certainty is not even on the table.
Our models are, at their core, probabilities built on probabilities.
They aren't proof.
They're simply the best we've got.
Sometimes that distinction barely matters in practice. Take gravity. The idea that objects with mass attract each other is an empirically derived theory, one that does not rest on a true mathematical proof, but that has experimentally outcompeted all other explanations and provided countless verified predictions. This theory, first proposed by Isaac Newton in 1687, has proven so successful
that future iterations didn't destroy it. They refined it with incredible implications.
At the turn of the 20th century, Einstein proposed that gravity doesn't only lead to objects
attracting each other. Gravity quite literally bends time and space, slowing and speeding up
the passage of time, a proposition that was, you know, quite frankly, insane sounding at the time,
and insane sounding maybe even now, except that it works. For example, due to the incredible speed of man-made satellites orbiting the Earth, together with their altitude weakening the pull of Earth's gravity, the satellite systems responsible for GPS have to adjust their clocks by about 38 microseconds every day. The net effect is that time literally passes faster on these satellites than it does on the ground. And without these adjustments, which are predicted by Einstein's theory of relativity, our GPS system would drift by roughly 8 meters per minute, or about 11 kilometers per day. From principles discovered by experimentation,
we have satellites orbiting over our heads, perfectly combating the pull of Earth's gravity to
rotate around the planet near endlessly. On these satellites, we adjust their clocks using
experimentally derived principles of the time dilation due to gravity, instead of sending these
hunks of metal crashing onto Earth, or getting data from them that err on the order of kilometers. Our theory of gravity permits the coordinated movement of hundreds of
satellites over decades, able to pinpoint exactly where your room is in your house or where you forgot
your phone. Now, do we know everything about gravity? No. In fact, combining our best theory of gravity
with our best theory of particle physics is one of the greatest unsolved scientific problems of our day.
But have we discovered enough about gravity to be useful?
Undeniably.
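Since we just did a bit of arithmetic, here's a minimal back-of-the-envelope sketch of where those GPS numbers come from. It assumes only the roughly 38-microseconds-per-day net clock offset quoted above, plus the fact that GPS infers position from light travel time, so a clock error translates into a range error of the speed of light times that error.

```python
# Back-of-the-envelope check of the GPS numbers quoted above.
# Assumes the ~38 microseconds/day net relativistic clock offset
# from the episode. GPS infers position from light travel time,
# so a clock error of t seconds becomes a range error of c * t.

C = 299_792_458              # speed of light, m/s
DRIFT_PER_DAY = 38e-6        # net clock offset accumulated per day, seconds

error_per_day = C * DRIFT_PER_DAY             # ~11,392 m per day
error_per_minute = error_per_day / (24 * 60)  # ~7.9 m per minute

print(f"~{error_per_minute:.0f} m of position error per minute")
print(f"~{error_per_day / 1000:.0f} km of position error per day")
```

Run it and you get roughly 8 meters per minute and 11 kilometers per day, the figures from the episode.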
And confidence in our models isn't restricted to physics, although admittedly that's where
the highest confidence tends to concentrate.
But take something from biology. Take smoking.
We have overwhelming evidence that smoking causes cancer.
The epidemiologic data, with its enormous hazard ratios, the mechanistic understanding, the dose-response relationships, taken together, they've really ruled out any plausible alternative. At that point, is it really so different to say we've proven it?
Not much. When the evidence is so overwhelming, the gap between our least-wrong model and capital-T Truth becomes vanishingly small. But most questions in medicine don't look like that.
Most live in the middle. The evidence is suggestive, sometimes highly suggestive, but imperfect.
The model is useful but incomplete, and the conclusions are right enough to act on now, but not final.
Dietary cholesterol is a good example. For decades, the accepted answer was straightforward.
The cholesterol we eat raises the cholesterol in our blood, which raises cardiovascular risk.
This was treated as settled. Dietary guidelines were built around it, eggs became the enemy,
and the evidence did point in that direction. It wasn't fabricated and it wasn't a conspiracy,
but it was incomplete. The relationship turned out to be far more complex and far more individualized
than the simple causal chain suggested. If the field had held onto that finding as our best current
model rather than settled, the guidelines might have been updated much sooner. Now, here's the part
that's conceptually, maybe even emotionally, difficult. The implication is this: some of the
guidance that exists right now, today, as we're having this discussion, is going to
turn out to be as incomplete as eggs and cholesterol. Some of the guidance that you and I believe
in and follow. Now the kicker is I don't know which parts. Nobody does yet, at least not at a
conscious level. But the history of science tells us that some of what we currently treat as settled
simply is not. We have to live with that. We have to make decisions based on the best
available knowledge while staying aware that the best available truth is rarely the whole truth.
It means holding room for doubt and living confidently anyway.
It requires being a walking contradiction.
The good news is that thinking scientifically works precisely because it's built to address its own imperfections.
The system has self-correction baked in, as long as you don't freeze your conclusions in place and defend them like territory.
And this ability can be honed.
It's not a talent or a personality trait per se. It's a discipline, a practice that requires effort, humility, and repetition. You can train it. You can get
meaningfully better. That said, scientific thinking is a practice. It's not an achievement. Good
thinking now doesn't guarantee good thinking later. It's something you have to keep doing or it atrophies.
Most of us think we already do this, but most of us are wrong about how consistently and effectively we do it.
So let's talk about why.
Here's the core thesis.
Thinking scientifically is not just hard.
It's unnatural.
And I mean that in a very literal, biological sense.
We're primates.
That means we have been evolving as social animals for roughly 50 million years.
For the vast majority of that time, your survival depended on your standing within a social group.
If the group accepted you, you had access to food, mates, and protection.
If the group rejected you, you were in real danger.
Exile wasn't an inconvenience. It was often a death sentence.
Social belonging was a survival imperative, and our brains were shaped over tens of millions of years to be exquisitely good at navigating social environments, reading faces, building alliances, signaling loyalty, maintaining status.
This is what our cognition was optimized for.
We do not just use social skills.
We fundamentally rely on social groups and social information.
We are also an intelligent species, but most learning, for most of our history, happened
through imitation or through language.
And both of these are intrinsically social.
You learn from members of your group, be it watching or listening.
The information you receive is filtered through trust, status, and identity.
Even reading a study alone in your office is, by the very act of using language, participating in a social system of knowledge.
Now, here's where the timeline narrows.
The first stone tools are around 3 million years old.
Homo sapiens, which is what we are, showed up about 250,000 years ago.
Formal logic was systematized roughly 2,500 years ago.
And a formal system of empiricism, the basis of the scientific method, is maybe 400 years old.
That's it.
A few hundred years of empiricism built on top of 50 million years of primate social cognition.
Logic and hypothesis testing are not our default state and can even be at odds with our fundamental sociability.
We form groups, form identities around these groups, and let group membership shape what we believe and how we interpret evidence.
Social information can and will override logical information. This isn't a bug that only shows
up in uneducated people. It's a basic feature of human cognition. Evolution shaped our brains,
but evolution works on things that are good enough. Good enough to survive as a hunter-gatherer
was good enough for evolution. We weren't shaped to be the ultimate logicians. We were shaped to be good-enough logicians to out-compete other animals, to access resources other animals couldn't, to make fast decisions in uncertain environments, and to do it within an
intricate social structure. That's a very different optimization target than "figure out what's inviolably true." The pursuit of science requires almost the opposite: holding multiple hypotheses, tolerating uncertainty for years, understanding counterintuitive concepts like conditional probability and effect size, and a willingness to change beliefs that are deeply held and socially costly to abandon.
When you frame it that way, the real question isn't why scientific thinking is hard.
It's: how do we manage it at all? Frankly, it's more amazing that we can do this than it is surprising that we struggle with it.
And yet we do manage it.
Now, if there were a second thesis within this section, it would be this.
Despite the limitations of our biology, despite our pull toward fast, social, identity-protective reasoning, we have invented, notice I use the word invented, a remarkable set of corrective tools. Importantly, we built structures, formal structures, specifically designed to counteract our natural tendencies: peer review, blind experiments, pre-registration of hypotheses, statistical frameworks.
These aren't just tools. I think of them as prosthetics for objectivity. They exist
precisely because we've recognized, at some point collectively, that we couldn't trust our own
unassisted judgment. And instead of giving up, we engineered workarounds. Think about what a double-blinded clinical trial actually is. It is an explicit admission that even
well-trained experts can't be trusted to evaluate outcomes without being influenced by what they
hope to find. So we remove this information. We build a system that assumes we're biased and
corrects for it. Science also institutionalized productive disagreement. Peer review is adversarial by design. And the norm of replication, the idea that your finding doesn't count until someone else can reproduce it, says something remarkable. We don't trust any one of us, but we trust the process.
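To make the logic of blinding concrete, here's a toy sketch in Python. Every subject, code, and score below is invented for illustration; this is a cartoon of the idea, not a real trial protocol. The key move is that the person scoring outcomes only ever sees opaque codes, so their hopes about the drug can't leak into the measurements.

```python
import random

# Toy sketch of the logic of a double-blind trial.
# All subjects, codes, and scores here are invented.

random.seed(0)
subjects = [f"subject_{i}" for i in range(8)]

# Step 1: random assignment, recorded in a sealed table that the
# outcome assessor never sees while the trial is running.
sealed_allocation = {s: random.choice(["drug", "placebo"]) for s in subjects}

# Step 2: each subject gets an opaque code; this is all the assessor sees.
codes = {s: f"code_{i:02d}" for i, s in enumerate(subjects)}

# Step 3: outcomes are scored against codes only (blind scoring),
# so expectations about the drug can't bias the measurements.
blind_scores = {codes[s]: random.gauss(0, 1) for s in subjects}

# Step 4: only after every score is locked in is the allocation
# unsealed and matched back to the blinded results for analysis.
for s in subjects:
    print(codes[s], sealed_allocation[s], round(blind_scores[codes[s]], 2))
```

The structure, not the specific numbers, is the point: the information that could bias the observer is deliberately withheld until the data are final.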
And at the individual level, we can train ourselves to be better, to engage in science as a process
rather than a set of facts to accept or reject. It is slow and it is humbling, but it works.
All right. So far, we've been talking about the structural level, how science as an institution has built systems to overcome our natural cognitive limitations.
Now I want to shift gears and talk about something that's probably more directly relevant to most of you.
How we as individuals, even if we're not professionally interacting with the scientific method on a daily basis ourselves, can integrate scientific principles into our daily lives. We're going to look at five ideas. And while they're all important tools, we're going to
really dive into the fifth, how to approach outsourcing our thinking when necessary.
First, let's start with the idea that we want to treat certainty as a cue to slow down.
When you encounter a claim and you feel certain about it, treat that certainty as a signal
to pause. Certainty is a feeling, not an indicator of truth. Your brain generates it for all sorts of
reasons that we've talked about. Social consensus, emotional resonance, familiarity, repetition,
the confidence of the speaker. None of those have anything to do with whether the claim is correct.
When you notice certainty, ask yourself: why do I believe this? If the answer is social or based on identity (everyone in my feed agrees, the person sounded confident, people I identify with believe this, I want this to be true), that's a red flag. It doesn't mean the claim is wrong, but it means your
basis for believing it is social, not evidential. If the answer is that you've seen the data,
you've understood it, you've considered alternatives, and you still find this conclusion most
compelling, you're in a much better place. Now, here's the recursive part that makes this so
powerful. Asking yourself this question in the first place is step one. And the more you do it,
the more honestly you'll do it. If you start by being certain that you're always logical,
you'll eventually learn to ask yourself why you're so certain you're being logical.
The questioning deepens over time. It's hard to say when one masters this skill,
because frankly, I believe it looks different for different people. But this is the process.
Question your certainty, question your questioning, and build comfort in that uncomfortable space between certainty and uncertainty.
You won't become perfect at this. I know I sure haven't. But we can get monumentally better. We can get better at knowing what we do not know. And that awareness is worth a lot.
Okay. Second, judge the process, not just the conclusion. When someone makes a claim, most of us instinctively evaluate the conclusion. Is this true? Do I agree? That's natural. But it's not the first question to ask. The first question should be,
How did this person arrive at this?
What evidence? How strong?
What alternatives were considered?
What do critics say?
And have they engaged with those criticisms?
When you start asking these questions, something shifts.
You stop evaluating claims as things to agree or disagree with,
and you start evaluating them as products of a process.
And the quality of that process tells you far more than the conclusion itself.
A good process can produce a wrong conclusion. That happens in science all the time, but a bad process that happens to produce a right
conclusion is not something to trust, because it got there by accident, and it won't be reliable
going forward. Engaging with the process, asking how we arrived at this conclusion, is the key. Now, let me make
this concrete with a tangible example. Detox cleanses, juice cleanses, supplement protocols,
products that claim to remove toxins from your body.
It starts with a real observation.
You don't feel as good as you used to.
Maybe you're tired, your digestion is off, your skin is feeling dull,
and we all know there are chemicals, pollutants, and food additives in our environment
that aren't doing us any favors.
All of this is true.
The basis is real.
That's what lures us in.
Then comes the conclusion.
Drink this, stop eating that, take this capsule, and those things go away. The process failure is the absence of everything in between. How specifically does this product
remove toxins? Which toxins? Were they measured before and after? How were they measured? What was the
control? What's the mechanism by which the juice or supplement binds to, mobilizes, and eliminates a specific harmful substance from your body? Almost every time, the answer to those questions is
silence, or vague gesturing at flushing and purifying. No real mechanism and no real study.
A conclusion has been asserted with no specific hypothesis being tested, no mechanism being
described, no blinding, no control. It's a leap straight from a real observation, a real problem,
to a marketed solution, with none of the work done in the middle. We can even be tricked by a
lived experience here. Maybe your headaches do go away when you consume nothing but lemon juice for
three days. Maybe your skin does clear up or your digestion feels better. Maybe there is a real
effect, but what confidence do we have in its cause? By dramatically altering our diet,
we alter numerous features of our physiology. By definition, we're eating different foods,
drinking different fluids, we're changing the very inputs our bodies metabolize,
giving rise to how we feel and think.
How do we know without an appropriate process that a toxin was purged from the body,
rather than a toxin was removed from your diet?
Even if the effect is real, its explanation can miss the mark entirely and get you stuck repeatedly enduring three to seven days of intentional starvation on some proprietary placebo for years, when cutting some element of your diet (some processed food, for example) or reducing portion size is actually doing the work. Without a controlled investigation,
we're fed a conclusion, but not the means to judge its validity. Now, detox cleanses might feel
like an easy target, but the argument structure, real observations leading straight to confident conclusions, can be far more subtle than a bottle of green juice. It shows up in supplement
marketing, in wellness claims, and in things that sound much more sophisticated than a cleanse.
What we're training ourselves to notice is the jump. Problem to conclusion, with nothing rigorous
connecting them. And while we're on supplements, here's a related example of why process questions
matter. You'll often see supplement companies claim their products are third-party tested. And that
sounds reassuring. But if you ask a process question (what specifically are they testing for?), the answer is often simply heavy metal contamination, which means you aren't getting assurance that your ashwagandha capsule contains ashwagandha. You're getting, at best, assurance that your ashwagandha capsule doesn't contain toxic levels of lead.
It's not that lead contamination isn't a problem, nor is it an overt lie that the product was tested.
But by omitting the step where you question the process, asking what the third-party testing was for specifically,
we can be lulled into a sense of confidence that the testing process in reality wasn't even designed to provide.
This is what evaluating processes looks like in everyday life,
not necessarily reading clinical trials per se, just pausing long enough to ask how someone got from the problem to the conclusion
and noticing when the answer is they didn't bother or they didn't bother to do it right.
Okay.
Third, notice when identity is doing your thinking. This is a hard one. Maybe the hardest on the list.
Coalitional thinking is our default mode for all of the reasons I described earlier. It is hard-wired into our DNA motherboard, and it can be the enemy of scientific thinking. No group is always
right. No political group, no activist group, no scientific group. If you find yourself believing
that your team has the right answer on every issue. That's not a sign that you found the right
team. It's a sign that your group identity is doing your thinking for you. There's a great
line from the movie Men in Black. A person is smart. People are dumb, panicky, dangerous animals,
and you know it. Individual thinking can be remarkably rational. Groups driven by identity
often aren't. The discipline is to consider arguments on their merits,
not based on where they're coming from. That means engaging with arguments from people you generally
disagree with and questioning arguments from people you generally trust. Let me give you two
examples. Most of us are familiar with Galileo and the heliocentric model. Galileo presented evidence
that the earth revolves around the sun. But this conflicted with Aristotle's physics,
which the church had adopted as essentially doctrine.
Galileo was tried, forced to recant, and spent the rest of his life under house arrest.
The evidence didn't matter because the conclusion threatened the identity and authority of the institution evaluating it.
This example is famous, but it isn't terribly relatable.
The example I find even more instructive comes from inside the medical community itself,
and it's actually something I wrote about in Outlive.
In the 1840s, Ignaz Semmelweis was working in the maternity ward of the Vienna General Hospital.
The hospital had two clinics, one staffed by doctors and one by midwives.
Mortality from childbed fever in the doctor's clinic was roughly five times higher than in the midwife clinic.
And Semmelweis wanted to understand why.
He systematically ruled out explanations, birthing positions, even the route taken by the clinic's priest, until a colleague died after cutting his finger during an autopsy on one of the childbed fever patients. And the colleague died of symptoms identical to childbed fever. This was the
light bulb moment for Semmelweis. The dead bodies were carrying something, and doctors were carrying
that material from autopsies directly to deliveries. Midwives didn't perform autopsies, so they couldn't carry material from the autopsy to the maternity patients.
He required physicians to wash their hands with chlorinated lime before working with maternity ward patients,
and mortality dropped from 18% to under 2%, and in some months all the way to zero.
And yet, the medical establishment rejected it.
Now, this is where it gets really interesting for our purpose, because the rejection wasn't
purely religious or political. It wasn't just that doctors didn't want to believe it.
Germ theory didn't exist yet. They had what sounded like a legitimate scientific objection.
The dominant theory of disease transmission was miasma. The idea that disease was caused by bad
air, by noxious fumes. Now, under that framework, the idea that invisible material on your hands
could transmit disease didn't make any sense. You can't wash bad miasmas off your hands. Semmelweis had gone through the right process, found the right conclusion, but he couldn't explain why his intervention worked in terms that fit any accepted theory of the time. His findings let doctors tell themselves and each other that they were rejecting Semmelweis on scientific grounds. His conclusions couldn't fit the prevailing theory.
But layered underneath that objection was something much more primal. Accepting Semmelweis' data meant accepting that doctors had been killing their patients,
that their own hands, the instruments of healing, the symbols of their professional identity,
had been vectors of death. That was an identity-level threat, and the stated scientific
objection gave cover to the unstated identity defense. That pattern, identity-based motivation hiding behind scientific-sounding skepticism, undoubtedly happens today. The lesson isn't that you
should distrust doctors. It's that even trained experts can resist evidence when accepting it
threatens identity, status, or the story they've been telling themselves, even in the face of
overwhelming evidence, even when the process is right. That's how powerful the pull of identity
can be. Becoming aware of it is difficult, but critical for scientific decision-making.
Okay. The fourth one. Don't confuse criticism with understanding. This one is practical and I think
underappreciated. In science, we need to respect the asymmetry between building knowledge
and attempts to discredit it. It is vastly easier to criticize a study than it is to design and run one.
It's vastly easier to poke holes in evidence than it is to generate evidence. This is just a structural fact about how science works. Every study can be criticized. I mean that literally. Show me any study ever published,
and I, or any expert in that field, can find a legitimate methodological concern. The sample
size could have been bigger, the follow-up period could have been longer, the control group wasn't perfectly matched, there's residual confounding, the primary endpoint was a surrogate, the population studied doesn't quite generalize the way the authors suggest. Those are real
concerns and they matter, but they apply to everything. So the question is not, can this study be
criticized? The answer to that question is always yes. The question is, is this study informative
despite its limitations? And answering that question requires a kind of judgment. A willingness to
synthesize, to weigh evidence, to say this isn't perfect, but it moves the needle, that pure
criticism doesn't require. There's a concept called Brandolini's law, the bullshit asymmetry principle, which says the energy needed to refute bullshit is an order of magnitude larger than what's needed to produce it. Someone committed to casting doubt can always outrun someone committed to building understanding. As the line often attributed to Mark Twain goes, a lie can travel halfway around the world while the truth is putting on its shoes. When hot-button issues arise, the goal of science is to move the
field forward, gathering better evidence, improving models. Be wary of people who only criticize and
never synthesize. It's not that scientists should be shielded from the public, but the goal
should be focusing on what is built, what data exist, and how this helps us get closer to the truth,
not playing whack-a-mole with whatever mess seems to be sticking to the wall this week.
Okay.
The fifth and final principle: outsource your thinking carefully.
It is important to recognize that no human being can exist in the modern world, given the sheer expansion of knowledge, without relying on the expertise of others.
This has nothing to do with intelligence.
It is simply not possible.
It is metaphysically impossible for any one individual to be a true expert across all domains.
And every day, each of us makes decisions, from boarding an airplane to navigating a complex system,
that depend on the judgment and competence of people whose domain knowledge exceeds our own.
There are effectively infinite examples of this reliance, which makes the central question unavoidable: how do we decide in whom to place our trust?
Before we dive in, I'll take a moment to step back and tell you where I want to take us.
The goal here is for you to build what I think of as a personal board of advisors.
For any topic that matters to you, identify two or three people or outlets whose judgment you trust,
and be honest about why you trust them.
When you find these people, have them help you cut through the noise.
When I'm evaluating whether or not to trust someone on a scientific topic, I run through a set of questions.
Not necessarily formally, not with a clipboard or an Excel sheet, but these are the things I'm thinking about when I'm reading a paper, listening to a podcast, or watching someone on YouTube.
I'm thinking about it in three layers. First, who is this person? Second, how are they thinking?
Third, what should make me cautious? And we'll walk through each of these layers.
Okay, layer one, I ask, who is this person?
What is their actual expertise?
We'll get this out of the way first.
Credentials aren't conclusive, but they're a meaningful starting point.
If someone has a PhD in molecular biology and they're talking about a molecular biology finding,
the starting probability that they know what they are talking about is higher than for someone
who learned about it from a YouTube video last week.
This is not elitism.
It's called Bayesian reasoning. Credentials set a prior, but that prior should be updated based on how the person actually reasons. Do they show their work? Do they engage with criticism? Do they acknowledge what they don't know? The worst mistake is dismissing the importance of credentials entirely. The second worst mistake is treating them as conclusive, which is to say that credentials aren't the whole story. With or without them, the question is the same. Has this person done the work? Are they
deeply embedded in the field, or are they weighing in on something they're passingly familiar with?
How long have they been at it? What is their track record? And it's worth remembering nobody is
an expert in everything. There are several examples where a Nobel laureate in one field
goes on to make outlandish and conspiratorial claims that contradict every single expert in a different field. In fact, this was the case with Kary Mullis, the inventor of PCR and winner of the Nobel Prize in chemistry, who denied that HIV was the causal virus in AIDS. His ideas drove policymaking in South Africa in the early 2000s and likely led to the deaths of more than a quarter of a million people.
The people you trust on one topic may be completely out of their depth in another. Someone can be
the right person to trust in one domain, but the board of advisors needs other people to fill in the gaps.
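Earlier I said credentials set a prior that should be updated by how the person actually reasons. Here's a toy sketch of what that Bayesian update might look like in Python. Every probability below is invented for illustration; the point is only the structure: credentials set the starting estimate, and each observation of the person's reasoning moves it up or down.

```python
# Toy sketch of credentials as a Bayesian prior. All probabilities
# here are invented for illustration, not measured from anything.

def update(prior, p_if_reliable, p_if_not):
    """One Bayes update: P(reliable | this piece of evidence)."""
    numerator = prior * p_if_reliable
    return numerator / (numerator + (1 - prior) * p_if_not)

# Assumed prior: a relevant PhD speaking inside their own field.
p_reliable = 0.70

# Each observation: did they show their work (cite data, engage
# critics, admit uncertainty)? The conditional rates are assumptions.
for shows_work in [True, True, False]:
    if shows_work:
        p_reliable = update(p_reliable, p_if_reliable=0.8, p_if_not=0.3)
    else:
        p_reliable = update(p_reliable, p_if_reliable=0.2, p_if_not=0.7)
    print(f"P(reliable) = {p_reliable:.2f}")
```

Notice the design: the prior never locks anyone in or out. Strong credentials plus sloppy reasoning drifts down; modest credentials plus transparent reasoning drifts up.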
Another big question to ask when judging someone's credibility is how they present their ideas. Are they explaining or performing? Science uses a lot of technical language, jargon that isn't necessarily a part of normal day-to-day speech.
How the jargon is used matters.
Using jargon without context is kind of a hallmark of an attempt to mislead a listener,
deploying technical terms to impress an audience rather than inform them.
It is the performance of being scientific sounding in an attempt to appear credible
rather than using technical terms to add precision to complex ideas.
We use jargon on this podcast, but we try to use it to bring you with us, not to create a gate.
If someone is hiding behind jargon, instead of using it to elevate their audience,
they could very well be hiding something in hopes the audience won't catch it.
Credentials and familiarity with technical language are good on paper, but how they're utilized, how this person interacts with their field and with their audience, are the deciding factors in building trust.
Which brings us to the second layer, how are they thinking?
Now this layer is of course critical, but it's a bit more nuanced because again, science
is a process.
We want to know the person we are listening to is engaging with a process, not simply a series
of conclusions.
So the first question here is: do they show their reasoning? Not just the final answer, but how they got there, why they believe what they believe, what evidence they're relying on, and what alternatives they've considered. Transparent
reasoning is one of the clearest signals of someone worth listening to. We also want to know
how they treat disagreement. Pay attention to how an expert talks about people they disagree with.
A steelmanner presents the strongest version of the opposing argument and then explains why they still disagree. A strawmanner presents the weakest version of the position so that it's easy to knock down.
If someone consistently engages with the best versions of the other side, they're doing real
intellectual work, and they're far more likely to update when the opposing case gets stronger.
If they only attack the weakest version, they're performing, and not seriously engaging
with the fundamental possibility that their conclusions could be wrong.
And to know how they're reaching their conclusions, how they are engaging with disagreements, we ask: are their opinions anchored to data?
Let's go back to Richard Feynman, one of my favorite thinkers, but this time in his own words from a lecture that he gave probably sometime in the 1960s.
Now I'm going to discuss how we would look for a new law.
In general, we look for a new law by the following process. First, we guess it.
Then we compute the consequences of the guess to see what, if this is right, if this law that we guessed is right, we see what it would imply. And then we compare those computation results to nature, or we say compare to experiment or experience, compare it directly with observations to see if it works.
If it disagrees with experiment, it's wrong.
In that simple statement is the key to science.
It doesn't make a difference how beautiful your guess is.
It doesn't make a difference how smart you are, who made the guess, or what his name is.
If it disagrees with experiment, it's wrong.
That's all it is to it.
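Feynman's loop, guess, compute the consequences, compare with experiment, is concrete enough to sketch in a few lines of Python. The "experimental" data below is simulated free-fall distances, and the candidate laws and error tolerance are invented for illustration; the point is the structure: whatever disagrees with experiment is discarded.

```python
# A minimal sketch of Feynman's loop: guess a law, compute what it
# implies, compare with experiment, discard what disagrees.
# The "lab data" and candidate laws here are invented for illustration.

g = 9.8                                            # m/s^2
times = [0.5, 1.0, 1.5, 2.0]                       # seconds
measured = [0.5 * g * t**2 + 0.05 for t in times]  # pretend lab data

guesses = {
    "d = g*t      (linear guess)": lambda t: g * t,
    "d = 0.5*g*t^2 (Galileo)":     lambda t: 0.5 * g * t**2,
    "d = g*t^3    (cubic guess)":  lambda t: g * t**3,
}

for name, law in guesses.items():
    # Compute the consequences of the guess, then compare with experiment.
    worst = max(abs(law(t) - d) for t, d in zip(times, measured))
    verdict = "survives, for now" if worst < 0.5 else "disagrees: wrong"
    print(f"{name}: max error {worst:7.2f} m -> {verdict}")
```

Note what the loop does and doesn't say: the surviving guess isn't proven, it has merely not yet been ruled out, which is exactly the posture described earlier in this episode.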
Concepts matter.
Mechanisms matter.
Hypotheses matter.
But data are the anchor.
If someone relies mostly on gut, charisma, or an elegant story that hasn't been tested, that's a red flag.
On the other hand, if they can cite studies, discuss limitations, and locate key results within a larger body of evidence, that tells you they've done the work.
Data and experimentation are the currency of scientific research.
Without understanding what data exist and how those data were collected, we are missing the bedrock upon which scientific conclusions are built.
Finally, do they acknowledge uncertainty and have they changed their mind?
These two go together. Someone who tells you what they don't know, not just what they do know, is operating with the right kind of humility.
And someone who has publicly changed their mind, who has said: I used to think X, now I think Y, and here's what changed.
That's one of the strongest signals available. The social cost of changing your mind publicly is enormous.
Doing it anyway tells you this person cares more about getting it right than being perceived as an infallible authority.
These people, the people who care about getting it right, are the people we want to trust.
With these positive signs to look for, it's also worth considering the opposite.
This is our third layer.
What should make you cautious?
Or what are the red flags to look for when building our board of advisors?
First up is asking: how do they make money? What is their reward? Does this person's value come from providing insight and truth, from being consistently right and useful over time? Or do they make money by selling
products or engagement? If a person's scientific pitch always ends in a link to their company
or their affiliate link to their supplement or a promo code, you are listening to an advertisement,
not science. Now, sometimes scientists endorse or make deals with products they genuinely believe in.
That absolutely happens. But if someone is always making money off a product rather than off education,
that's a major red flag. Their financial incentive is not aligned with your well-being. It's aligned
with your purchasing behavior. Those are not the same thing. The more insidious version of this is
engagement-based incentives. If someone is always contrarian, always telling you everyone is lying except them, especially on platforms like TikTok, YouTube, and Instagram, their business model is your engagement.
What gets engagement? Outrage. Contrarianism. The feeling that you're getting secret knowledge and that if you're not listening to them, you're getting duped.
Their product isn't getting truth to you. It's delivering your attention to advertisers.
Another key question when looking for who to trust is to ask, is their position consistent with the weight of the evidence?
Here is where I want to talk about scientific consensus and how to best interact with it.
Scientific consensus is not a vote. It's not a popularity contest among scientists.
Consensus forms when the evidence in a particular area becomes so overwhelming that virtually all qualified people who genuinely engage
with it arrive at the same conclusion. This does not make it infallible. Consensus has been wrong
before and will be wrong again. But the prior probability that the consensus position is accurate
is much, much higher than the prior probability that any individual dissenter is correct.
Now, countering consensus is part of the scientific process. It's very important. It's how science
moves forward. But, and this is the key, it should be built on critiques of data interpretation,
or on new data, or on identification of missing pieces of information. Consensus is built on data.
It takes data, therefore, to change the consensus. If someone is opposing the consensus based on
vibes or ideology, or the claim that everyone else is corrupt or compromised, that's not science.
That's identity. It is, again, performative. And for our final red flag: is this person always right, and everyone else always wrong? Dissent is a normal part of the scientific process, but is the
consensus really always trying to trick us? And is one person or group really the only clear thinker
in a sea of charlatans? The reality is that we are all wrong to some degree. The difference is that
serious thinkers, serious scientists, work to become less wrong over time.
There is not always a boogeyman.
There is not always a conspiracy.
In science, there is always more data to collect.
And I want to address something really directly here.
This same fact that science updates and sometimes gets things wrong
gets used as a weapon against science itself.
You'll hear people say,
They used to say eggs were bad, so why should I trust them about anything?
That sounds compelling for about three seconds.
Yes, science got the egg cholesterol story wrong.
But it updated it.
That is the system working.
It finds weaknesses in the armor and seeks to repair or replace it.
But the person who uses that to argue that all guidance is equally unreliable,
well, that's just nonsense.
That's throwing the baby out with the bathwater. Some things have been tested so exhaustively that the remaining uncertainty is vanishingly small.
And treating all scientific conclusions as equally shaky isn't skepticism. It's a performance
of skepticism designed to let you ignore what you find inconvenient. It's an attempt to rope you into their camp, circumventing your ability to critically evaluate claims on their own unique strengths and weaknesses. We will continue to make mistakes. But we update with data. If you have a legitimate challenge, show me the data; science will follow. That's what it's built to do.
The existence of past errors isn't evidence that all current conclusions are wrong. It's evidence
that the process works. If I had to distill this into a single practice, it would be this: before you accept a claim from anyone, run through these questions.
Who is this person?
How are they thinking?
And are there any red flags that should make me cautious?
You won't run through every sub-question every time, but the more often you do,
the better your filters get.
And as a final point, look for people who reason the way science reasons.
People who say things like: if X were true, we'd expect to see Y; but you don't see Y, so X becomes less likely, and Z is our best current explanation. That logical structure, ruling things out, building confidence in what survives,
is the heartbeat of good science.
It isn't the only thing that makes a good scientist, and it isn't how good scientists always talk about science, but when you see it, you'll know: ruling things out is the argument structure of someone deeply familiar with the scientific process.
And even when they're trying to simplify the narrative, they can't help but slip in the
fundamentals. All right, let's land this plane. You do not need to become a scientist to think
more scientifically. You need to get better at three key things: noticing when certainty and identity
are misleading you, judging the quality of a process rather than just the conclusions it's
producing, and choosing who to trust when you can't do the analysis yourself. The goal is not
perfect certainty. That isn't what science can offer us. It offers us a disciplined way to become
less wrong through time. The goal is better calibration, better judgment, and a willingness to update.
And that's a goal every one of us can work toward. Thank you for listening to this week's episode of The Drive. Head over to peterattiamd.com forward slash show notes if you want to dig deeper into this episode.
You can also find me on YouTube, Instagram, and Twitter, all with the handle Peter Attia MD. You can also leave us a review on Apple Podcasts or whatever podcast player you use.
This podcast is for general informational purposes only and does not constitute the practice
of medicine, nursing, or other professional healthcare services, including the giving
of medical advice.
No doctor-patient relationship is formed.
The use of this information and the materials linked to this podcast is at the user's own risk.
The content on this podcast is not intended to be a substitute for professional medical advice,
diagnosis, or treatment.
Users should not disregard or delay in obtaining medical advice for any medical condition they have, and they should seek the assistance of their health care professionals for any such conditions.
Finally, I take all conflicts of interest very seriously.
For all of my disclosures and the companies I invest in or advise, please visit peterattiamd.com forward slash about, where I keep an up-to-date and active list of all disclosures.
