This Podcast Will Kill You - Special Episode: Adam Kucharski & Proof
Episode Date: April 21, 2026

Why do we believe what we believe? Is what we believe the truth? How can we convince others of our beliefs? If you’ve ever found yourself pondering these questions, you know that the answers are... rarely clear-cut. We need to form beliefs in order to navigate the world, but how skilled are we at evaluating evidence for those beliefs or weighing new data that contradicts them? In this week’s TPWKY book club episode, Adam Kucharski, Professor at the London School of Hygiene and Tropical Medicine, joins me to discuss his latest book, Proof: The Art and Science of Certainty. With this book, Dr. Kucharski presents a compelling and thoughtful examination of the concept of proof, delving into topics ranging from the justice system (what’s a reasonable doubt?) to infectious disease, clinical trial design to the founding of this country. And he leaves us with a powerful lesson: what convinced you of something might not convince someone else. Tune in for a fascinating conversation!

Support this podcast by shopping our latest sponsor deals and promotions at this link: https://bit.ly/3WwtIAu

See omnystudio.com/listener for privacy information.
Transcript
This is exactly right.
This season on Dear Chelsea, with me, Chelsea Handler,
we have some fantastic guests like Emilia Clarke.
When like young people come up to me and they want to be an actor or whatever,
my first thing is always, can you think of anything else that you can do?
Rather be disappointed in.
Do that.
David Oyelowo.
I love this podcast, whether it's therapy or relationships or religion or sex or addiction
or you just go straight for the guts.
Denis Leary, Gaten Matarazzo from Stranger Things,
Tana Monjou, Camila Morrone, Carrie Kenny Silver, and more.
Listen to these episodes of Dear Chelsea on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.
I'm Amanda Knox, and in the new podcast, Doubt, the case of Lucy Letby,
we unpack the story of an unimaginable tragedy that gripped the UK in 2023.
But what if we didn't get the whole story?
Evidence has been made to fit.
The moment you look at the whole picture, the case collapsed.
What if the truth was disguised by a story we chose to believe?
Oh my God, I think she might be innocent.
Listen to Doubt, the case of Lucy Letby on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
Hi, I'm Erin Welsh, and this is This Podcast Will Kill You.
You're tuning in to the latest episode of the TPWKY Book Club,
where I chat with authors of popular science and medicine books about their latest work.
Since starting this series a few years ago, I've gotten to cover some amazing books,
and I appreciate so many of you reaching out with your suggestions for books to feature.
Keep those recommendations coming, please.
And if you'd like to take a look at the full list of books that we've covered in this series,
as well as get a sneak peek at ones that are coming up in future episodes,
head on over to our Bookshop.org affiliate page,
which you can find on our website,
This Podcast Will Kill You.com, under the extras tab.
On the bookshop page, you'll find several podcast-related lists,
including one for this book club and the TPWKY Kids Book Club,
which if you're not following us on social media,
you absolutely should be because Erin Updyke has been putting together videos
reviewing children's books.
It is such a great resource for sciencey kids' books for all ages.
And if you want to share your thoughts on these episodes,
make topic suggestions, submit a first-hand account, you can get in touch with us using the
contact us form on our website. Two last things before moving on to the book of the week.
First, please rate, review, and subscribe. It really does help us out. And second,
you can now find full video versions of most of our newest episodes on YouTube. Make sure
you're subscribed to the Exactly Right Media YouTube channel so you never miss a new episode drop.
Belief is a powerful force. It shapes
every facet of our lives and transforms perception into reality. What we believe to be true is not always
what is actually true, something I'm sure we can all relate to. Maybe you've debated with a friend
over the answer to a trivia question, like you both know the right answer, but your answers are
somehow different. Or maybe you've had a heated exchange with a relative who firmly believes that
the moon landing was faked. How do we decide what we believe? How can we
know that what we believe is the truth? And how can we convince others of that? These are precisely
the questions that Adam Kucharski, professor at the London School of Hygiene and Tropical
Medicine, asks in his latest book, Proof: The Art and Science of Certainty. Kucharski, a
mathematician who works on infectious disease outbreaks, explores how we are inundated with
information, and increasingly misinformation, that we have to evaluate to determine
whether or not we should incorporate it into our decision-making. This extends beyond personal
decisions: which route is best to take to work, what to make for dinner. Our world is built
upon structures of proof, with varying degrees of support. That car that you drive to work is
manufactured under rigorous safety testing, meaning there are established guidelines for what is
considered safe and how to test that. Same thing with the food we eat, the medicines we take,
the buildings we spend time in. We don't question so many of our beliefs. To do so would leave you
frozen, uncertain of which direction to move in, what to trust. You'd have no time to actually live
your life. But when we do scrutinize our certainty, we might find a gulf between our beliefs
and someone else's, or between those beliefs and the objective truth. Where does that incongruity originate?
Why are we skeptical about some things and not others?
What does it take to make up our mind?
And what does it take to change it?
That answer might not be the same for everyone.
An enlightening blend of philosophical musings, political commentary, statistical exploration, and personal reflection,
Proof is a fascinating read, particularly as this unceasing flood of information, both good and bad,
shows no sign of stopping.
Let's take a quick break and then get into things.
Professor Kucharski, thank you so much for joining me today.
Thanks for having me.
I am thrilled to talk with you about your latest book, Proof: The Art and Science of Certainty.
And before we dig into the various forms of proof and how we determine a threshold for proof
or what different types of proof exist for certain situations, I want to start at the very beginning.
What is proof?
Is there a standard definition?
Yes, I think that's a great question.
And I think my background's in maths.
So I think a lot of my kind of training was around this idea
that you can have this definitive knowledge
that something is true.
And I think it's something that people grappled with across fields.
I mean, one of the stories that really struck me
was Abraham Lincoln when he was training to be a lawyer
came across this word, demonstrate.
And yeah, this kind of beyond reasonable doubt,
this certainty.
And he's like, I don't really understand
what this is as a concept. And he actually went back to all of these ancient Greek
mathematical texts to understand how can we, you know, take the knowledge we have,
build on that, prove new theorems, use that to prove subsequent knowledge. But I think one
of the things that was really the motivation for the book and something that I think
anyone who works with information and decision making and evidence happens across very often
is it can become quite a shifting concept. I mean, even
in mathematics, things that people thought were proven turned out to have some hidden assumptions
or human judgments that were kind of lurking there and caused a lot of that to collapse. So I think
it's a kind of fascinating concept because it's something that's so important in life,
not just having knowledge that we gradually accrued, but for many of the things we care about,
whether it's dealing with an emergency, whether it's a legal case, whether it's even just a minor
business decision in our day, we have to work out where we set the bar and how we evaluate
what we've got. And I think for me, that was really the launching off point to explore this.
You know, how do we converge on certainty and what happens when it goes wrong?
Thinking about the difference between proof and certainty and truth, like what is the
relationship between those concepts?
Yeah, I think that's a great question.
And without going down the kind of philosophical rabbit hole, because there could have been a whole book on what
reality is. Yeah. But I think the way that I approached it is just to look at how people have
thought about this in different fields. And again, even going back to Lincoln and much earlier,
there was this appeal of this certainty, this idea that there could be this universal truth.
And it's why a lot of fields ended up borrowing from mathematics. You see it in the US Declaration
of Independence. Yeah, we hold these truths to be self-evident. The original draft was,
we hold these truths to be sacred and undeniable. But Benjamin Franklin didn't like that because it
sounded like they were kind of appealing to some divine authority. And self-evident is just borrowed
directly from maths. It's just a given truth. And unfortunately, it turned out all of these
things about equality weren't self-evident. But I think that story of how you think about
these things. And even in the legal world, a lot of it was originally derived from
concepts around maths, around probability. If you talk about, you know, some of these
thresholds, preponderance of evidence, you're saying it's more likely than not. And you're kind of
borrowing a lot of these kind of probability-based ideas. And even in the world of
experimental design, as that kind of developed, a lot of it was about, I mean, actually some of these
early studies were almost trying to discount some of the influences of religion; you're wanting
to understand cause and effect in the world rather than just appealing to some other influence.
And then it, for a lot of people, it became this question of how do you take the evidence you have
and how do you link that to a conclusion that you want to make?
And where do you set the bar for that?
Do you try and get ever closer to certainty?
And there was actually a lot of statistical tension about 100 years ago.
I know statistical debates kind of sound a bit boring.
But it was actually this real, you know, people just, you know, almost like wouldn't talk to each other.
Because it was this tension between do you just try and get ever closer to the truth?
Or do you have a framework that allows you to make decisions?
And I think a lot of times in life, we don't get the academic option.
I'm just going to sit on the fence.
I just don't know, and I'm just not going to do anything with my life or actions.
But often we have to decide.
We do something or we don't do something.
Or we say someone's guilty or we let them go free.
Or there's these decisions we have to make.
And so that process of interacting with evidence is under much more pressure.
And I think that was one of the real big tensions that never fully got resolved.
Actually, even how we teach statistics at school, we kind of smush together these two very different
philosophies, one of this ever higher bar for evidence and one where we're sort of outlining a
framework to make a decision based on the knowledge we have. When it comes to public health and
medicine, there's a much more pressing, you know, need to make decisions. And yet these decisions are
often dragged out for long periods of time. And sometimes that is at the urging of, you know,
someone who has an incentive to drag out a decision. So one of the examples
that you talk about is Austin Bradford Hill, who was talking about this relationship between
cigarettes and lung cancer and saying, oh, we have some evidence, and there's still a lot of skepticism,
but we have enough to make a decision. We cannot use uncertainty as an excuse for inaction.
Do you feel like that, like we've ever truly learned that as a society, or has it been, you know,
players like the tobacco industry saying, oh, no, this uncertainty, you know, we need to push for
more and more and more evidence.
Yeah, I think that's a really good question.
I think that's a really good example of almost kind of weaponized uncertainty, that you
can always set the bar higher in any aspect of life, you can set the bar higher and higher
to the point where you just won't do anything.
And inaction, of course, is in itself a decision.
And I think in Bradford Hill's work, he was extremely thoughtful in how he approached this,
because something like smoking, you can't really design it like a trial:
let's get people to randomly take up smoking and see if they get cancer. There's obviously ethical
reasons there. There's also just timeline reasons. If you look at the time scale of the
intervention versus what happened, you might have to wait decades to have that clear signal.
And so he did a lot of pioneering work with others linking together the various sort of non-random
data sets you had available. Because one of the criticisms, of course, with any such data is: yes,
smokers are more likely to get cancer, but maybe there's a genetic reason that makes them more
likely to smoke and get it. And he outlined a lot of the ways we can think about cause and
effect. And I think that's a very useful set of concepts. Even some of them are the obvious ones:
the cause needs to come before the effect. Or that, you know, you have this strength of
association: more cigarettes make you more likely to get cancer. Or if you see that across
multiple countries, or if you can start to think about, you know, the biological plausibility,
we see carcinogens in other kind of situations as well. None of those things on their own is
conclusive, but together you can start to build this evidence base. And he made this really good point that
any knowledge we have, even if it's very confident knowledge, is always subject to further refinement.
But we still have that knowledge at that point in time. And we can seek further information.
There's been lots more studies of smoking since their early ones. But also, that's information that we
have to do something with. And I think we often, particularly in a situation with emerging threats or
or kind of early concerns about things,
whether it's a health intervention we think might be harmful.
I mean,
one of the examples I give in the book is the work at the FDA around thalidomide,
which was this treatment for sickness in pregnancy.
And, yeah, there was actually a lot of concerns about safety for babies
and the FDA blocked it as a result.
But on the other hand, you get things where there might be a lot of value,
for example, in reducing smoking for health outcomes.
And even if there's that uncertainty,
Bradford Hill made this nice point that actually the standard you should apply for taking action kind of depends a bit on the situation you're dealing with.
If it's a fairly cheap action to take, if it's not too disruptive for people.
But actually in his argument, he said smoking is something people really enjoy.
So we need a kind of higher bar.
And I think it's a reasonable point.
If you're going to tell a lot of people to change how they live their lives, the evidence you need is perhaps different from something where you can
take some action and you can unwind that.
So it is those kind of trade-offs that you have available
that obviously need to play in as well.
Let's take a short break.
And when we get back, there's still so much to discuss.
There's two golden rules that any man should live by.
Rule one, never mess with a country girl.
You play stupid games, you get stupid prizes.
And rule two, never mess with her friends either.
We always say that trust your girlfriends.
I'm Anna Sinfield, and in this new season of The Girlfriends,
Oh my God, this is the same man.
A group of women discover they've all dated the same prolific con artist.
I felt like I got hit by a truck.
I thought, how could this happen to me?
The cops didn't seem to care.
So they take matters into their own hands.
I said, oh, hell no.
I vowed. I will be his last target.
He's going to get what he deserves.
Listen to the Girlfriends.
Trust me, babe.
On the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
Ever feel like you're being chased by the marriage police?
Welcome to Boys and Girls, the podcast where dating isn't dating.
Arranged marriage is basically a reality show,
except the contestants are strangers and your entire family is judging.
You're sipping coffee with one maybe, grabbing dinner with another,
and praying your karmic Ken or Barbie appears before your shelf life runs out.
Trust me, I've been through this ancient and unshakable tradition.
I jumped in, hoping to find love the right way,
and instead I found chaos, cringe and comedy.
And now, I'm looking for healing.
Boys and Girls dives into every twist and turn of the arranged marriage carousel,
the awkward meets, the near-misses, the heartbreak,
and let's not forget all the jokes.
Listen to Boys and Girls on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
This season on Dear Chelsea, with me, Chelsea Handler, we have some fantastic guests like Emilia Clarke.
When like young people come up to me and they want to be an actor or whatever,
my first thing is always, can you think of anything else that you can do?
Rather be disappointed in.
Do that.
Denis Leary.
I wake up and I'm hitting him in the head
with a water bottle.
And Bruce Jenner is on the aisle in a karate stance
like he's about to attack me, like,
making karate noises.
And his entire, the Kardashian family over there,
everybody's going,
and the air marshal is trying to grab my arms and screaming.
I immediately know that I've been sleepwalking.
David Oyelowo.
I love this podcast, whether it's therapy or relationships
or religion or sex or addiction or you just go straight for the guts.
Guy Branum. So anyway, Nicole Kidman broke up with Keith Urban. Being half of a country couple was always a hat she was going to wear, not like a life she was going to lead.
Oh, interesting. I like that. Did you practice that on your way over?
Gaten Matarazzo from Stranger Things, Tana Monjou, Camila Morrone, Carrie Kenny Silver, and more.
Listen to these episodes of Dear Chelsea on the iHeartRadio app, Apple Podcasts, or wherever you get your
podcasts. Welcome back, everyone. I've been chatting with Dr. Adam Kucharski about his book, Proof,
The Art and Science of Certainty. Let's get back into things. Right. The threshold for
certainty can be different depending on the situation. And then there's also these
personal thresholds for certainty or evidence, you know, how much information do we need? And
one of the things that you discuss in your book as well is sort of what happens when evidence
flies in the face of our personal beliefs and how sometimes even despite a mountain of evidence,
we can just still feel like that's not possible. We can't accept it. It's not an intuitive
truth. You know, what happens? Like, what does this show us about sort of the personal nature
of proof and certainty? Yeah, I think that's one of the things that really kind of struck me
in researching that. I mean, even in some of these kind of mathematical puzzle examples,
it's things that I'd come across as a kid and convinced myself, like, oh, that's just, that's the answer to the puzzle.
And it was only years later when I was explaining it to someone else or someone else had asked me about it.
And I sort of went through the thing that convinced me, and it just didn't convince them at all.
And I think that's a really interesting gap.
I think we focus a lot on, you know, how science works, how methods work, what convinces us.
And you see this in even a lot of studies around political beliefs, that people will often try and convince others with arguments that convinced
them. And then you get this gap and it's almost like that just fails. And I think that's a really
interesting step to explore. But why does that fail? And one of the things that I find even just
kind of in some of the modern tools we have in the modern era quite striking is where we have
this desire to explain things. So yeah, a few years ago I was talking to a bunch of people working on
AI and there's a lot of sort of concern about things like self-driving cars. Like we don't understand
why they make mistakes. We need that explainability. We can't have things we don't trust. And actually
in medicine, we have all sorts of things that we know work, we know how often they work.
We don't fully understand the physics and biology. Something like anesthesia, for example,
you know, you can control the effect it's going to have, but actually all the underlying
biology and kind of physics mechanisms, there's still more work to be done. Things like defibrillation,
you know, if you give a heart a shock, you can kind of reset it. Again, some of that's understood,
but there's still those kind of gaps in knowledge. But we know that these are useful tools. And even if
you run a clinical trial, you can assess how effective a treatment is.
But that on its own will just tell you the effect.
It won't tell you necessarily all the mechanisms that are going on to explain it.
But there's these tools that we've got, and we've got the evidence to take action and use these things we're very happy with.
And there's other areas of life where actually that inability to explain something kind of really bothers us.
Even if self-driving cars were much safer than humans.
And humans, as I found when I started looking into it for the book, humans are not good at driving.
You know, it's not a massively high bar.
But I think it would still make people uncomfortable, even if they were, say, twice as safe
and the conditions in cities were very well defined.
I think it would still bother people if every now and again
there was just an accident that we had no real idea
of what was happening.
And I think that's really important to bridge
because I think that particularly when you get that gap in understanding,
that's room for other explanations to kind of creep in.
And I think that's where we start to see the emergence
of things like conspiracy theories,
or things with kind of incorrect logic.
Often it is that gap between what we're seeing
and the understanding of why that's happening.
I think humans have this very, in many ways, very powerful desire to explain what they're seeing,
but in some cases where the explanation is very hard to untangle, it can lead us astray.
That's fascinating to think of the gap between understanding and what is happening.
We don't understand how anesthesia works or how Tylenol or acetaminophen truly works,
but we do understand how vaccines work, for instance,
and yet there's so many conspiracy theories and misinformation surrounding
this thing that we do know how it works. I guess what good does evidence do if we do not take it
into account and are not open to it? Yeah, and I think for me, a lot of it is just understanding at
what point that breaks down. I mean, even if you look at some of the COVID vaccines, for example,
or even some of the kind of other debates around climate intervention, other things, you know,
often it gets very into debating some element of the technology. And I think often it's actually
just that people disliked some of the control that was exerted over them through mandates or other
things. And actually, you know, if you've got an intervention that you're unhappy with, you can
disagree with the intervention and say, look, for example, we know that the intervention works, but I disagree
with how you're implementing it. Or you can disagree that the intervention actually has an effect.
Or you can get even one step down and just say, you know, actually I think there's sort of deeper
problems or maybe the disease isn't a threat. And I think often those kinds of
levels get tangled up. And I think in a lot of conversations I've had with people, often their
deep down concern or the thing that they're approaching it with isn't necessarily that they've just
out of nowhere decided that this isn't a threat or that that technology doesn't work. It's actually,
in some of these instances, things are a bit more marginal. And you could say, you know,
you can make an argument either way, even if the underlying intervention is effective or is going to
have this effect. You know, there are moral dimensions to this too. It's not
just about an epidemiological question.
And so I think kind of understanding where those drivers are,
and also just in our own arguments,
I think sometimes I have the conversation with people,
and I think I'm just arguing about the kind of the nuances
of whether intervention is a good idea or not,
and they're actually arguing whether it's a problem in the first place.
We see it with vaccines, which are, I guess, an example that's more polarised,
but even something in climate, you know,
you can have a lot of people who just agree on the nature of the threat of climate change.
They agree on the different levers that we probably have
available to society, but they might strongly disagree about actually how we prioritize those and all of the
trade-offs. And I think it's just understanding what level we're on and where the evidence might
stop and where it might then just be other things that are filtering in on a personal level.
This idea of proof and certainty and truth, that seems very intuitive in a lot of ways today,
but this maybe wasn't always the case. Like, when did the concepts of truth and the need for
these self-evident truths or certainty or proof, when did these come to be and then, you know,
in what fields or what areas were they initially applied? Yeah, I think that's a great question. It's easy
just to think that the world, and sort of science and evidence, just always was as it is. I mean,
even in mathematics, this idea that we had a universal truth wasn't the same throughout history.
If you go back to the ancient Egyptians, ancient Babylonians, they were much more focused on problem
solving. A lot of their texts are these kind of puzzles, and very much things around
kind of practical everyday problems. And even if you look at, you know, their formulas for an area
of a circle, they're quite approximate. And if you're building something that needs quite a large
circle, you're probably going to be okay using those. But it's not going to give you that really
precise truth no matter what problem you're working on. And that's something where the ancient
Greeks, mathematicians like Euclid came in and tried to put things on much more solid footing. So you've
got these concepts like pi, so that if you want the
area of a circle, that will just be universally true and you won't have this issue of your kind
of approximation breaks down. And it was then, I think that as it sort of came into the Enlightenment,
it was very appealing for people that you could have these undeniable truths about the world.
And I think that's where a lot of other fields started borrowing from as well. But even in medicine,
if you look at the study of cause and effect, it was in the sort of medieval Arabic world
that a lot of that started to emerge.
So a lot of the kind of superstition,
this idea that disease or conditions
just kind of come out of nowhere
and it's bad people or someone's a witch
or this kind of stuff that was going around
in much of Europe at the time.
There was a lot of early writings
even around the 11th century
saying these aren't supernatural.
There's natural causes and we can study them.
Yeah, we can study them.
We can work out what the causes and effects were.
A lot of early attempts to try
and think about concepts
that we would now call things
like having a control group or thinking about how we kind of, you know, would divide and treat some
people and not treat some people and then compare the difference. The conclusions didn't always
work out. I mean, there was, I think one of the earlier studies was someone who'd identified
correctly the symptoms of meningitis, but then concluded that bloodletting was really effective
for it, so probably something in their study design had gone astray. But again,
it's one of the things you look back on and you think
it's pretty obvious that we should be doing it that way. But even coming into the 20th century,
if you look at something like analyzing a medical treatment, a lot of the early studies used an
alternation method. Because if you think about it, rather than randomising patients, you could just
say, well, the first patient that comes in, I'm going to treat the second, I won't, the third I will,
fourth I won't. And on average, you should get something that any other sources of variability
should balance out and the difference in those groups should be, on average, down to the treatment
in effect. But Bradford Hill actually, who did a lot of the pioneering work in the early
sort of clinical trial space, noticed that the groups were often imbalanced. Because what was
happening is patients were coming in and doctors were maybe subconsciously thinking, oh, maybe that
person looks a bit ill, I'll enroll them, or maybe they don't meet the diagnosis. So actually,
a lot of the early randomization in medicine wasn't about the statistical properties of the trial design.
It was just to keep humans from themselves, because we couldn't trust subconscious judgment, making
sure humans didn't muck things up, basically, with their internal biases. Well, I mean, we'll find a way,
I'm sure, somehow, some way. It's interesting to think about this idea of like self-evident
truths, thinking back to, okay, yes, there's superstition and this person is a witch based on
these signs or whatever. Was that also viewed as proof? The story of those trials by ordeal is a
fascinating one as well, because they were used for a long time. You could have trial by
ordeal, like by water or by fire or whatever. And you could also choose trial by duel. So basically
the big criminals would always pick that. And people started to notice, like, oh, you know, if God is
deciding which one's innocent, God tends to pick the bigger one, like pretty much every time.
But actually, one of the reasons they stopped using
them is a lot of the religious scholars became concerned that,
by running those trials, you're essentially trying to get God to do your work for you.
And that felt for them a bit awkward because you're sort of on demand saying, hey, can you come and make a decision for us?
Which they sort of got quite uncomfortable with.
But even those early systems, I mean, early juries in England were kind of fascinating because they weren't the structure that we had today.
They kind of did their own investigation.
So often someone was accused and then they went off and accused someone else and kind of did their own thing.
And it was only over time that the system kind of evolved of weighing things up and converging
to something. And I think that's, you know, we talk a lot about the sort of problem of black boxes.
But to some extent, with juries, and talking to legal scholars about this was kind of interesting,
that it's not so much about getting to the truth.
It's having a system where you can reach a decision and you've got kind of that finality
or semi-finality to that decision, and having a system that works rather than, you know,
being 100% convinced of that.
And I think we see that kind of across different fields of that emergence of truth.
And as he said, what's kind of obvious and what's self-evident.
I mean, one of the other things that I found kind of interesting was how many mathematicians
were deeply influenced by religion.
So Newton, for example, Isaac Newton, deriving all these equations and theories about planets and planetary motion: he saw it as God keeping the universe in balance. He was essentially just observing divine influence. So for him, although he was doing a lot of this scientific work, he saw that there was this
external influence keeping it all in place along the way. So even quite far through history,
you had these kind of other baseline explanations going on. I think even in the modern era,
I think the way we sometimes tell the story of science is almost a bit disingenuous.
If you read a scientific paper, it's kind of, yeah, there's this problem and I decided to run this
experiment and I got these results. But I think there's also just that element of, like, what was the hunch that made you think that that might be an interesting thing to investigate?
What was that spark of inspiration?
I think even in this era of AI, it's a really interesting question because, you know, AI can kind of process and mimic human decisions as we write them down.
But I think there's often that kind of spark or that idea that would lead you to do something that just people wouldn't have tried before.
And that's much, much harder to articulate.
So it's not necessarily the kind of obviousness that we might have had in another era, but I think there still are those things which are quite hard to explain, in terms of where that evidence might have initially sparked from.
One of the things that you mentioned was the use of proof and evidence in the legal system.
And I feel like this was a really fascinating discussion in your book as well where this is
employed as like, you know, proof beyond a reasonable doubt or innocent until proven guilty.
What does this show us about, like, the variable level of evidence needed to make a decision, and, I guess, the different forms that proof can take in this setting?
It's a really interesting question about how different societies have even set that balance.
Essentially in a legal case, there's two main errors you can make, that someone can be guilty
and you can let them go free or they can be innocent, you can convict them.
And William Blackstone, who was a legal scholar in the 1760s, came up with what's known as Blackstone's
ratio. He said it's better for 10 guilty people to go free than one innocent to be convicted.
and Benjamin Franklin actually
and sort of was even more cautioned.
He said it's better for 100 guilty people to go free
than for warn us and to be convicted,
seeing that that as the kind of balance.
Other cultures, particularly some communist regimes in the 20th century, put it the other way: it was better for 10 innocents to be in prison than for one guilty person to go free. So there's this kind of trade-off in which error they're seeing as the worst one.
And actually, some analysis has looked at US legal cases. Obviously they don't try and target these error rates, but you can sort of infer how people are valuing this, and a lot of them seem to land between that Blackstone and Franklin ratio of error.
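As a hypothetical illustration (my own framing, not from the book), these ratios can be read as decision thresholds: if a wrongful conviction is r times worse than a wrongful acquittal, convicting minimises expected harm only when the probability of guilt p satisfies r(1 - p) < p, that is, p > r/(r + 1).

```python
# Hypothetical sketch: translating a Blackstone-style ratio into a
# conviction threshold. If a wrongful conviction is r times worse than
# a wrongful acquittal, conviction minimises expected harm only when
# the probability of guilt exceeds r / (r + 1).

def conviction_threshold(r):
    """Probability of guilt above which conviction minimises expected harm."""
    return r / (r + 1)

print(round(conviction_threshold(10), 3))   # Blackstone's 10:1 -> 0.909
print(round(conviction_threshold(100), 3))  # Franklin's 100:1 -> 0.99
```

On this reading, cases landing between the two ratios would correspond to implied conviction thresholds somewhere between roughly 91% and 99% certainty.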
But then, of course, there's the question of the different evidence and how it makes its way into the courtroom, particularly some of the historical examples involving things like early probability.
And again, one of the challenges here
is what one scholar I talked to
called the weak evidence problem.
And I think a lot of how we navigate life
is around probabilities that are quite likely.
You know, a lot of probability theory
was originally developed around like dice games and things you know you can study and you can quantify
but in legal cases we often have this weak evidence problem where you know someone ends up
in some extremely bad looking situation from a guilt point of view and you're like well it's
extremely unlikely this is just a coincidence but then if you think about it you like well this person
might just be a normal everyday person you know well it's extremely unlikely too that they're guilty
so you have these two extremely unlikely events and a lot of statistics just isn't equipped to handle that and so
there's this notion it's called the prosecutor's fallacy where people say, well, this is the
probability that that would all be a coincidence and therefore that's the probability they're
innocent.
But of course, you've got to weigh it against the fact that it's extremely unlikely they're guilty
as well.
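The weighing described here is Bayes' theorem in action. A minimal sketch, with all numbers invented purely for illustration:

```python
# Hedged sketch of the "weak evidence problem": two unlikely explanations
# must be weighed against each other, not judged in isolation.
# All numbers below are invented for illustration.

def posterior_guilt(prior_guilt, p_evidence_if_guilty, p_evidence_if_innocent):
    """P(guilty | evidence) via Bayes' theorem."""
    joint_guilty = prior_guilt * p_evidence_if_guilty
    joint_innocent = (1 - prior_guilt) * p_evidence_if_innocent
    return joint_guilty / (joint_guilty + joint_innocent)

# Suppose only 1 in 10,000 people would commit this crime (guilt is
# extremely unlikely a priori), but the incriminating coincidence has
# only a 1-in-1,000 chance of arising for an innocent person.
p = posterior_guilt(prior_guilt=1e-4, p_evidence_if_guilty=1.0,
                    p_evidence_if_innocent=1e-3)
print(round(p, 3))  # about 0.091: far from proof, despite the "unlikely" coincidence
```

The prosecutor's fallacy would report the 1-in-1,000 coincidence as the probability of innocence; weighing it against the equally unlikely prior of guilt gives a very different answer.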
And we see this even in other areas, like the work we do dealing with emerging health threats. Pre-COVID, there were some studies, and actually we did a TV show, where you'd sort of say, oh, a pandemic could just be around the corner. Or there was another study, by the World Bank I think, that put it at 1%, and you're like, well, what is that? Was that a good prediction? Was it a bad one? It's these very unlikely events.
I think in legal cases, again, with that weak evidence problem, it's less about definitively working out, with high probability, which of those is true. It's more that we have to converge on the best explanation for what we've seen, given those two possibilities.
And in reality, we may never have certainty about where we are.
And I think it's something that kind of struck me, both thinking about that and also thinking about the people who have to plan for emergencies and very unlikely events: a lot of the way we traditionally think about probability can very quickly lead us astray. We're so used to the idea that, well, I can just be 99% sure that this happened, but actually it's much more about that balancing act that we have to perform. Let's take a quick break here. We'll be back before you know it.
There are two golden rules that any man should live by. Rule one: never mess with a country girl. You play stupid games, you get stupid prizes. And rule two: never mess with her friends either. We always say: trust your girlfriends.
I'm Anna Sinfield, and in this new season of the girlfriends, oh my God, this is the same man.
A group of women discover they've all dated the same prolific con artist. I felt like I got hit by a truck.
I thought, how could this happen to me? The cops didn't seem to care. So they take matters into their own hands. I said, oh hell no, I vowed I will be his last
target. He's going to get what he deserves. Listen to the girlfriends. Trust me, babe. On the Iheart
radio app, Apple Podcasts, or wherever you get your podcasts. Ever feel like you're being chased by the marriage police? Welcome to Boys and Girls, the podcast where dating isn't dating,
arranged marriage is basically a reality show
except the contestants are strangers
and your entire family is judging
You're sipping coffee with one, maybe,
grabbing dinner with another
and praying your karmic ken or barbie appears
before your shelf life runs out
trust me I've been through this ancient
and unshakable tradition
I jumped in hoping to find love the right way
and instead I found chaos, cringe and comedy
And now I'm looking for healing.
Boys and Girls dives into every twist and turn of the arranged marriage carousel.
The meat awkward, the near misses, the heartbreak, and let's not forget all the jokes.
Listen to boys and girls on the I-heart radio app, Apple Podcasts, or wherever you get your podcasts.
This season on Dear Chelsea, with me, Chelsea Handler, we have some fantastic guests like Emilia Clarke.
When, like, young people come up to me and they want to be an actor or whatever.
My first thing is always, can you think of anything else that you can do?
Rather be disappointed in.
Do that.
Dennis Leary.
I wake up and I'm hitting him in the head with a water bomb.
And Bruce Jenner is on the aisle in a karate stance.
Like he's about to attack me.
Like making karate noises.
And his entire, the Kardashians family over there, everybody's going.
And the air marshal is trying to grab my arms and screaming.
And I immediately know that.
I've better sleepwalk.
David Oyelowo.
I love this podcast, whether it's therapy or relationships or religion or sex or addiction or
you just go straight for the guts.
Guy Branham.
So anyway, Nicole Kidman broke up with Keith Urban. Being half of a country couple was always a hat she was going to wear, not, like, a life she was going to lead.
Oh, interesting.
I like that.
Did you practice that on your way over?
Gaten Matarazzo from Stranger Things.
Zana M'Jou, Camilla Morone, Carrie Kenny Silver, and more.
Listen to these episodes of Dear Chelsea on the IHeart Radio app, Apple Podcasts, or wherever you get your podcasts.
Welcome back, everyone.
I'm here chatting with Dr. Adam Kucharski about his book, Proof.
Let's get into some more questions.
Thinking about this in the context of COVID, when the situation was evolving very rapidly, the general public and, of course, government officials wanted answers and wanted decisions. You know, what is the best thing to do? Wear masks, not wear masks, sanitize groceries: all these things that were just constant questions, with people wanting hard answers, like just yes, period, end of.
As someone who was on the informational front lines of the COVID pandemic, what was your relationship with uncertainty like at that time?
Did you struggle with feeling like we don't have enough information yet?
You know, how did that feel, I guess, in your position?
Yeah, I think, I mean, those kinds of situations are enormously challenging, both in terms of evidence generation and communication, and then obviously the political decision-making that comes off the back of it. In many of those situations, I found it useful to convert uncertainty around the exact estimate into just, broadly, what situation we're in. So, for example, when the Delta variant emerged and we did a lot of the work identifying the early advantage it had, it really wasn't about whether it was 30%, 40%, or 60%. Essentially, all of those were a big problem. It's like arguing, is this a disaster, or just a catastrophe, or just very, very bad? From a policy point of view, you don't necessarily need to communicate the exact number. You can just say, we're very confident that it's going to take off.
A couple of things jumped out for me.
I think one was the need to triangulate across data.
sources. I think sometimes people have this idea of science that you go out and you run a study and
that study gives you the answer or it doesn't give you the answer. And there were quite a few early skeptics saying, well, actually, this study wasn't definitive, and this study wasn't definitive. But once you start to look at all of those, the evacuation flights, the testing data, the contact tracing, the big testing on some of the cruise ships, the clinical data, all of those signals start to drag you in the same direction. And again, each bit of that evidence on its own might have problems, but you can start to bring them together and draw them into a conclusion. I think we saw that across the
pandemic, that if you view it very much as like, I'm going to get the perfect study and it's
going to give me the answer, you'll struggle. But often you can actually find a lot of complementary data sources, as we did for the variants and a lot of that early severity work, that are all pointing in the same direction. It's harder, obviously, when they're pointing in different directions, as we saw with some of the interventions, where it was less clear, because in different countries and different economies, certain things affected behaviour in different ways. But I think the other challenge that kind of jumped out: a lot of the health issues we deal with in the US and UK in the modern era are non-contagious. So they're very much
kind of individual, you know, things like cancer, things like heart disease. So it's very much
individual focus. So you have someone who's ill, do you treat them, do you not treat them? If you
don't treat them, that's someone who's one person who's going to get worse. But contagious
health threats have this dependence where, you know, a problem can get worse and that problem
can then accelerate in very different ways. I think that was something that was quite a challenge
to communicate. Because I think a lot of people had this notion of you've got normal life
and then you could do something else that's not normal life. And we'd obviously just prefer it
to be normal. But I think as we saw globally, you didn't get that status quo. I mean, that was,
that was gone. And no country had, they had varying levels of normality, but no country had like
just, you know, pretend absolutely nothing happened. You either had, in varying degrees,
depending on the structure of society and advantages they had in kind of like terms of
demography and healthcare and other stuff, big changes in behaviour or borders, whatever, or you
saw a huge amount of death. And I think that's something that's, that can be, from an evidence point
of view, much more challenging. Because I think just in life, we're much more used to those kind
of linear problems where, like with cancer or something, these are tragic events that happen sort of distributed across the population, rather than something where the worse it gets, the faster that worsening accelerates.
You mentioned how we have these different data sources, these different studies that are all
leading us in a certain direction.
And we have, by this point in time, developed ways to measure both the quantity and quality
of evidence.
I really enjoyed your discussion of randomized controlled trials, because this quote-unquote gold standard of medical studies might not always be the gold standard. And I was hoping you could tell me a little bit about the times when the true gold standard might not be, for instance, an RCT, but might be something else entirely, or when it might be unethical to do a randomized controlled trial in that situation. I think we've seen quite a lot of examples where treating it as
that kind of cookie-cutter, this-is-the-only-method-we-can-use approach can lead into problems. I mean, smoking and cancer is a very well-known one: we couldn't just have inaction because you can't get that level of perfection. Actually, even in the first randomized controlled trial in modern medicine, which was in 1947, for streptomycin, a trial for TB, Austin Bradford Hill, who led it, made the point that streptomycin had some very promising-looking lab data and early signals, and he suggested it would have been unethical to withhold it from patients if it was available.
But in 1947 there were currency controls. The UK, in its post-war state, couldn't get enough dollars to buy streptomycin; there wasn't enough to go around. So in that situation, they said it would be ethical to randomise, because there isn't enough of it anyway, so you might as well randomise and just learn something along the way.
And I think we've seen that in other situations.
I mean, in other examples, where things are very difficult to randomise, you can think about natural experiments. A well-known one is the Vietnam draft, where people were essentially randomly assigned to go to war based on their birthdays. A lot of economists have done
Nobel Prize winning work using that to understand the effects of war on subsequent life
outcomes because it's not something where you can fully design that experiment, but you can
then make use of what you have available. So I think a lot of it just comes down to this issue of wanting to understand cause and effect. And the benefit of randomisation is that all the other things that would influence whether someone gets a vaccine and whether they get the disease will, on average, cancel out as effects, because you're randomising on the vaccine. So it gives you that quite neat benefit. But of course, you've also got the challenge that you might run a trial in one population and get results that don't generalize to somewhere else. You've also got the time issue. So for diseases that evolve, you know,
you might run a trial now against flu or COVID or something. A year later, that's going to be a
different variant. To what extent can you carry over those conclusions? I think we see a lot of
examples in the literature where, for instance, someone might run a trial in one population
for one disease, for flu, for example, and then see a very different result when people look
at population patterns elsewhere because it's a different immune structure, it's a different
strain, it's a different time period. And yeah, I think we can't just say, well, that study
from a few years ago is the gold standard, we're only going to use that one. We have to think
about how these things move along. That being said, though, I think in COVID there were missed opportunities to gather much stronger data. It's very hard to justify running those kinds of studies as a threat increases; when an epidemic is going up, taking your time to try and randomize is hard to defend, and essentially countries have to act on the threat as the evidence suggests. But particularly as countries lifted measures, that was often done in quite an ad hoc way, and we could have done much more kind of staging.
In the UK, there were some early studies, for example, of can we use rapid tests?
So people test themselves every day rather than quarantining for like a week or two.
And then in practice, a lot of people just didn't bother.
But apart from that, I think there's a lot of these debates we're still having.
And we probably could have got better answers for that with some higher quality studies.
So not necessarily even an RCT, just making use of what we had with more observations.
One thing that struck me during the COVID pandemic, especially the early months, was this desire from the public to have the answers, and I feel like there's a lot of variation in how willing someone is to say, I don't know. And I'm wondering your feelings on this. Do you feel that scientists in particular have a difficult time saying that they don't know the answer to something? Like, do we need to embrace uncertainty more as scientists? Or do you feel like we are embracing it but just not communicating it well? Yeah, I think that's a really good question. It's kind of, I guess, about how personality and politics and all these things play into how willing we are to say, I don't know.
I think, I mean, there's been good reviews of evidence showing that the overstated certainty
just undermines trust and confidence, whether it's kind of vaccines or it's other things.
So saying, you know, this is 100% safe.
There's absolutely no risk.
And if there's even a tiny risk, you then kind of undermine that.
Yeah, one of the challenges with that overstated certainty, particularly once you make a public statement, is that it's very hard to walk it back. And we saw that with the airborne question: some health organizations even said it's not airborne, and in fact it was very difficult to then walk that back. But it's a fine line, because you don't just want to say, we have no idea. You want to try and communicate the weight of the evidence. I think some countries did that better than others; Denmark and Singapore spring to mind on their reopening, where they said, this is the data we're looking at to do this.
That might change.
And this is kind of how we have to work things through.
But I think one of the difficulties, because an emergency like that goes on for so long, is that you have some people who very loudly said something is, you know, a hundred times less severe than it is. And then they're kind of nailed on to having to keep promoting that. And there was one of the government advisory
committees I sat on, you know, so a lot of the kind of early alpha variant, early delta variant,
a lot of this, early severity came out of this group. And there was a phrase that became used
quite a lot, which was: tell me why I'm wrong. If you have a discussion where you want to get criticism when you present stuff, and especially if the people presenting are more senior, it's very hard for others to come in and say, oh, actually, I spotted a problem, especially if there are power dynamics or seniority and other things at play. So there was a lot of that thing where people present work and say, right, tell me why I'm wrong, tell me what I'm missing. And I think that's quite a healthy attitude in that kind of environment: to be much more, you know, looking for weaknesses and able to lay them out. And I remember actually, I think it was the gamma variant, which was sort of emerging in Latin America, and I gave a media interview. And when they wrote it up, the opening was basically, you know, Dr. Kucharski doesn't really know. But in that situation,
we didn't. And it is hard to do because I think, you know, especially people asking you questions
around your area of expertise, I think in terms of how to balance that, not just saying, I don't know,
but saying, well, we do know this and we can make some judgment. And there was this wonderful study, I think it was 1951, by a CIA analyst, about the words we use when we're unsure, words about judgment. And he basically realized that people use probable and possible to mean all sorts of things, and everyone had kind of different notions of them. People will go out of their way to avoid committing to a firm judgment, and the risk is you get uncertainty that's very hazy, like, oh, it's a definite possibility. And actually, in some cases,
if you've got an emerging threat and you've got experts, you do actually want them to put a number on it. Even if there's uncertainty, you want them to say,
I am 60% sure that this is the case. And there's been a lot of nice work, you know, even around
things like super forecasting where people make those predictions and you can go back and then look.
Because, you know, if people are well calibrated in their uncertainty, you know, if you say you're
50% sure about a list of things, about 50% of the time those things should happen. So about half the
things on that list should occur. So there are these situations where I think we can get better just
about thinking about our own uncertainty. And one of the things that I've tried to do with a lot of emerging threats is just writing down what you think is going to happen. Because the human mind is great at rationalising after the fact: oh yeah, maybe I did think that. And so I did quite a lot of stating things publicly, like, I actually think the vaccine is going to be pretty good, or, I think this. And this is where social media, when it was maybe slightly less polarized, was quite helpful, because you could just put a post out. And I was always very careful: I didn't delete any of my tweets during COVID, because I wanted that record. And there were some I got wrong. I was in Singapore in February 2020, and their policy was: don't wear a mask unless you have symptoms. And I think I tweeted something like, yeah, that seems like a sensible policy given the evidence. And now, with some of the later studies, we'd probably not look back on that as being the best post.
So yes, as well as avoiding overstated certainty, I think it's also about holding ourselves to account, even if it's just privately, for how confident we were and what played out.
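That idea of calibration, that if you say you're 50% sure about a list of things then about half of them should happen, can be checked mechanically. A minimal sketch with made-up forecasts (the function name and data are my own, purely illustrative):

```python
# Minimal calibration check: group forecasts by stated probability and
# compare each group's stated confidence with the observed frequency
# of the events. All forecasts below are invented for illustration.
from collections import defaultdict

def calibration_table(forecasts):
    """forecasts: list of (stated_probability, event_happened) pairs."""
    buckets = defaultdict(list)
    for prob, happened in forecasts:
        buckets[prob].append(happened)
    # Observed frequency of "true" outcomes within each probability bucket.
    return {prob: sum(outcomes) / len(outcomes)
            for prob, outcomes in sorted(buckets.items())}

forecasts = [(0.5, True), (0.5, False), (0.5, True), (0.5, False),
             (0.9, True), (0.9, True), (0.9, True), (0.9, False)]
print(calibration_table(forecasts))  # {0.5: 0.5, 0.9: 0.75}
```

A well-calibrated forecaster's table would show observed frequencies close to the stated probabilities; here the 90% forecasts came true only 75% of the time, hinting at overconfidence (though with far too few forecasts to say anything firm).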
I want to close out by asking you about the subtitle of your book, which is the art and science of
certainty. And I want to know about the art part of this. What is the art aspect?
So I think, for me, it was that the more I dug into this, the more I saw these other elements, beyond pure logic and pure observation, coming in. I mean, even if you look at what was essentially a bit of a mathematical civil war in the late 19th century, a lot of these ancient Greek theorems, things about the properties of triangles, started to break down, because people started to draw shapes on spheres and other structures and come up with cases where these supposedly proven theorems didn't hold. And I think one of the reasons
that was really controversial was
there was this idea that there's a universal
truth out there about the world
and actually in this situation
it kind of depended on what assumptions
humans were making and what we were willing
to kind of define
and even in this
supposedly pure subjects
there's still these debates around
well it kind of depends on which
one you want to pick and that will change
the answer. I think even in
science there are a lot of these situations where we can accumulate the evidence, but then you have disagreement about where you set a threshold. I mean, this kind of 5% cutoff has become very popular, this sort of p-value: the chance you'd get a result that extreme if there was nothing going on, if your null hypothesis were true. But that was kind of arbitrary. It was partly picked just for convenience: 100 years ago, the calculations were just a bit easier if you picked a value, and Fisher, who did a lot of this work, found it easier to pick a value around 0.05. And others who were more pragmatic, working in business on something, might think, well, actually, the evidence is a bit weaker, but it's still useful to act on. So there's this kind of human balancing act. And we saw, again, in things like
legal cases where how much you value different types of errors depends a lot on the individuals.
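The arbitrariness of that 0.05 cutoff is easy to demonstrate. As an illustration of my own (not from the book), here is an exact two-sided binomial p-value computed from scratch: 60 heads in 100 tosses of a supposedly fair coin lands just above the conventional cutoff, while 61 heads lands below it.

```python
# Exact two-sided binomial p-value, computed from first principles.
# Shows how nearly identical results fall on opposite sides of the
# conventional 0.05 threshold. Illustration only.
from math import comb

def two_sided_binomial_p(k, n, p=0.5):
    """Probability of an outcome at least as extreme as k under Binomial(n, p)."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    # Sum the probability of every outcome no more likely than the observed one.
    return sum(q for q in pmf if q <= pmf[k])

print(round(two_sided_binomial_p(60, 100), 3))  # 0.057: "not significant"
print(round(two_sided_binomial_p(61, 100), 3))  # 0.035: "significant"
```

One extra head flips the verdict, which is the sense in which the threshold is a convention rather than a fact about the world.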
I mean, one of the examples that I find fascinating in the book was Einstein, when he moved
to the US, got very angry about peer review because he sent something to a journal. And it came back,
like, oh, we've got another opinion on it. And he was like, whoa, whoa, whoa, why haven't you just accepted my work? And actually Max Planck, who published some of Einstein's amazing early papers, made the point (and this is me paraphrasing) that he would rather publish a few things that are a bit nonsense than miss a really important idea. So for him, the threshold was: I want to set it low, admittedly mainly amongst physicists he knew, because I don't want to set it too high and miss a good idea.
And I think we all have this. That's where the art, I think, creeps in: that kind of subjectivity, in not just the evidence. I think, for me, the real difference with something like proof is it's not just
generating data. It's how that data interacts with the world and the decisions we make. And I think
that's where things get really interesting. It's like, where do we actually set the bar for evidence
and then both to convince ourselves, but then go out and convince others too.
Well, Professor Kucharski, thank you so much for joining me today. This was such an enlightening conversation, and I really loved your book, Proof, so I appreciate you coming on the show. Thanks. Great to talk. A big thank you again to Dr. Adam Kucharski for taking the time to chat with me.
If you enjoyed today's episode and would like to learn more, check out our website, thispodcastwillkillyou.com, where I'll post a link to where you can find Proof: The Art and Science of Certainty, as well as a link to Dr. Kucharski's website, where you can also find his other book, The Rules of Contagion: Why Things Spread and Why They Stop. And don't forget, you can also check out our website for all sorts of other cool
things, including, but not limited to transcripts, quarantini recipes, show notes and references for
all of our episodes, links to merch, our bookshop.org affiliate account, our Goodreads list, a
first-hand account form, and music by Bloodmobile. Speaking of which, thank you to Bloodmobile for
providing the music for this episode and all of our episodes. Thank you to Leanna Squalachi
and Tom Brigh Fogle for our audio mixing.
And thanks to you listeners for listening.
I hope you liked this episode and are loving still being part of the TPWKY Book Club.
A special thank you, as always, to our fantastic patrons.
We appreciate your support so very much.
Well, until next time, keep washing those hands.
I'm Amanda Knox, and in the new podcast, Doubt, the case of Lucy Letby, we unpack the story of an unimaginable tragedy that gripped the UK in 2023.
But what if we didn't get the whole story?
Evidence has been made to fit.
The moment you look at the whole picture, the case collapsed.
What if the truth was disguised by a story we chose to believe?
Oh my God, I think she might be innocent.
Listen to Doubt, the case of Lucy Lettby on the IHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
