The Chris Voss Show - The Chris Voss Show Podcast – Everything Is Predictable: How Bayesian Statistics Explain Our World by Tom Chivers
Episode Date: May 13, 2024. Everything Is Predictable: How Bayesian Statistics Explain Our World by Tom Chivers https://amzn.to/3wxZAKu A captivating and user-friendly tour of Bayes's theorem and its global impact on modern life from the acclaimed science writer and author of The Rationalist's Guide to the Galaxy. At its simplest, Bayes's theorem describes the probability of an event, based on prior knowledge of conditions that might be related to the event. But in Everything Is Predictable, Tom Chivers lays out how it affects every aspect of our lives. He explains why highly accurate screening tests can lead to false positives and how a failure to account for it in court has put innocent people in jail. A cornerstone of rational thought, many argue that Bayes's theorem is a description of almost everything. But who was the man who lent his name to this theorem? How did an 18th-century Presbyterian minister and amateur mathematician uncover a theorem that would affect fields as diverse as medicine, law, and artificial intelligence? Fusing biography, razor-sharp science writing, and intellectual history, Everything Is Predictable is an entertaining tour of Bayes's theorem and its impact on modern life, showing how a single compelling idea can have far-reaching consequences.
Transcript
You wanted the best. You've got the best podcast. The hottest podcast in the world.
The Chris Voss Show. The preeminent podcast with guests so smart you may experience serious brain bleed.
The CEOs, authors, thought leaders, visionaries, and motivators.
Get ready. Get ready. Strap yourself in. Keep your hands, arms, and legs inside the vehicle at all times.
Because you're about to go on a monster education roller coaster with your brain.
Now, here's your host, Chris Voss.
Hi, folks.
This is Voss here from thechrisvossshow.com.
The Chris Voss Show.
The Voss.
There you go, ladies and gentlemen.
Welcome to the show.
We certainly appreciate you guys being here.
As always, the Chris Voss Show is the family that loves you but doesn't judge you,
at least not as harshly as the rest of your family,
because most of them never liked you anyway,
and that's kind of how families go.
What's the old saying?
You can't pick your family.
The great thing about the Chris Voss Show is you can pick your family,
and we pretty much love you as a whole unless you're evil.
If you're there in the back, half the reason we do the show is to teach you how to be a better human being, so don't be evil. The show is kind of here for, I think, some of the evil people, so work on it, damn it. Now, if we can just get Putin to start watching the show. We have the most amazing guests on the show: the Pulitzer Prize winners, the CEOs, the billionaires, the famous authors that bring you all their data.
They know everything.
They've studied it.
They've spent 10,000, 100,000 hours studying what they're studying.
They bring you the stories to improve the quality of your life and everything else.
For further information on your family, friends, and relatives, go to goodreads.com,
Fortunes, Chris Voss, LinkedIn.com, Fortunes, Chris Voss, Chris Voss1, on the TikTokity and all those crazy places on the Internet.
Today, we have another amazing gentleman on the show.
His book just came out May 7th, 2024.
It's called Everything is Predictable.
How Bayesian Statistics Explain Our World.
Tom Chivers is on the show with us today.
I'm going to be asking if he can help me with that data with his book on dating.
He is an author and award-winning science writer at Semafor. His writing has appeared in The Times in London, The Guardian, New Scientist, Wired, CNN, and more. His books include Everything Is Predictable, The Rationalist's Guide to the Galaxy, and How to Read Numbers. We need to get more people doing that for math. Welcome to the show. How are you, sir?
I'm very well, thanks. Thanks for having me.
Thanks for coming. We really appreciate it. Give us your
dot coms. Where can people find you on
the interwebs? Tom Chivers.
I actually have got tomchivers.com, but I
probably should update that because I haven't for a while and it's out there.
But you can find my work at
semafor.com, where I write their daily newsletter,
and you can find me, obviously
it's not called Twitter anymore, is it?
Damn it. X.com.
Just keep calling it what everyone does. It really annoys me. Twitter.com slash tomchivers, I am there.
Trust me, they'll change the name back. Bank it. It's next week sometime.
So welcome to the show, Tom. Congratulations on the new book. Give us a 30,000-foot overview. What's inside?
Right, okay. So thanks for the congratulations. The book is about this very simple equation called Bayes' theorem, which was invented, or developed, or discovered, whatever word you want to use, by a clergyman, a sort of hobbyist mathematician from the 18th century, a guy called Thomas Bayes. And it is literally one line of equation. It is just multiplication and division, stuff my eight-year-old daughter could do.
But it explains, I would say, a decent chunk of everything.
It is the maths of prediction, basically.
It is the maths of how, when we get new information, we integrate that information with the information we already have and therefore predict the world. So it is absolutely crucial in medicine and science. It is the basis of all decision theory and of how decisions are made under uncertainty. I would say it can describe pretty effectively how the human brain works. It's the most important one-line equation that it is possible to know.
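The one-line equation Tom is describing is Bayes' theorem: the probability of a hypothesis given some evidence equals the likelihood of that evidence under the hypothesis, times the prior, divided by the overall probability of the evidence. A minimal sketch of that arithmetic in Python, with purely illustrative numbers rather than anything quoted in the episode:

```python
# Bayes' theorem: P(hypothesis | evidence) = P(evidence | hypothesis) * P(hypothesis) / P(evidence)
# Illustrative numbers only -- not figures from the episode.

def posterior(prior, likelihood, evidence_prob):
    """Update a prior belief with new evidence."""
    return likelihood * prior / evidence_prob

prior = 0.05                     # initial belief the hypothesis is true
p_evidence_given_true = 0.40     # how likely the evidence is if the hypothesis is true
p_evidence_given_false = 0.10    # how likely the evidence is if it is false

# Total probability of seeing the evidence at all.
p_evidence = p_evidence_given_true * prior + p_evidence_given_false * (1 - prior)

print(posterior(prior, p_evidence_given_true, p_evidence))  # ~0.174, up from 0.05
```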
So everything in this world kind of has a mathematical sort of segment.
It seems to be there's math in the universe.
Whoever created the universe is a giant nerd with a pocketbook.
Yes.
Can I use your book to help me win the lottery?
Or how is that different?
I think you can use the book to say that you're very unlikely to win the lottery.
But you can use the book to...
Sorry to pour cold water on that one.
Keep trying, keep trying, right?
But no, you can use the book, I think, to give you an interesting perspective on how decisions are made.
And, I hope, help you make better decisions, and make a better stab at grasping how the world works and understanding it. Because I think the fundamental insight of Bayes' theorem is that we make these predictions all the time. Like, I don't know, is the shop going to have beer when I get there to pick it up? And we update those predictions with new information when we get it, if I look on the website and it says it's in stock. What Bayes' theorem does is describe the maths of how we do that, or the idealized form of it. Obviously we can't really predict everything; we can't know all the information in the world. But we can use this as a sort of idealized form of it.
But more importantly, I think the insight for us as individuals is that we don't need to say this thing is true or not true,
or this thing will happen or won't happen.
We can say, I think it is this likely that this thing is true,
or this likely that something is going to happen.
And then we can update away from those best guesses, our subjective best guesses, with new information, and never have to doggedly defend "I believe this thing," you know. We can sway with new information as it comes in. And I think, from a sort of personal, news-you-can-use angle, that's the biggest insight of the book and of Bayes' theorem: we don't need to be dogmatic about things.
We can update with new information and make probabilistic judgments.
There you go.
So I'm an Oakland Raiders fan, technically Las Vegas Raiders now.
Can I use this to figure out when we're going to win the next Super Bowl?
You can.
Yes, you can.
You sort of can.
I mean, as in, right: how many teams are there in the NFL total? I'm not an expert on this stuff.
Yeah, there are different conferences, aren't there? Without any prior information, let's say there's 25 teams. I don't know, I just totally pulled that number out of my ass.
Right, yeah, something like that. Or in the English Premier League there's 20 soccer teams.
so if you didn't have any more information than that, your prior probability
of any given team winning
in a given year
is one in 20, right?
It's 5%.
Have I got that right?
Yes, 5%.
But then you get more information.
You get who won it last year
and it's Man City.
I don't know who won the Super Bowl.
Was it Kansas City Chiefs?
Sorry, did I get confused?
Yeah, you see.
Yeah.
The only reason I know that
is because I know who Taylor Swift is going out with, which is just embarrassing.
That's an embarrassing reason to know that. That's not very manly, my friend.
No, I'm a real man, definitely watch football all the time. No, I actually do watch English football, I'm sorry, so it's a different thing. They throw the ball over there, don't they?
Yeah, yeah. You get told off for it, but you know, people do.
But yeah, so that's your prior probability, that is your base rate: a one in 20 chance, your prior probability of how likely any given team is. But then you update it: who won it last year? And you think, that should move me towards thinking that team is more likely than the team that just got promoted to the league, or the team that came dead last in the conference last year, whatever. So that's my prior probability, and then you can update it more. Has someone brought in new players? Have any of their players got older, got past their best? Have they sold, you know, has one of the other teams come in and stolen some of their best players? Has someone got an injury? And you can update away from your initial prior with this new information. So yes, it can. I would imagine that the team that won the Super Bowl last year is more likely than a randomly selected team to win the Super Bowl this year. I don't know how often teams defend it, but I would say that the Kansas City Chiefs are more likely to win it than, I don't know, the Miami Dolphins.
They're still in Miami? They're still in Miami, but most high school teams can beat them. Yeah, I mean, that's my go-to, in general.
Yeah, you're right.
You're right.
The teams that have built themselves to a perfection of great players
and that have mastered the game, they do tend to hold on.
You know, like the Patriots were on an extraordinary run. Kansas City Chiefs, I think they won the year before, or the year before that. I think they were in the playoffs two years prior, but they won last year. So you tend to have a team that goes through, you know, a rebuild and then eventually peaks, and everybody's firing on all cylinders. And then there is a bit of luck to it too, of course,
because you're two of the top-rated teams. But the mindset, you know, they've studied the mindset of when you go into the Super Bowl, and the people that have been there before aren't as jacked up and nervous and have the jitters so much as the new person that shows up, and that's a factor. So the people that show up in the Super Bowl for the first time, it's like going on stage for the first time. Even though you've been in arenas, you're at the big show, and you've got millions of people watching that don't normally watch normal games. You realize you're at the pinnacle of your career. You're like, I've spent my whole life working for this, and it's very easy to, you can use the analogy, fumble the ball, as it were.
Yeah.
Because, as most of us do, when you work really hard and you reach the pinnacle of what you've worked all your life for, sometimes you have a little trouble staying at that level. Yeah. So I want to get into your other book too. We will circle back to the AI
stuff, because we love talking about AI. So those who are listening, we're going to talk a little bit about that later on the show. Tell us a little bit about yourself, because people like to get to know the author. What was your upbringing? What made you want to become a journalist and inspired you to write, et cetera? When did you kind of know you were becoming a writer?
I don't know. I became a journalist after sort of not knowing what else to do. I think I left university, I did philosophy at university, which is just your classic "I don't know what I'm doing with my life." It was really interesting, but I still didn't know what else I was doing, so I carried on doing philosophy at university when I finished it, and then eventually realized I just had to go and get a proper job. And by that stage I'd realized I was good, you know, the best thing about being at university was writing the essays.
I was good at that.
I would make them funny, which is not a common theme of undergraduate philosophy essays, I don't think.
And I realized I was enjoying it.
And so then, when I came out of what turned out to be an abortive attempt at doing a Ph.D., I managed to get a couple of weeks' work experience at a British national newspaper called the Telegraph, and managed, short answer, not to screw that up, not to balls it up. After a couple of weeks they started giving me a couple of shifts, you know. And I've always been interested in science, so once I had my foot in the door I just tried to get to write the sciencey stories, learned more about it. Didn't really understand half the stuff I was writing, but 15... oh, Jesus, what year is it? 17 years later, I've been doing it long enough that I've picked up the sort of swing of it.
But yeah, so it was a combination of: I vaguely like writing, and I have a big enough ego that I want to tell people what I think.
Narcissism is important, I find. A powerful driver, right?
Yeah, the dark triad traits really help. Very helpful, very helpful.
Yeah, I think I could use a bit more disagreeability, actually, on the dark triad. I tell you what, have you heard of an author called Terry Pratchett?
I know. Oh, anyway.
Famous British author. I met him and interviewed him before he died in 2013, and he'd been a journalist. He was dying at the time, he had a form of dementia which was going to kill him eventually. And I remember changing the subject away from his brain, the fact that he was going to die, basically, because I couldn't stand it anymore, you know. And he just said, you're too nice, you're too nice to be a journalist. But I remember that.
I've still got that in my Twitter bio.
Too nice to be a journalist, which I thought was...
Too nice to be a journalist.
I like that.
You know, we've had people on the show that when they made the major news, the news would go for the most salacious part.
So I'll give you an example.
Peter Strzok, when he came on the show, he was an FBI agent who was heading the Clinton and Trump investigation.
And one, he's a guy of real integrity, but he made one mistake that a lot of guys make in their middle age: they'll have an affair.
And so he was texting someone, and it was someone at work. But you know, all the news could talk about was not how our democracy was under assault, and the data he had, and what he was finding, that our democracy was under attack. Do you need me to explain a democracy to you? Because you guys are over there in England.
Oh, excuse me. How dare you. We've had a democracy hundreds of years longer than you.
That's true. I'm just giving you, you know, it's like the whole monarchy thing. And I mean, I'm just proud of you guys. You guys have kept someone at 10 Downing Street for like at least three months, so.
Yeah, it must be getting on for a year now, but he's going to get booted out pretty soon. It's going to be brutal, it's going to be absolute.
For a while there, though, you guys were just, I don't know. It was a temporary job for a while.
Yeah.
At least you got rid of your version of Donald Trump.
That was good.
We had a variety of things every six weeks for a while.
It was complicated.
It was.
And I was like, can't they get a guy who can comb his hair?
Yeah. He was, I mean, he was good value. He was box office. But he was no Churchill, let's put it that way.
Yes, yeah, exactly. Wanted to be. Anyway, I shouldn't get political.
But yes, yeah, that was an interesting time.
I lost my segue, though, whatever I was going into. I completely lost my... sorry.
No, we were having fun. That's what we do on the show.
If I remember what I was getting back to, it'll usually come back to me. You know, with this Bayesian gentleman, let's get back to your book.
Tell us a little bit more about this guy. Who is he? When did he live? Why should we trust him?
So we don't actually know when he was born.
We know he was born 1700 or 1701. Because he was, actually, what in England we call a nonconformist preacher,
as in he was not a follower of the Church of England.
But since the entirety of the United States was essentially founded
on nonconformist preachers who ran away from England
so they could go and set up their own things over there,
you'd probably just call them preachers.
But yeah, he was a vicar or a clergyman, and in his spare time he was this massive nerd. I mean, he was constantly writing. I think, you know, at that time, the way his biographer told me, nowadays rich people, if they have leisure, they might get involved in a sports team or something like that, you know, buy a sports team. At that time, if you were a rich guy, which he was, he came from family money, and you have an undemanding job, like being a clergyman in a small town outside London somewhere, then you get into science as a hobby.
And he was part of the whole, you know, this is pre-Victorian times.
But in those days there were a lot of people just sending letters to each other arguing about maths problems or, you know, science. A hundred years later Darwin would be doing the same thing, writing letters to his hundreds of friends all across Europe arguing about, you know, the shape of pigeons' feet or whatever. And so it's that sort of thing. And he got involved with this guy called Lord Stanhope, who was himself a massive nerd, who just went around sort of supporting interesting mathematicians he found. And, you know, he did a couple of things. He wrote a big piece defending God against the problem of evil, which philosophy graduates might remember: how come there's evil in the world if God is perfect and wants us all to be happy, that sort of stuff. He wrote a big thing defending Newton from an attack by Bishop Berkeley. But the thing he's remembered for is one paper, one paper which was published after he died,
called, what was it?
A Something on the Doctrine of Chances,
a Treatise on the Doctrine of Chances.
I should remember the name of it.
That's not quite right.
But anyway, and it was about how do we,
so I'm going to get a bit technical here and stop me if I'm being boring,
but I think it's, I think hopefully it's not, right?
Normally when we do probability at school and things like that,
we might say, how likely am I to see three sixes on three dice?
Something like that.
Or how likely am I to draw a royal flush from a deck of cards?
That's something we...
We use this for craps.
Yeah, yeah, yeah, exactly.
We work out how likely we are to see some result, right?
Given the hypothesis that this deck of cards is complete
or that the dice are fair or whatever, you know? So how likely are we to see an event? How likely are we to see data, given this hypothesis? That's called sampling probability. But if we're doing statistics in the world to find stuff out, what we want to know is, you know, like I want to do a study into, I don't know, a COVID vaccine, right? And I see some results: say, in the placebo arm ten people get COVID, and in the real vaccine arm only one person does. I can tell you how likely I would be to see that result given the hypothesis that the vaccine doesn't work.
There's a chance that it would be a coincidence in that situation.
But what we can't do with that sort of probability, we can't say how likely it is that the vaccine works.
And that's actually what we care about, right?
When statisticians do it, we don't really want to work out how likely we are to see three sixes on three dice.
That's easy and trivial.
It's one in 216, and you can work it out, and yeah, it's fine. But what you want to say is: how likely, given this new information that I have, is my hypothesis to be true? So what Bayes did, and people had been arguing about this stuff for a couple of hundred years by the time he came along, what he did was describe how we do that. What he realized was that you need to have what we call a prior probability.
So, like when we were talking earlier on about the probability that the Kansas City Chiefs would win, you know, that one in 20 chance, that's our prior probability.
And we need to have that before we can then say: now, given the new information that they've signed this player, how likely are they now?
So you need to have a best guess before you go into it.
That was his big insight.
The reason that was controversial and remains controversial now,
300 years, what year are we in again?
Oh God, I can't remember.
Yeah, 2024.
So we're pushing 300 years on now. And it's still really controversial because that is, it's subjective.
I say my best guess of how likely
that my COVID vaccine is to work, whatever.
It's my best guess.
There's not some fact out in the world.
It is just a subjective guess.
And that, some people really hate that.
Really, really hate it.
It's sort of saying all statistics are just squishy, best-guess, subjective. It's not an objective fact about the universe. But without those priors, you cannot use statistics to say how likely something is.
You can only say how likely we are to see these statistics given a hypothesis, for example, that there is no effect.
Is that comprehensible?
Yeah.
Yeah.
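To make the distinction concrete: the sampling probability ("how likely is this data if the hypothesis is true?") and the Bayesian posterior ("how likely is the hypothesis, given this data?") are different numbers. A small sketch, using the one-in-216 dice figure from the episode and made-up likelihoods for the vaccine example:

```python
# Sampling probability: chance of three sixes on three fair dice.
p_three_sixes = (1 / 6) ** 3           # = 1/216, as mentioned in the episode

# Bayesian posterior: how likely is "the vaccine works", given the trial data?
# These three numbers are illustrative assumptions, not real trial figures.
prior_works = 0.5                        # prior belief that the vaccine works
p_data_if_works = 0.30                   # chance of the observed 10-vs-1 split if it works
p_data_if_not = 0.01                     # chance of seeing that split by fluke if it doesn't

posterior_works = (p_data_if_works * prior_works) / (
    p_data_if_works * prior_works + p_data_if_not * (1 - prior_works)
)

print(p_three_sixes)     # ~0.0046  (probability of data, given a hypothesis)
print(posterior_works)   # ~0.97    (probability of a hypothesis, given data)
```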
I mean, the probability... you know, one of the favorite sayings I always have is: character is destiny, history is destiny. One of my famous quotes of mine on the show.
That's really narcissistic.
No, it is, it's fine. You need narcissism to get anywhere, we just said.
That's true, that's true. And one of my sayings that I always say is: the one thing man can learn from his history is that man never learns from his history.
That's why we go round and round.
And, you know, history, character is destiny.
I mean, I learned this in dating.
You want to find out what people's history is in, you know, people follow, especially
female nature follows emotional patterns.
Yeah. And it's interesting how even though some relationships can be toxic, some people have learned that toxic blueprint from their parents.
And so they go from relationship to relationship and pick toxic scenarios or create them.
And they feel comfortable in those toxic things where some of us might, you know, look at those things and be like, I really don't like having the police called every, every night.
But for you, that's a, that's a healthy relationship.
So you go girl.
But you know, the one thing I've learned is history is destiny.
So if you, if you, you know, I once had a friend who he dated a woman who she was his fifth or he was her fifth marriage.
Jesus.
And all four marriages, she had peppered each man that he would cheat to the point that he finally cheated.
And it was like a self-fulfilling prophecy every time.
And so she was doing that to him as well.
And I was like, didn't you notice when you dated her that this was a pattern in her relationships?
And so character is destiny, as they say.
History is destiny. And, you know, even credit reports know this, right? A credit report knows that if you're a person who doesn't pay your bills early on and you have bad credit, you're likely not going to change throughout your life; you're going to have bankruptcies. You know, I used to pull credit reports for my mortgage company for 20 years, and so we probably pulled thousands, tens of thousands of mortgages and credit reports, and it was true. You would see people with two or three bankruptcies they'd filed across a lifetime if they got started early. It was just really true. It was weird, because I'm like, people aren't robots, and it's kind of sad to treat them like you're only as good as your credit score.
But I mean, the patterns are there. So, you know, that probability you mentioned, that he was basing all this stuff on, this is for real.
Yeah, absolutely. So Bayes' theorem is absolutely crucial to insurance decisions, because again, you can do the same thing. Like, you can look at the number of insurance claims per million insurance accounts or whatever, and you can work out that that would be your prior probability of any given one making a claim. But then you can add more information to it. You can say, I know that young men are more likely to crash their cars than elderly women, or something.
Or you can look at individual cases,
and this person has three prior claims.
That boosts my probability.
It's the same maths.
It's the same maths, absolutely.
You take your prior probability, you update it with new information,
given information as it comes in.
This is absolutely true. I mean, it's also true, to go back to your dating point, you can do the same. Actually, one of the examples that someone used in the book was: if someone asks you at a wedding, how likely is it, you know, a bit of an inappropriate thing to say at a wedding, but imagine someone did, they'd say, how likely do you think this marriage is to go the distance? Your prior probability, you know. A sort of, what's the word I'm looking for, an unskilled and inexperienced forecaster might say, I don't know, I'm in a good mood here, everyone seems happy, they're looking deeply into each other's eyes, everything's brilliant, I'd say 95 percent, sort of translating their feeling of warmth and happiness into a number. But what you should be doing is start with a base rate, and the base rate, in Britain at least, is that about one in three marriages end in divorce. So that's your starting point, and then you can use how deeply they look into each other's eyes, or how firmly they grip each other's hands, or how nice the canapés are, and use that to update.
History, if one of them has a tendency to cheat in relationships.
Yeah, exactly. If one of them's got three divorces beforehand, then that definitely would increase it. You know, if it's Elizabeth Taylor, then, I think, yes.
Exactly. Yeah, Donald Trump. Donald Trump, yeah. Sorry, again, political.
No, but I mean, the real data is there. I mean, he did cheat with each wife, so.
Yeah, I think he's admitted to that, or it's quite obvious, actually, if you follow the timelines.
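The updating Tom describes is often easiest in odds form: posterior odds equal prior odds times a likelihood ratio. A sketch using the roughly one-in-three divorce base rate he mentions; the likelihood ratio attached to "several prior divorces" is an assumed illustrative figure, not a statistic from the book or the episode:

```python
# Odds-form Bayesian update: posterior odds = prior odds * likelihood ratio.

def update_odds(prior_prob, likelihood_ratio):
    """Turn a prior probability into a posterior probability via an odds update."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

base_rate_divorce = 1 / 3   # prior: roughly one in three marriages end in divorce (UK figure Tom cites)
lr_prior_divorces = 3.0     # assumption: this evidence is 3x as likely among marriages that end in divorce

print(update_odds(base_rate_divorce, lr_prior_divorces))  # ~0.60, up from ~0.33
```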
Yes, but there is some comedy here. It's interesting to me: does science use this data a lot? It sounds like insurance companies and other people do. Is this an equation that scientists...
Well, you've hit on an enormous controversy there. Honestly, I've been writing about science for the better part of two decades now, and it is one of the tastiest rows. People get so cross about Bayes. But most science is not Bayesian, right? Most of the time, when people do...
have you heard people say things are statistically significant?
That's just what we're saying there.
Cool.
Okay, so a statistically significant result is when you do, for example, that COVID test, right?
You do the COVID vaccine trial and you get 10 in one group and one in the other.
Imagine it's like flipping a coin: in one group you get 10 heads and in the other group you only get one. You can say how likely you would be to see those two results given the hypothesis that there is no effect. If you imagine that this vaccine doesn't work, how likely would I be to see those results just by chance? And by convention, we normally say that if you'd only see it one time in 20, so that's a five percent chance or less, then the result is statistically significant, and quite often you can get your paper published in a journal or whatever. So that is how science usually works. Now, that is the exact opposite of what Bayes does. That is
the frequentist position, they call it. And that is the sampling probability. That's looking at
how likely we are to see these results. And that's been how most of science has worked for about 100
years now. And Bayesians say, come on, this is bonkers. We want to know how likely this hypothesis is to be true. That's what I want to know, right? I want to know: can I now say that my COVID vaccine is likely to work? Can I now say that my Higgs boson is likely to exist, or whatever? And what the frequentists say is, no, we can't, that involves too much squishy subjectivity, we don't like that. We're just going to look at the data and say, we probably wouldn't see this result by chance.
So let's just say that it's real.
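The frequentist convention Tom is describing can be sketched like this: compute how probable the observed data would be if there were no effect, and call the result "statistically significant" when that probability falls below one in 20. The coin-flip numbers are an illustrative stand-in, not figures from an actual trial:

```python
from math import comb

def binomial_tail(n, k, p=0.5):
    """P(at least k successes in n trials) under the no-effect hypothesis."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 10 heads out of 10 flips of a fair coin: how likely is that by chance alone?
p_value = binomial_tail(n=10, k=10)

print(p_value)          # ~0.00098
print(p_value < 0.05)   # True -> "statistically significant" by the one-in-20 convention
```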
So that is a huge row in science, which in the last sort of 40 years,
more Bayesians have been coming out of the woodwork and it's becoming more of a thing. But yeah, it is definitely used in science, but it is mainly not; it's sort of the minority position, the outsider thing, and all the people who follow it are kind of wild, you know, like bright-eyed evangelists for the cause. So yeah, it does indeed get used in science. It's crucial in some science. I'll tell you what, the classic example of this, right, because in some parts of science you simply cannot avoid using it. Imagine I did a medical test, a test for a medical condition, right? And we know that this test rarely comes back with false results: if you have the disease or the condition, it only comes back with a false negative one time in 100.
And if you don't have the condition, it only comes back with a false positive one time in 100.
I take the test.
I come back with a positive result.
How likely am I to have the condition?
Do you want to take a stab at it?
If it's like cancer, I think 90%, 100%, 90% is going to come back.
No, it's not, you see. This is the trick. I'll tell you my big reveal. You see, my big reveal is: the test is a pregnancy test. So you see what I mean?
Oh, okay, all right, I get caught. Your prior probability of...
Yeah, exactly. Your prior probability of me being pregnant is very low.
But you're probably... yeah.
Yeah, exactly. I'm quite old now, right?
It's a factor.
Yeah, yeah, exactly.
But anyway, so you have to take your prior probability of how likely is this guy to have the condition into account
when you look at how likely they are.
Then they take the test, and that updates your...
But it's more likely that I'm the one in the hundred where I get the false positive than I am to be pregnant.
That's much more likely. That's the more likely explanation.
And if it was a cancer test, you can get quite accurate cancer tests or less accurate cancer tests, and, you know, that depends on the cancer. But even if it's a very accurate test, if the cancer is incredibly rare, then... like, breast cancer screening is something that's often about 90 percent accurate, as in, if you have breast cancer it will correctly say you do about 90 percent of the time, and if you don't have it, it will correctly say you don't about 90 percent of the time. I think I'm pulling those figures slightly out of the air, but that's remembering from the book. Okay, but if you're a 30-year-old woman, breast cancer is very rare.
So that 10% chance of a false positive is a lot more likely than having had cancer in the first place.
So you need to take these statistics,
these prior probability into account.
Otherwise you just,
so the correct answer to how likely was I to have the condition?
It was,
I don't know.
You haven't told me the prior probability.
I just haven't.
I just don't know.
So that's... don't know enough data.
Yeah, exactly. Exactly that.
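Worked through with the rough figures Tom quotes (a test that's right about 90 percent of the time in both directions) and an assumed base rate of one in 1,000, which is an illustrative number rather than one from the episode, the arithmetic looks like this:

```python
# Screening-test posterior: P(condition | positive test), via Bayes' theorem.
# Sensitivity/specificity follow Tom's rough 90% figures; the 1-in-1,000 base rate
# is an assumed illustrative prior, not a number quoted in the episode.

base_rate   = 0.001   # assumed prior: 1 in 1,000 people in this group have the condition
sensitivity = 0.90    # P(positive | condition)
specificity = 0.90    # P(negative | no condition)

p_positive = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
p_condition_given_positive = sensitivity * base_rate / p_positive

print(round(p_condition_given_positive, 4))  # ~0.0089 -- under 1%, despite a "90% accurate" test
```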
I'm going to buy your book
so I can find out
when I get pregnant.
Yeah.
We'll just pee on a stick
like the rest of us.
That's true.
We do that on Fridays around here.
It's a lot of fun.
Let's delve a little bit, if you don't mind, into your other book: Superintelligent AI and the Geeks Who Are Trying to Save Humanity's Future. The Rationalist's Guide to the Galaxy is the first line of that book title.
How did you jump from there to here?
Is the Bayesian stuff being used in AI?
Extremely relevant, yeah.
So from the point of view,
since I'm obviously here plugging that book,
I'll start by talking about that,
but I will answer the question about the earlier book as well. So the AI, you know, these LLMs and things, large language models, the new AI like ChatGPT and all that sort of stuff, and in fact almost any AI, like, to think of an easy example, the ones that are just classifying pictures, you show them different pictures and it says, that's a picture of a dog, that's a picture of... you know. What they're doing is predicting stuff, right? So a large language model is just predicting. If you ask it, how are you, it will say something like, I'm very well, thank you, not because it is very well, but because it's predicting that's what a human would say in that situation. You see?
Yeah.
It is predicting the most likely next string of characters, the next sequence of words or whatever, that follow the sequence of words it's just had.
Now, I don't want to say it is just predicting
because just, you can put just before anything
and make it sound not important.
Like he just ran the 100 meters in 9.6 seconds.
But predicting is really hard, and a lot of what we mean when we say we understand things is that we can predict them.
So it's a big deal, but that is what they're doing.
And before you train the AI at all, you know, its prior probability of the sequence of words that should follow "how are you?", it could be anything.
It doesn't know.
So it just has a low probability on all of these. But then you train it on loads of different data and it notices that this sort of phrase, like "I'm very well, thank you" or "fine, thanks" or "great, cheers," that sort of thing comes up a lot. And so it moves, from the flat, low-information prior that it had before, lots of its prediction onto things like that. And, you know, the one that's recognizing images of cats and dogs or whatever does something similar: it's predicting what a human would say when it looked at that picture, and it's improving its predictions given more and more information. So AIs are fundamentally Bayesian things. That is what they do, right? They are predicting the world given information, and they're building on earlier predictions with new information.
So that is absolutely what they're doing.
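A toy sketch of the updating flavor Tom describes: start from a flat, low-information prior over possible replies and shift probability onto the replies that show up in training data. Real language models are nothing like this simple, but the direction of the update is the point:

```python
from collections import Counter

# Toy illustration only: a flat prior over canned replies to "how are you?",
# updated with counts from made-up "training data".

replies = ["i'm very well, thank you", "fine, thanks", "great, cheers", "purple monkey dishwasher"]
prior = {r: 1.0 for r in replies}               # flat pseudo-counts: no information yet

training_data = ["i'm very well, thank you"] * 50 + ["fine, thanks"] * 30 + ["great, cheers"] * 20
counts = Counter(training_data)

posterior_counts = {r: prior[r] + counts[r] for r in replies}
total = sum(posterior_counts.values())
predictive = {r: c / total for r, c in posterior_counts.items()}

for r, p in predictive.items():
    print(f"{p:.3f}  {r}")
# Almost all of the probability mass moves onto the replies that actually appear in the data.
```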
So that was sort of the link.
I mean, I've written three books so far,
and all three of them have mentioned Bayes' theorem
because I'm a bit obsessed, clearly.
Ah, you're in the cult.
Yeah, exactly.
No, genuinely.
I have a cult for you here that I should join.
Yeah, a bright-eyed evangelist like the other one.
But that first book, The Rationalist's Guide to the Galaxy, it was off the back of a book I read years ago, I reviewed it for the Telegraph when I worked there, by a guy called Nick Bostrom, his book Superintelligence: Paths, Dangers, Strategies, I think it was called. And it was about, he was one of the first people to raise this idea, which now a lot of people seem to be worrying about, that AI could destroy everyone, destroy the world.
And it was sort of arguing basically that it's not that AI will go rogue
or that it'll do the Terminator thing where you say, you know,
it'll become self-aware on the 31st of August, 1997,
or whatever the date was in Terminator 2.
But it will instead do exactly what we tell it to do. But what we tell it to do is not what we want it to do, you know what I mean? And I found that really interesting. I went and met the sort of community of nerds, and again, I say that with deep love, who worry about this stuff. I found them really interesting. I found myself the whole way through, on the one hand, going,
oh, come on, this is crazy sci-fi nonsense,
but on the other hand, never quite able to work out
where the arguments fell apart.
You know what I mean?
Actually, I follow the argument through,
yes, okay, this makes sense to me,
this makes sense to me, it still feels weird,
doesn't feel right somehow,
but I can't make, I can't find the reason,
I can't find the all-convincing argument
to make me stop worrying.
You know, so I ended the book fairly worried that there is a decent chance, you know. It strikes me that everyone being killed by AI would be a bad thing, you know, I'm against it personally. It's controversial.
I'm kind of against it too.
Yeah, yeah, exactly. So yeah, that was a book about, it's sort of, I don't want to say it was a journey, you know, my personal journey, but
it was like it was me testing out the different arguments,
examining things.
And I will say also I was like five years ahead of the game.
Cause everyone's worrying about it now.
I wrote most of that in 2017.
Yeah.
Chat GPT and generative AI really, really landed.
Yeah.
As it were.
I'm still surprised. And the scale and speed of its upgrades are just extraordinary. I'm just holding on for dear life trying to master it.
Yeah.
And, you know, every new thing.
But, yeah, I mean, I guess as long as no one feeds in Terminator 1 and 2
into the training decks, we should be okay.
You know, you just don't want to give it ideas.
That's why any time anyone's talking, you know, on the podcast or other places, about AI killing us, I'm like, don't give it any ideas.
The thing is, they're going to increasingly train it on video as well, so yes, you're absolutely right, it will. And I'll tell you what, if you ask them, because there's lots of writing about this stuff, if you ask ChatGPT or something like that now about it, they've probably trained it not to now, but for a while, if you said, so how will you take over the world and turn everybody into paperclips, it would say, oh yes, I plan to do that. Obviously it's just sort of filling in, it's predicting what a human would expect to hear in that situation. So I don't think it will try and kill everyone yet, but it was grimly funny, you know.
Yeah, yeah. I mean, it's kind of like if you're in a relationship
and you talk about breaking up or divorce just for conversation. You're just kind of, hey, I'm not really thinking about divorce or breaking up, but I mean, do you ever think about getting divorced or breaking up? I'm just kind of bored, making conversation, I don't really mean it.
Yes, not going to end well. No, that doesn't seem a good thing to say.
No, it's going to start a series of events, most likely, and badly. Yeah, I've kind of learned that if somebody quits you or they talk about breaking up, they've given it a lot more thought than you think they have, and they're testing. And so I've learned, when people quit, you have to do that.
Hopefully, we'll learn the same about AI.
This has been very insightful.
Everything is predictable.
I got to read the book so I can figure out when my Raiders are going to win the Super Bowl again and how this all works.
Because I love statistics.
I love data.
I love studying averages. We were joking before the show about how my favorite comment in the world is, or analysis in the world is George Carlin's.
Think about how dumb the average person is and realize 50% of the people are dumber than them.
And then the funny part, the irony that I love about that statement is people are like, what's an average?
I don't understand what that means.
And you're like, you're the Dunning-Kruger person.
Do you know that?
Yeah.
But that's the beauty of Fight Club and the Dunning-Kruger disease: you don't know you're in the club if you're in the club. People who have Dunning-Kruger don't talk about Dunning-Kruger. So there you go. Anything more you want to tease out of the book before we go?
I tell you what I will say: if you're interested
in the human brain and stuff, the big sort of reveal at the end of the book, or not a big reveal, the thing it builds up to, is that, I think I mentioned this at the beginning, what the human brain does all the time is predict stuff.
We predict, you know, that you're building a model of the world around you.
You predict that, you know, that things will fall down when you drop them.
You're not actually seeing the world clearly through a window.
You're building a model, which is a prediction, and you're updating it with information from your senses, and you can use Bayes' theorem to describe that pretty accurately. There are increasingly more scientists who describe the brain in Bayesian ways. So there's a whole big chunk of that at the end, which I think is really fascinating, and I commend it to the listeners.
Nice. I love data, I love statistics.
i've always been an averages guy. I'm surprised
that when you ask people nowadays, what's the average of that? And they're like, what's
an average? And you're like, seriously? This is why you're here. And I love it because
it does give me an aspect of predictability. Though I'm going to misstate something on the podcast, maybe that's what we should be studying. So there you go. But this will get me drilled down into the data I love. I love when people argue with me about data. They're like, what, that can't be true. It actually can. Mirrors are hard, people. It's hard when you look at that reflection and you're like, that can't be me, and you're like, yeah, it is. We're all looking at you, so you're the one who's kind of in there. That city in Egypt: denial.
There's a great... I'll stop talking in a sec, but there was a great psychologist called Paul Meehl, middle of the 20th century mainly, and he noticed that a lot of people's predictions of the world can be outperformed by very simple algorithms. You know, how expensive a vintage of wine will be will be predicted better by just a thing like how much sunshine and how much rain it's had in the season than by what wine experts will do. And it's just, people aren't very good at this stuff. They make a lot of bad predictions, they're not always super sharp, and it's just, yeah.
The numbers often are better than they are at this stuff.
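The "very simple algorithm" here could be as plain as a linear model fitted to past vintages. A sketch with entirely made-up data and coefficients, just to show the shape of the idea Meehl was pointing at:

```python
import numpy as np

# Fabricated example data: one row per past vintage.
# columns: [sunshine_hours, rainfall_mm]
X = np.array([[1400, 600], [1550, 450], [1300, 700], [1600, 400]], dtype=float)
y = np.array([80, 140, 60, 170], dtype=float)   # price index of each vintage (made up)

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

new_vintage = np.array([1, 1500, 500])           # intercept, sunshine, rain for a new season
print(new_vintage @ coef)                        # predicted price index for the new vintage
```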
Yeah, I think that's a truth.
The truism we found is that people are really bad at making evaluations,
probably because they're in that lower average that George Carlin talked about.
But part of it, they don't take up data very well.
You know, a lot of people, I think, base their predictions on emotion, not really logic and reason. I'll hear people say, I just feel it should be that way, and you're like, that's cute.
Yeah. You know, it's not that it's always the wrong thing to do, but quite often you can just look at the numbers and say, look, actually, I know you feel like, I don't know, terrorism is a big threat or whatever, but it's really not, and you're much more likely to get diabetes. And, you know, it is sad. I don't want to tell people they're wrong to be scared of things; what you're scared of is up to you. But it is worth being aware of the real numbers of things so that you can calibrate how scared you should be.
I think it's having a bit of a grasp of, I mean, the thing is, the trouble is, I'm a journalist.
I was saying to you before the show, journalists just aren't very good at numbers.
And my ability to multiply one number by another makes me bang average at maths for most of the world, but really good for a journalist. And it just means I'm not, as journalists too often are, easily confused by, oh, this thing has happened and therefore it's very common, you know. It's a problem, because we get our news from journalists.
Are you sure you didn't write this book to tell your journalism manager there in the office that you're a better journalist, because you can predict how many views you're going to get on your...
I would never say that, and I like my job, so I would certainly not agree with you.
There you go: I'm good at math, so don't fire me, I'm better than the other journalists. I get it, I see what you're up to. So this is really good, this is very insightful. I'm going to
read the book and check it out. Tom, give us your dot-coms so people can find you on the interwebs.
Sure. So once again, twitter.com slash tomchivers, I am on there. Tomchivers.com, but don't listen to what it says there about where I work, because that was like three years ago. And if you Google Tom Chivers, you know what, there's another Tom Chivers, so don't do that. But you can find me at semafor.com to read their daily Flagship newsletter, which is brilliant. It's a very simple way of keeping up to speed with all the important stuff in the world every day, so I really recommend that to readers.
Yeah, please keep up on the important stuff that goes on in the world today. I hear these people that tune out the world, and you know, maybe they have to because it's too overwhelming, but I'm just like, I don't know, I really like the history of the world, understanding the history of the world, because, as you've talked about, it gives me a predictable model for the future. You know, we tend to repeat history because of that predictable model we use of history and character and destiny, and, you know, thereby we go around and around.
you know you you can look at history and be like, you know, you can look at American history,
stuff that CIA has done and us mucking around the world going,
we're here to save you with democracy.
And people are like running away in horror.
Not that democracy is bad.
It's just we're really not good at composing it in places.
Yeah, like the people in Afghanistan.
You know, I think you guys tried changing them.
We tried changing them.
Russia's trying changing them.
I think there's been a few other people who tried to change Afghanistan.
They're pretty locked in.
Yeah, no one's quite got that right, have they?
Yeah, they're pretty locked in to what they like,
and it's probably never going to change.
Your prior probability that invading Afghanistan will lead to democracy
should be quite low, I agree.
That is quite low, yeah.
You think we would have learned that, but I don't know, ego is part of it with some of our leaders, egos and narcissism. Yeah, and then there's lots of money involved too. So it helps that we're capitalists, because we make decisions that hunt capital and money, you know. That's why you guys give health care away technically for free and we don't, because, yeah.
Yeah, it's probably, yeah, I mean, it's all falling apart here as well. Once again getting political, but there we go.
There you go. They can sue us. Thanks for coming on the show, Tom. We appreciate it. Thanks to my audience for tuning in. Order up the book wherever fine books are sold, understand your world, God damn it already: Everything Is Predictable: How Bayesian Statistics Explain Our World, out May 7th, 2024.
You can find out why the world is so weird. Maybe that's the best mess you can have. And if you understand the world better, it seems to make it a little bit more stable, even though it's as chaotic as it is, and maybe that's just jerking your mind off, really, when it comes down to it, but it makes you sleep a little better at night.
Let's put it that way.
And if that's a placebo,
if that's the line from The Matrix that says,
I know the steak is fake,
but I can,
you know,
it tastes like steak and looks like steak in my brain.
You know,
ignorance is bliss.
Ignorance is bliss to a certain degree.
Just trying to lower it.
Joe Pantoliano.
Yeah.
That's a great scene.
Love that movie.
Thanks for tuning in. Be good to each other.
Stay safe. We'll see you guys
next time.