The Knowledge Project with Shane Parrish - #87 Hannah Fry: The Role of Algorithms
Episode Date: July 7, 2020. Mathematician and author of Hello World and The Mathematics of Love, Hannah Fry discusses the role of maths in society and the dating world, and we explore what it means to be human in the age of algorithms. -- Want even more? Members get early access, hand-edited transcripts, member-only episodes, and so much more. Learn more here: https://fs.blog/membership/ Every Sunday our Brain Food newsletter shares timeless insights and ideas that you can use at work and home. Add it to your inbox: https://fs.blog/newsletter/ Follow Shane on Twitter at: https://twitter.com/ShaneAParrish Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
I think one of the big complaints that you get from school kids is like, well, I'm never going to use this stuff.
What's the point of it? It doesn't apply anywhere. And I think really showing just how dramatically important maths is to virtually every aspect of our modern world.
I think that that's something that can really make the subject come alive.
Hello and welcome.
I'm Shane Parrish, and you're listening to The Knowledge Project,
a podcast dedicated to mastering the best of what other people have already figured out.
This podcast and our website, FS.blog, help you better understand yourself
and the world around you by exploring the methods, ideas, and lessons learned from others.
If you enjoy this podcast, we've created a premium version that brings you even more.
You'll get ad-free versions of the show (like, you won't hear this),
early access to episodes (you would have heard this last week), transcripts, and so much more.
If you want to learn more now, head on over to fs.blog slash podcast or check out the show notes for a link.
Today I'm talking with the incredible Hannah Fry, a mathematician, author of Hello World, and The Mathematics of Love.
We talk math, how schools can promote better engagement, human behavior, how math can help you date,
and we explore what it means to be human in the age of algorithms. It's time to listen,
and learn.
Hannah, I'm so happy to have you on the show.
Oh, well, I'm very excited that you invited me.
Thanks for having me on, Shane.
What got you interested in maths?
I like how you said maths there, for starters.
Thank you for anglicizing it, so I appreciate that.
I think partly I was born that way.
So, okay, actually what happened was when I was about 11 years old,
my mum, I think she just didn't know what to do with us over one summer holiday. So she
bought me this math textbook and she made me sit down every day and do a page of this
textbook before I was allowed to go out to the garden to play. And then when I went back to school,
that September after the summer, I was just so much better at the subject. I just understood
everything. I'd seen everything before and I was just really well practiced at it. And I think
that it's inevitable that if you're good at something, you just find it all the more
enjoyable. And the more enjoyable you find something, the less it feels like hard work.
So I think that's it, really. Before then,
I mean, I didn't dislike it at all, but I wouldn't have said it was my thing. But I think that
that was really a stark change. Like after that, then it became my thing. And then, you know,
the more and more I got into it, the more and more it became part almost of my identity.
I mean, math is such a tricky subject for students. They seem to have this very love-hate
relationship with it, with most people hating it. What are some of the things that schools
could do to promote better engagement with students over maths? So it's a tough thing because,
I mean, on the one hand, if you're ever going to be able to reach the most beautiful elements
of the subject, if you're ever really going to be able to properly put it to use, you can't
have your working memory being swamped by remembering all of these rules and remembering these
really fundamental basics of the subject. So it's slightly unfortunate that that inevitably means
that when you're starting out, when you're in the early stages, it has to be dominated by
essentially learning the basics of the subject. It's something that's, you know, difficult, and if it's taught in a very straight fashion, it's not
particularly inspiring. So in terms of what schools can do, I mean, I think for me, I've really
seen a difference when teachers really put in the effort to demonstrate just how useful this
stuff is. I think one of the big complaints that you get from school kids is like, well,
I'm never going to use this stuff. What's the point of it? It doesn't apply anywhere. And I think
really showing just how dramatically important maths is to virtually every aspect of our modern
world. I think that that's something that can really make the subject come alive. Do we see that
sort of manifesting itself now with kids' attitudes, because they're surrounded by algorithms and
machines, and does that change how they perceive math? Well, yeah, but I think that, unfortunately,
the math is invisible, right? Because, I mean, for this stuff to work, for a mobile phone to work,
or, you know, me speaking to you now, however many thousand miles apart we are, the amount of math that's involved
is, like, phenomenal. I mean, it's easily PhD-level stuff. But for this to work effectively, it
has to be invisible. It has to be hidden completely behind the scenes. You as the user can't really
be aware that any of it is there. So even though, you know, as you say, with algorithms dominating
more and more of the way that we're communicating with each other, how we're accessing information,
you know, what we're watching, who we're dating, everything, even so I think the math is so
behind the scenes that I don't think it's necessarily clear that it's driving so much of the
change. As you were saying that, I was sort of thinking of a Formula One
car. You know, the driver gets all the attention, but there's this big, huge team of engineers behind them,
and we don't know their names, we don't know who they are or what they do.
That's a perfect analogy. It's a perfect analogy, I always think. I'm actually a big fan of Formula One, and the reason why I like it,
if I'm honest, is because I think of it as a giant maths competition, just with, you know, a bit of glamour on top.
I have this idea where they, um, they should do a driverless version of the cars too,
because you have this closed track, right?
So it would be super easy to do an autonomous version.
And then the engineers are actually competing.
There's no human element.
And then you could celebrate the engineers.
And I think by celebrating the engineering and the people behind the scenes,
you get kids more interested in that work.
Oh, see, I don't know if I agree with you, actually.
Oh, push back.
Yeah, pushback.
I'm sorry.
So early on.
So, okay, so partly there are examples of that already.
There's a, I think it's called Roborace, which is the fastest autonomous vehicles in the
world. There are different teams, they build the cars, and it's like Robot Wars, right, but on a
track. And it's all very fun. It's all very interesting. But for me, I think that part of the
problem with why maths communication is difficult is that really we care a lot about stories, and we
care a lot about stories of people. And I think that in many ways, the thing that makes Formula One
or any racing so fascinating to watch is that you have, sitting in that gigantic
engineered machine with so much science and technology going into it, a person who cares
so much about what happens in that race. You know, you live the whole emotional roller coaster
with them as the series progresses. And I think if you take that out of the situation, then actually
I think it dehumanizes it and makes it less interesting in a way.
That's really interesting. So how do we make a better story around maths then?
So I think it's that.
For me, it's humanizing it.
I think that really is it for me.
I think, you know, certainly in Britain, and I think in the States too,
there's this massive book called Fermat's Last Theorem,
massive in terms of its sales rather than physically big.
It was written by Simon Singh.
And, you know, I read it when I was maybe 16 years old.
And it was one of the things that really, I guess,
solidified the idea that I wanted to be a mathematician. And in it, it's just a long story of,
you know, hardcore maths throughout the centuries. But what he did was he anchored all of the
stories to the people that were involved. And it's exactly like your race car driver, right? Like,
you care so much about the characters who are involved in this history of math. There's stories
of people. Galois is a great example of a character that Simon Singh tells the story of
in the book. So he was French. He was about 19 years old, I think. Someone, I'm sure, will
know the facts better than me and will contact me and correct me. But he was
about 19 or 20, and he'd been having an affair with a very important person in French society,
a woman who was older than he was, and her husband had found out about this affair and had
challenged him to a duel. Now, of course, in France, and this is, like, I'm going to guess, 1700s,
1800s, in France at that time,
if someone challenges you to a duel,
you do not back out. You go to the duel.
Except, unfortunately, Galois had been working
on this incredibly important
theory of mathematics, now known as Galois
theory, and hadn't quite
finished the maths. And so he knew
that at sunset, he had
to go off and fight this duel
and probably be killed.
And he was desperate,
working all the way into the night, drinking
and hunched over his quill and his paper,
desperately trying to write down as much maths as he could.
And the papers that were left on his desk
as he went off to his duel,
they're just incredible.
You can see photos of them,
or see images of them, they still exist.
And it's loads and loads of equations,
loads and loads of scribbling, and then every now and then
he's like, oh my goodness, what's happening,
this lady, why did I do this.
And, you know, he's desperately trying to finish
everything. And I think for me, that's what makes
the maths come to life. Because when you realize how important this stuff is to people, that
they know that they're going to their death and still the only thing they want to do is finish
their maths, I think that's the stuff that makes it come alive.
That's a great story. I hadn't heard that one before.
It does, doesn't it? Yeah. It sort of, like, pulls you in.
What does it mean to you to be human in an age of algorithms and machines?
Ah. Wow. Goodness.
I mean, I could, and have, written an entire book on the subject.
Exactly.
So I think that actually that whole idea of humanizing maths, I think it sort of works both ways, actually.
I think that you need to humanize maths to make people want to find out more about it.
But I also think that the maths itself needs to be humanized if it's to properly fit in with our society.
because I think this is something that's happened a lot, actually.
In the last decade, certainly.
I think that people have got very, very excited about data
and about what data can tell us about ourselves.
And I think that people have sort of rushed ahead
and maybe not always thought very carefully
about what happens when you build an algorithm
or when you build something based on data
and just expect humans to fit in around it.
And I think that that actually has had quite, you know,
catastrophic consequences. So one of the most famous examples of this is Cathy O'Neil's book
Weapons of Math Destruction, which I think honed in on one aspect of this really brilliantly,
which is, you know, the bias that comes out when you don't think very carefully about taking
this algorithm and planting it in the middle of society and expecting everyone
to just fit in around it. You know, the sort of gender bias that we've seen, the racial bias,
all of that stuff, I think that's very well documented and quite
well known and understood.
But I think there are slightly more subtle things as well.
So the example that makes this a really personal story for me,
and the reason, I guess, why I started thinking about this very seriously,
and the reason why I wrote a book about it,
is because of something that happened to me where I think I made that same mistake,
where I got such tunnel vision about the maths that I didn't think about
what it meant when you put it in the human world.
So this is back in 2011, as soon as I finished my PhD. The first project, really, that I did was a collaboration with the Metropolitan Police in London.
In 2011, we had these terrible riots across the country that started off as protests against police brutality, but they evolved into something else.
And there was a lot of looting, there was a lot of social unrest, really.
And the police had been, I think, slightly stunned by how quickly this had taken hold.
I mean, for four days, really, the city was on lockdown. London certainly was in lockdown.
So we'd been working in collaboration with the police just to see if there had been anything they could have done earlier, just to calm things down, I guess,
to just see if there were signatures or patterns in the data that would have given them, you know, a better grasp on how things were about to
spread. So, okay, we wrote up this paper, and, you know, the academic community were
really happy with it, whatever. And a couple of years later, I went off to this big
conference in Berlin and gave a talk. There was like 1,500 people there at this talk.
And I was standing on stage, giving a talk about this paper. And I think that, I think I was
a bit naive, really. I think I was a bit foolish at the time. Because when you're a
mathematician, there's no Hippocratic oath for mathematicians, right? You don't have to
worry about the ethics of, I don't know, fluid particles when you're running
equations on them. And so I was standing on stage and I was presenting this paper and I was giving
this very enthusiastic presentation. I was essentially saying how great it was that now with data and
algorithms, we were in a world where we could help the police to control an entire city's
worth of people. That was essentially what I was saying. And it just hadn't occurred to me that,
you know, if there is one city in the entire world where people are probably not going to be
that keen on that idea, it's going to be Berlin. So I'm just like totally, yeah, I just didn't
think it through. Anyway, so as a result, in the Q&A of this session, I mean, they destroyed me.
And quite rightly, I'd say. They destroyed me. They didn't destroy the math. They just destroyed the...
They destroyed the man.
They just destroyed, yeah. It was like heckling
and everything. It was amazing. It was amazing.
I think for me, that was just this really, really important moment, because I think
it just hadn't quite twigged with me. I know that it makes me sound really naive,
but it hadn't quite clicked in my mind that you can't just build an algorithm,
put it on a shelf, and decide whether you think it's good or bad completely in isolation.
You have to think about how that algorithm actually integrates with the world that you're embedding it in.
And I think that that's a mistake that sounds like it's really obvious,
but actually I've seen lots and lots of people make
that mistake repeatedly over the last few years, and they continue to make it.
Can you give me examples? What comes to mind when you say that?
Just as a silly example, a kind of more trivial example: the way that some sat navs used to be designed, and this is less true now, was that you would just type in your destination and it would tell you where to go, and off you went.
And you could, if you wanted to, go in and interrogate the interface and find out exactly
where the thing was sending you.
But mostly, you'd put in the address
and it would just tell you where to go.
And that is an example, I think,
of not thinking clearly about the interface
between the human and the machine
because there are all sorts of stories
about people just blindly following their sat nav.
So my favorite example is there was a group of Japanese tourists in Brisbane,
and this is a few years ago,
who wanted to go visit this very popular tourist destination
on an island off the coast of Brisbane. They got in the car, put the destination in the sat nav, didn't look at the map,
off they went, and didn't realise the sat nav was essentially telling them to drive out into the ocean.
And amazingly, amazingly... You'd think, okay, fine, right?
You know, like, you get to the ocean and you're like, well, no, it's obviously asking me
to drive into the ocean, I'm not going to.
They didn't have that moment.
They carried on driving.
They really trusted the machine. They
thought, oh, well, it'll bring us to a path eventually. And eventually they had to abandon their
vehicle, I think, like 300 meters out into the ocean. It's amazing. Like half an hour
later, as the tide came in, a ferry sailed past their abandoned car.
That's crazy. It sort of calls to mind, though: what role do algorithms play, then, in abdicating thinking and authority?
Well, right, that's it. That's it. So I think the shift in design that we've seen recently, and this is only
very recently, so I'm thinking in terms of Google Maps and Waze, certainly, and perhaps others, is that you type in the address and then up pops a
map, which gives you three options, right? So it's not saying, I've made the decision for you,
off you go. It's saying, here are the calculations I've made. Now it's down to you. But it's
giving you, I guess, just that last step where you can overrule it, where you
can kind of sanity check it, if you like. And maybe I'm giving
them a bit too much credit, they did drive out into the ocean, but I sort of think that if these
tourists had seen a map showing that they were going into the ocean, maybe they wouldn't
have done it.
How does that work as algorithms become more and more involved? Is that the goal then? Like,
I'm thinking about the integration between algorithms and medicine, where you're scanning.
Is it always a human overruling?
Are there edge cases?
Like, how do you think about that?
Yeah.
So that, I think, is an incredibly, incredibly tough example.
So, okay, the first algorithms that came through,
the machine learning algorithms, were designed to just tell you
whether there were cancerous cells within an image or not, right?
Yes or no.
And that's all very well.
That's kind of, you know, that's good.
And they proved that they were good, that they could perform well.
But they were problematic.
There were examples where, you know, they'd go into a hospital.
They'd been performing incredibly well on a certain set of images,
and then suddenly they're performing incredibly badly.
And these algorithms are so sensitive that they were picking up on things like the type of scanner that was used, which was making a difference to the decision process of the algorithm.
Actually, the best example of that is there was a skin cancer diagnosis algorithm that was picking up on lesions on people's skin; the training set was photographs
taken by dermatologists.
And it turned out that the algorithm wasn't really looking at the lesion itself at all.
It was deciding whether or not it was cancerous
based on whether there was a ruler photographed next to the lesion or not.
This stuff makes stupid mistakes.
So I think that that was sort of phase one of these algorithms within medicine.
I think phase two is about making them much more able to be interrogated.
So, for instance, DeepMind, who I spent a long time working with on public outreach projects: one of their big systems, rather than just having an algorithm that tells you what the answer is, has two separate AIs, right, two separate agents. One of them highlights areas of interest within the image itself, and then the second algorithm goes in and labels them. It's just kind of opening up the box a little bit more, so that it's possible for a pathologist or a radiologist to interrogate
that image. So, okay, I think that's stage two, right? And that's like, that's the difference
between old-style sat navs and new-style sat navs. But I think that there's a stage three in
medicine that we're only just beginning to go into, which is, I think, an even harder one,
which is that most cancerous cells in people's bodies are actually nothing to worry
about, which sounds like a mad idea. But there was a study a few years ago, you have to forgive me
slightly because I don't have all the numbers on the tip of my tongue, but there was a study a few
years ago where a group of scientists performed autopsies on people who had died from a whole
host of different causes, so everything from heart attacks, car crashes, all these different
kinds of things. And they looked deliberately to see whether they had cancerous cells in the
body. And even though none of these patients had died from cancer, a huge percentage of them
had cancerous cells within their body. And the reason
for this is not that they all had really serious cancer that needed to be detected and
treated; it's that actually this happens a lot, right? If you have breast cancer,
for example, it's not a case of you don't have cancer or you do have cancer. There's a whole
spectrum in between totally fine and really, really nasty cancerous
cells: there are tumours that may turn out to be something bad, or the
body may deal with them, or they may just stay there, untouched,
for essentially all of your life and be nothing to worry about. And the real danger of relying
too much on algorithms to detect those cancerous cells is that if you are too good at detecting
them, you're not just good at detecting the ones that then go on to be a problem. You're also
going to be good at detecting the ones that are nothing to worry about, and hence
potentially causing huge numbers of people to have very serious and very invasive
treatments, like double mastectomies, for instance: life-changing treatments, right,
that actually they never needed to have.
And that, I think, is another thing about that boundary between
how much we trust our machines that I think is not resolved yet, and a sort of tricky
one for the next few years, I think.
That's fascinating.
I hadn't really thought of it in that way before, but I like the way you put it.
I think one of the interesting things going into the future is also going to be, if algorithms
are involved in the decision, is there an obligation to make them open source?
And then that would be sort of like stage one where, you know, you can critique and see the
actual algorithm working.
But stage two would be maybe it's a machine learning algorithm.
And then each iteration that it runs is actually slightly different.
Like, do we have to keep a copy of each algorithm?
And would we be able to detect, like, how it actually works?
I know, I know. It's so hard. It's so hard. Because I think it's very easy, you know, to say there are definitely problems with algorithms that are not open source. It's very easy to say there are huge problems with transparency. But finding the way around it, finding the solutions, is a lot harder. It's a lot harder. I mean, because I think actually I'm sort of of the opinion that open-sourcing algorithms,
at least the ones that have some sort of intellectual property in them,
is both too much and too little.
So what I mean by that is, I think it's too little because if you publish the code,
if you publish the source code of something, the level of technical knowledge and time
that it would take to interrogate it as an outsider, enough that you have a really good
understanding of how it works, enough to be able to sort of
sanity check it, if you like, is just vast. I just don't think it's realistic that
you can ask the community at large to take on that load. But then simultaneously,
I think that by making everything open source, you
are going to stifle innovation, right? Because I think that part of the really good thing,
part of the reason why we've seen such acceleration of these ideas, is because
it's possible to make them commercially viable.
And if you publish things as open source,
then you risk slowing down innovation,
which I don't think you'd want to do either.
The workaround, though, you know,
okay, so what do you do instead?
Because I think that everybody sort of agrees
that transparency is really important here,
I think particularly when it comes to the more scientific end of algorithms.
I mean, to be totally blunt,
I think that unless you're doing science openly,
you're not doing science.
But yeah, so some of the suggestions have been, and I think this is one that I broadly support, to copy the pharmaceutical industry's model, where you have a separate board, like the FDA, who have the ability to really interrogate these algorithms properly and can give a sort of rubber stamp of approval as to whether they are appropriate to be used or not. But that's different from just open source, because an FDA-style body would be able to go in and stress test them, test them
for robustness, and check them for bias, and all of those types of things instead. But I mean,
there's no easy answer, there's no silver bullet for addressing some of the many
problems that algorithms raise.
In general, when do we want algorithms making decisions and when do we want humans making those decisions?
Well, so there are certainly some occasions where actually the further away
humans are from it, the better.
Humans, we're not very good at making decisions at all.
We're not very good at being consistent.
We're not very good at being clear.
With nuclear power stations, for instance,
as much as possible, you want to leave that to the algorithms.
You want to leave that to the machines.
Likewise, in flying airplanes,
I think you want to leave that to autopilot as much as you possibly can.
In fact, actually, there's that really nice joke.
To fly a plane, you need three things:
a computer, a human, and a dog.
The computer is there to fly the plane,
the human is there to feed the dog,
and the dog is there to bite the human
if it ever touches the computer.
Which I think is nice.
And there's definitely some situations
where you want the humans as far away from it as possible.
But I also think that actually
these machines, especially the ones
that are getting much more
involved in more social decisions,
they really are capable
of making quite catastrophic mistakes.
And I think that if you take the human out of the decision,
even if on average you might have a slightly better, more consistent framework,
if you take the human out of that decision process altogether,
then I think that you risk real disasters.
We've certainly seen plenty of those in the judicial system,
you know, where algorithms have made decisions,
judges have followed it blindly and it's been really the wrong thing.
Just to give you an example,
there was a young man called Christopher Drew Brooks.
This is actually a few years ago,
but he was 19 years old, from Virginia,
and he was arrested for the statutory rape of a 14-year-old girl.
So they had been having a consensual relationship,
but she was underage,
which is illegal, and so he was convicted.
But during his trial, an algorithm assessed his chance
of going on to commit another crime in future.
These are the sort of very controversial, yeah, exactly, algorithms that do so,
but they've actually been around for quite a long time.
And this algorithm, it went through all of his data,
and it determined that because he was a very young man,
he was only 19 years old and he was already committing sexual offences,
then he had a long life ahead of him,
and the chances of him committing another one in that long life were high.
So it said that he was high risk,
and it recommended that he be given 18 months of jail time,
which, I mean, I think you can argue one way
or the other, depending on your view. But I think what this case really does do is it highlights
just how illogical these algorithms can sometimes be. Because in that particular case, if instead
the young man had been, I think, 36 years old, that would have been enough. This algorithm had put
so much weight on his age that if he'd been 36, it would have been enough to tip the balance,
even though that put him at 22 years older than the girl, right, which I think surely by any
possible metric makes this crime much worse. But that would have been enough just to tip the
balance and for the algorithm to believe that he was low risk and to recommend that he
escaped jail entirely, which I think is just an extraordinary example of how wrong these
decisions can go if you hand them over to the algorithm. But I think for me, the scary thing
about that story is that the judge was still in the loop, right? The judge was still in the loop
of that decision-making process. And I think that you would hope in that kind of situation that
they would notice that the algorithm had made this terrible mistake and step in and overrule
it. Well, it turns out that, you know, those Japanese tourists we were talking about earlier,
I think that judges are a lot more like them than we might want them to be.
So in that case, and lots of other cases like it, actually the judge just sort of blindly
followed what the algorithm had to say and increased the jail sentence of this individual.
So, I mean, you've got to be really careful, right? You've got to be careful about putting too
much faith in the algorithm. But just on the flip side of that judge
example, I also don't agree with the people who say, well, let's get rid of these things
altogether in the judicial system. Because I think there is a reason for them being there,
which is that humans are terrible decision makers, right? Like, there's so much luck involved
in the judicial system. There's studies that show that if you take the same case to different
judges, you get a different response. But even if you take the same case to the same judge,
and just on a different day, you get different responses. Or judges who have daughters tend to be
much stricter in cases that involve violence against women.
Or my favorite one, actually, is that judges tend to be a lot stricter in towns where the
local sports team has lost recently, which is, it kind of shows you what you're dealing
with, right?
Like, there's just so much inconsistency and luck that's involved in the judicial system.
And I think if you do it right and carefully, I think there is a place for algorithms to
support those decisions being made.
Do you think, in a way, we get to absolve ourselves
of responsibility if we defer to an algorithm? So if you're a judge and you defer to an
algorithm, it's not like you're going to be fired for deferring to the algorithm that everybody
agreed was supposed to input into or make the decision. Exactly that, especially if you're,
you know, especially if people vote you in. And here's a way that you can absolve yourself of
responsibility. I completely agree. I think all of us do it. All of us do it. And the
problem is that this is a really, really easy thing to happen. It's
very easy for us to just, I don't know, take a cognitive shortcut and do what the machine tells
us to do, which is why you have to be so careful about thinking about this interface,
thinking about the kind of mistakes that people are going to make and how you mitigate against
them by designing stuff to prevent that from happening.
Can you talk to me a little bit about what we can learn about making better decisions from
maths?
I'm going to use a pertinent example,
because I think what's going on right now with the pandemic is a really
tragic and chilling example of how important maths can be when it comes to making clear decisions.
Because I think that this is just one situation where, in many ways, maths is really the biggest
weapon that we have on our side. You know, we don't have pharmaceutical interventions yet, we don't have a vaccine yet,
and all we have really is the data and the numbers.
This is March 18, 2020, just for people listening.
Yeah, exactly.
So we're still at the stage where things are ramping up.
I mean, you know, who knows how bad it's going to get from here.
But certainly in the last month, I mean, the epidemiologists and the mathematical modelers are the first ones, really,
who've been sort of raising the alarm
and driving the decision making and driving the strategy and driving government
policies. You know, because at the moment, if you looked only at the numbers of where we are,
I think there's been maybe 150 deaths or so in the UK. I haven't got the exact numbers at my
fingertips, but something of that order, which, you know,
every single one of those is a real tragedy. But it's not a huge, huge number. But the reason
why we know that we're in a bad situation, and the reason why we know we need to take
these extreme measures, to essentially shut down our borders, to shut down our country, is
because the math is telling us what is coming next. We don't have a crystal ball to look into the
future, but really math is the only thing that's there guiding us. It's really fascinating to me.
Can you talk to me a little bit more about the pandemic and sort of like how you think about it
through the lens of math? Yeah, totally. So actually, in 2018, I did a
big project with the BBC, because we knew that a pandemic was coming.
So we teamed up with some epidemiologists from the London School of Hygiene and Tropical Medicine
and the University of Cambridge to collect the best possible data, so that we could be prepared
for when something like this did happen.
The big problem at that point, so this is only a couple of years ago, the big problem was
that if you want to know how an epidemic or a flu-like virus will spread through a population,
then you need to have really good data on how far people travel
and how often people come into contact with one another
and crucially who they come into contact with,
the different age groups,
the settings they come into contact with other people and so on.
And up until a couple of years ago,
and it sounds mad to say it,
given that everyone's carrying mobile phones,
the best possible data that we had,
within the UK at least, for how people moved and how people mixed with
one another was a paper survey from 2006, where a thousand people said, oh yeah, I reckon I did this,
I reckon I went about that far, I reckon I came into contact with these people.
So what we did, with the help of the BBC, because, you know, they have such amazing reach,
is we created this mobile app.
People would volunteer and sign up
by watching the program and so on,
and let us track them around for 24 hours,
and track who they came into contact with,
and also give us loads of things about their demographics,
their age, and so on.
Now, two years later, or less than two years later,
we have this incredibly detailed data set
that's feeding right into the models
that our government is using,
making this enormous difference
in terms of the accuracy of how well we can predict things.
And I just think
it's just the most pertinent and chilling example I've ever been part of,
which just demonstrates how important the maths is if you're going to try and win a war with nature, essentially.
It seemed to me, I mean, there were two different types of people,
just to broadly generalize, going into this pandemic.
There were people who understood nonlinear and exponential functions,
and people who maybe had a harder time with that.
And the people who did seem to understand or grasp those concepts better seemed to take it a lot
more seriously than the people who didn't. And I would love to find a way to help people think
better in terms of exponentiality.
Yeah, of course. I mean, part of the problem is that the word
exponential just gets thrown around. Like, you know, people say, oh, this project's exponentially
more difficult or, you know, exponentially more dangerous. And it's like, well, no, that's not what the
word means. And it is really counterintuitive, because the thing about exponential growth,
it doesn't just mean big, it doesn't just mean lots. It means something very specific. It means
that something is changing by a fixed fraction in a fixed period. So this virus, for
instance, is doubling every five days: doubling is the fixed fraction, and every five days is the fixed
period. And it's just not something that's intuitive at all.
Like, there's the really classic example of the rice on the chessboard. So this is this
idea, it's like a classic story, about an Indian king who was really impressed with the chessboard
when it was shown to him. And so he said, okay, I'll tell you what, I'll give you
a grain of rice for the first square, and then we'll double the grains of rice every subsequent
square. Which sounds like, oh, that's not very much, if you're at the beginning, and it's like
one grain, then two grains, and then four grains. Like, okay, you know, this is not going to cost me
very much. The thing is, is that by the end of the chessboard, you need, like, a lot
of rice. Essentially, you need 18 quintillion grains of rice, which is, essentially, I worked this out,
if you take Liverpool, the area of Liverpool, which I know for American listeners isn't easy
to imagine, but it's essentially like a whole city: it's an area that size, stacked three
kilometers high with rice. That's how much rice it is. I mean, exponential growth is just
beyond imagining. It's just completely counterintuitive.
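The arithmetic behind that story is easy to check. Here's a minimal sketch in Python, using the chessboard doubling and the five-day doubling time quoted above (the starting case count is made up for illustration):

```python
# Rice on the chessboard: one grain on the first square, doubling on each
# of the 64 squares. The total is 2**64 - 1, about 18 quintillion grains.
total_grains = sum(2**square for square in range(64))
print(f"{total_grains:,}")  # 18,446,744,073,709,551,615

# The same doubling arithmetic for a virus that doubles every five days.
cases = 100          # illustrative starting count, not real data
doubling_time = 5    # days, the figure quoted in the conversation
for day in range(0, 61, 5):
    print(f"day {day:2d}: ~{int(cases * 2**(day / doubling_time)):,} cases")
```

Two months of five-day doublings multiplies the starting number by 2^12, roughly four thousand times over, which is why the early numbers look deceptively small.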
One of the stories I loved in your book, switching gears a little here to Hello World: you had this story of Kasparov playing
Deep Blue. And everybody's told that story, but you had a unique angle to it that I hadn't heard
anywhere else, which is that the machine was also playing with Kasparov.
Yeah.
So this goes exactly back to what I was saying earlier.
It's not just about building a machine.
It's about thinking about how that machine fits in with humans and fits in with human
weaknesses.
Because the thing is, is that Kasparov, I mean, he was an incredible player.
So, when I was researching my book, I spoke to lots of different chess grandmasters,
and one of them described him like a tornado.
So when he would walk into the room,
he would essentially pin people to the sides of the room.
They would kind of clear a path for him,
because he was just so respected.
And what he used to do,
he had this trick: if he was playing you,
he would take off his watch
and he would place it down on the table next to him,
and then carry on playing.
And then when he decided that he'd sort of had enough of
toying with you, he would pick up his watch
and he would put it back on, as if to say,
that's time now, I'm done,
I'm not playing you anymore. And essentially everyone in the room knew that was your cue to
resign the game, which is just, like, so intimidating and really, like, terrifying. The thing is,
is that those tricks that Kasparov had, I mean, they're not going to work on a machine, right?
You've got the IBM guy sitting in the seat, but I mean, he's not the one making the moves. He's
not the one playing. So it's not going to affect him at all. So none of that stuff worked in
Kasparov's favour. And yet, the other way around, the IBM machine could still use tricks on him.
So there's a few reports that the IBM team deliberately coded their machine so that, the way it
worked, it would sort of search for solutions, and how quickly the answer came back depended
on how long that search took. But they deliberately coded it so that
sometimes, in certain positions, the machine might find the answer very quickly, but rather than
just come back with the response, they added in a random amount of time where it looked like
the machine was just ticking over, thinking very carefully about what the move was, when in
reality it was just sitting there in a sort of holding pattern. And Kasparov himself, in his
latest book and in several interviews, has said that he was sitting there trying to
second-guess what the machine was doing at all times.
So he was trying to work out why this machine was stuck crunching through very difficult
calculations, and essentially got psyched out by the machine.
Because I think all of the chess grandmasters are pretty much uniformly in agreement
that at that moment in time, when the machine beat Kasparov, Kasparov was still the better
player, but it was the fact that he was a human, it was the fact that he had those human
failings that meant that he was outsmarted by the machine.
That's such an amazing and incredible story. Thanks for sharing that. Your first book, The Mathematics
of Love, explained the math underlying human relationships. How can applying math concepts to
romantic situations be helpful to people?
Well, uh, so this is, it was sort of a kind of private
joke that got terribly out of hand, that book. Where, you know, when I was sort of,
you know, in the dating game, or, like, designing my table plan for my wedding, or
any of those things, I mean, I just, like, generally apply maths to everything, and will just
try and calculate as much as possible, trying to, like, game it as much as possible. And so in
the end, I wrote these up into a book, and it's all very tongue in cheek. But the thing is,
is that while I totally believe that you cannot write down an equation for real romance, you
can't write down an equation for that sort of, that spark of delight that you get when you meet
someone and you know you really like them.
There's kind of, there's no real math in that.
But there's still loads of maths in
lots of aspects of your love life, right?
So there's maths in, you know,
how many people you date before you decide to settle down.
There's maths in the data of what photographs work well
on online dating apps or websites.
There's loads of maths in designing your table plan for your wedding,
to make sure that people that don't like each other
don't have to sit together. Incidentally,
my code's available if anyone wants it.
And there's even, actually, my favorite, favorite one: there's even maths in the
dynamics of arguments between couples in long-term relationships.
So there's lots of little places where you can find a place to kind of latch on and use
the maths.
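The table-plan problem she mentions is a natural constraint-satisfaction problem: guests are nodes, mutual dislikes are edges, and tables are colours. Her actual code isn't shown here; this is just a toy sketch of that framing, with made-up guests and conflicts:

```python
from itertools import product

# Toy wedding table plan: assign guests to tables so that no two people
# who dislike each other end up at the same table.
guests = ["Alice", "Bob", "Carol", "Dave", "Eve", "Frank"]      # made up
conflicts = {("Alice", "Bob"), ("Carol", "Dave"), ("Bob", "Eve")}
num_tables = 2

def valid(assignment):
    # Valid if every conflicting pair sits at different tables.
    return all(assignment[a] != assignment[b] for a, b in conflicts)

# Brute force over all assignments: fine at toy sizes. A real wedding-sized
# instance would call for graph-colouring heuristics or an integer program.
for tables in product(range(num_tables), repeat=len(guests)):
    assignment = dict(zip(guests, tables))
    if valid(assignment):
        print(assignment)
        break
```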
How many people should we date before we settle down?
This is the one that got me in the most trouble.
So, okay.
So here's the problem, right?
What you don't want to do, I guess, in an ideal world, is you don't want to just decide
to latch onto and settle down with the very, very first person who shows you any interest at all
because actually they might not be that well suited to you. And if you hold out a little bit
longer, maybe you'll find someone who's better suited to you. But equally, you don't want to
wait forever and ever and ever, because you may end up missing the person who was right
for you, turning them down because you think someone better is around the corner, and then
finding out that actually they were always the right person. So what you can do is you can set this
up as though it's like a mathematical problem. So you've got a number of opportunities lined up in a
row, sort of chronologically lined up. And your task is you want to stop at the perfect time. You want
to stop at the moment that you're with your perfect partner. So it's essentially a problem in what's
called optimal stopping theory. So the rules are that once you reject someone, you can't go back
and say, actually, I wanted you after all,
because people don't tend to like that.
And the other rule is that once you
decide that you've settled down, you can't look ahead
to see who you could have had,
you know, going on later in life.
So if you frame it like that,
with those assumptions, then it turns out
that the mathematically best strategy
is if you spend
the first 37% of your dating life
just having a nice time and playing the field.
So it's 1 over e, right, so it's 37%.
Yeah, spend the first 37% of your dating life just playing the field, having a nice time,
getting to know people, but not taking anything too seriously.
And then after that period has passed, you then settle down with the next person who comes
along that is better than everyone you've seen before.
So, yeah, that's what the math says.
But I should tell you, right, I should tell you that there's quite a lot of risks involved in this.
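That strategy is the classic "secretary problem", and the 37% figure is 1/e. A quick Monte Carlo check of the rule she describes, in Python (the candidate count and trial count here are arbitrary):

```python
import math
import random

def best_partner_found(n, skip_fraction):
    """One simulated dating life with n candidates, ranked 0..n-1 (n-1 = best).
    Skip the first skip_fraction of them, then settle for the first candidate
    better than everyone seen so far. Returns True if that was the best one."""
    candidates = random.sample(range(n), n)   # random order of arrival
    cutoff = int(n * skip_fraction)
    benchmark = max(candidates[:cutoff], default=-1)
    for rank in candidates[cutoff:]:
        if rank > benchmark:
            return rank == n - 1
    return False  # the best was in the skipped phase: the rule never fires

n, trials = 100, 20_000
for frac in (0.25, 1 / math.e, 0.50):
    wins = sum(best_partner_found(n, frac) for _ in range(trials))
    print(f"skip first {frac:.0%}: best partner {wins / trials:.1%} of the time")
```

The 1/e cutoff finds the single best partner about 37% of the time, and no other cutoff does better. One of the risks she alludes to shows up directly in the simulation: if the best candidate falls in the skipped phase, which also happens about 37% of the time, the rule never settles at all.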
So is that what you tell your husband? You're the best after the first 37%?
Yeah, yeah, yeah. Marginally better. Yeah, that's it, exactly.
Great. How can we use, I don't want to say argue better, but I'll use your language: how can
we use math to argue better in our relationships?
Oh, this is my favorite, favorite one. So this is
some work that was done by the psychologist John Gottman. He's done some amazing work with
couples in long-term relationships, and he's worked out... What he essentially
does is he gets couples in a room together, and he videotapes them, and he gets them to effectively
have an argument with one another, right? So officially, they say that they ask them to have a
conversation about the most contentious issue in their relationship, but basically they lock a
couple in a room and make them have an argument. But what they've done is they've worked out a way to
score everything that happens during that conversation. So every time that someone's positive, they get a
positive score. Every time someone sort of laughs, or, you know, gives way to their partner. But equally, right, if you roll your eyes, you get a negative score; if you stonewall your partner,
you get a negative score, that kind of thing. Anyway, the thing that's kind of neat about this
is that it then means that you can look at a graph of how an argument evolves over time. So the
really nice thing about this is that John Gottman then teamed up with the mathematician
called James Murray, who came up with a set of equations for how these arguments ebb and flow,
the dynamics of these arguments, essentially. And hidden inside those equations, there's something called
the negativity threshold.
So essentially, this is how annoying someone has to be
before they provoke an extreme response in their partner, right?
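The Gottman-Murray models are usually written as a pair of coupled difference equations: each partner's next score is an "emotional inertia" term of their own plus an influence function of the other's score, and the negativity threshold is the point where that influence switches to an extreme response. A toy sketch of that structure, with a made-up influence shape and made-up parameters rather than the fitted values from the study:

```python
def influence(score, threshold):
    # Toy influence function: a mild response to the partner's score, but an
    # extreme negative response once their negativity crosses the threshold.
    # The shape and the numbers are illustrative only, not Gottman's fits.
    if score > threshold:
        return 0.5 * score
    return 2.0 * score  # extreme response once past the threshold

def argue(threshold, steps=10, inertia=0.5, w=1.0, h=-1.5):
    # w, h: running positivity scores for the two partners (made-up start).
    for _ in range(steps):
        w, h = (inertia * w + influence(h, threshold),
                inertia * h + influence(w, threshold))
    return w, h

# A tolerant couple (threshold well below the -1.5 opening) absorbs it;
# a touchier couple (threshold above -1.5) triggers the extreme response
# and the scores spiral downward.
print(argue(threshold=-2.0))   # settles
print(argue(threshold=-1.0))   # spirals
```

In the raw equations, the touchier couple looks worse; Gottman's empirical finding, described next, is the twist: in the data, the couples who react to small negativity early, and then repair it, are the ones who last.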
So my guess would have been, I mean,
they've got the data on, you know,
hundreds if not thousands of couples here.
My guess always would have been, all right,
negativity threshold,
surely the people who've got the best chance
at long-term success, the people who end up staying together,
surely those are going to be the ones
where they've got a really high negativity threshold.
that would have always been my guess.
You know, like the couples where you're leaving room
for the other person to be themselves,
you're not sort of picking on
every single little thing,
and you're kind of, you're compromising, right?
That would have been my guess.
Turns out, though, when you actually look in the data,
the exact opposite is true.
So the people who have the best chance at long-term success
are actually the people who've got really low negativity thresholds.
So instead,
they're the people where if something annoys them, they speak up about it really quickly,
immediately, essentially, and address that situation right there and then. But they do it in a way
where the problem is dealt with, and then actually you go back to, you know, back to
normality. So it's couples where you're continually repairing and resolving very,
very tiny issues in your relationship. Because otherwise, you risk bottling things up and
not saying anything, and then one day coming home totally angry about a towel that's left
on the floor or something, and it just being totally at odds with the incident itself.
You know, bottling things up until it's loaded.
Yeah. I think that's really fascinating, right,
because if you look at what it takes to bring things up in a relationship when they happen, or
pretty close to the time they happen, it means you have a lot of security and comfort, and you know that
bringing this hard thing up might make somebody angry or hurt them, but it's not
going to be the end of the relationship. And then not letting it fester actually makes the relationship
stronger long-term.
Exactly, exactly. Now, of course, the language that you use is really important as well,
right? So you can't just, you know, launch in and be a nightmare, right? But I think,
that's... I really like, I love those stories. I love those stories where there's something about humans
that is just written completely in the numbers. I think that's really wonderful.
Hannah, this has been an amazing conversation. I want to thank you for your time.
Oh, thank you. Thank you very much.
Hey, one more thing before we say goodbye. The Knowledge Project is produced by the team at Farnam Street.
I want to make this the best podcast you listen to, and I'd love to get your feedback. If you have
comments, ideas for future shows, or topics, or just feedback in general, you can email me
at shane@fs.blog, or follow me on Twitter at Shane A. Parrish.
You can learn more about the show and find past episodes at fs.blog slash podcast.
If you want a transcript of this episode, go to fs.blog slash tribe and join our learning
community. If you found this episode valuable, share it online with the hashtag The Knowledge
Project, or leave a review. Until the next episode.
Thank you.