The Joe Walker Podcast - A Practical Guide To Coping With Uncertainty — John Kay
Episode Date: May 17, 2021. John Kay is one of Britain's leading economists. See omnystudio.com/listener for privacy information.
Transcript
Swagman and Swagettes, this episode is sponsored by GiveWell.
I'm so proud to support them.
Imagine if every year you saved a person's life.
One year you rescued someone from a burning building.
The next year you saved someone from drowning.
The year after that, you're out for dinner with your partner or maybe you're on a date.
You notice someone having a heart attack and you perform CPR and save their life.
Think about the warm glow you'd feel living
this extraordinary life. The truth is we have an opportunity to do this every single year of our
lives just by targeting our donations to the most effective charities in the world. How is this
possible? Three premises. Number one, if you're listening to this podcast, chances are you make
more than 19 and a half thousand US dollars per year post-tax and are therefore in
the richest 10% of the world. Two, we can do 100 times more good for others than for ourselves
by focusing on the parts of the world most in need, because a doubling of income will always
increase subjective well-being by the same amount. And three, in the same way as the success of
for-profit companies isn't normally
distributed, some charities are vastly more effective than others. But how do you find the
most effective charities? Well, since 2010, GiveWell.org has helped over 50,000 donors find
the places where their donations can save or improve lives the most. Here's how. GiveWell dedicates over 20,000 hours a year to researching
charitable organizations and handpicks a few of the highest impact evidence-based charities.
The best ones GiveWell has found can save a statistical life for $3,000 to $5,000. Donors
have used GiveWell to donate more than $750 million.
These donations will save over 75,000 lives and improve the lives of millions more.
Here's the best part.
GiveWell is free. They publish all of their research on their site for free so donors can understand their work and recommendations.
GiveWell doesn't take a cut of your donation and they allocate your tax-deductible
donation to the charity you choose. GiveWell is a key organization in the effective altruism
movement, a movement I first became loosely involved in five years ago. I've run several
episodes with leading effective altruists, including Will MacAskill, Peter Singer,
Jaan Tallinn, and Rob Wiblin, and I personally give to the Against Malaria
Foundation, which distributes bed nets to prevent malaria at a cost of about $5 per net.
If you've never donated to GiveWell's recommended charities before, you can have your donation
matched up to $1,000 before the end of June, or as long as matching funds last. Just go to givewell.org slash swagman and pick podcast
and then the Jolly Swagman at checkout.
Make sure they know that you heard about GiveWell
from the Jolly Swagman podcast to get your donation matched.
That's givewell.org slash swagman, select podcast,
and then select the Jolly Swagman at checkout.
You're listening to The Jolly Swagman podcast. Here's your host, Joe Walker.
Ladies and gentlemen, boys and girls, swagmen and swagettes, welcome back to the show.
Welcome to this special shortcast.
I'll explain what it's about in a moment, but first let me introduce our distinguished guest.
John Kay is one of Britain's leading economists, one of my favourite economists,
and the author or co-author of some of my favourite books, including Other People's Money, Obliquity, Radical Uncertainty,
and Greed is Dead. John is a fellow of St. John's College, Oxford University. He was the first
director of the Saïd Business School at Oxford and, for 20 years, a columnist for the Financial
Times. So that's our guest. This episode is about radical uncertainty. Radical uncertainty describes situations in which we can't list all of the things that might happen,
let alone assign probabilities to them.
It drives, in Maynard Keynes' words, the spirit of enterprise.
It moves markets and it makes life interesting.
But many people, chiefly economists, have conflated quantifiable risk with unquantifiable
uncertainty, treating the future as if it were a shadow of the past. The purpose of this episode
is to help people understand why that kind of thinking is erroneous and offer something in
its place to describe how people really cope under uncertainty.
John Kay is uniquely positioned to speak on this topic.
Not only is he a doer whose career spans academia, business, finance, and public policy,
he is also, along with former Governor of the Bank of England, Mervyn King,
co-author of a brilliant book titled Radical Uncertainty, which was published last year.
Mervyn came on the show last year to discuss it.
I've talked to many economists in John's circle of British uncertainty musketeers,
not only Mervyn King, but also David Tuckett and Paul Collier.
But until this episode, I'd never spoken to John.
Here was my pitch to him.
Quote,
Dear John, I typically do very long podcasts, as Mervyn can
attest, of one to two hours, as I think this allows me to ask nuanced questions that haven't
been raised in previous interviews of my guests. I thought we could try something different. I've
noticed, perhaps you have too, that many people pay lip service to the idea of radical uncertainty,
but they don't act as if they truly understand it.
It hasn't sunk into their bones.
To help address this, I'd love to produce a very short,
say 15-minute long, podcast interview with you.
A podcast that could be listened to by busy people
and would be likely to be shared widely.
Here's how it could run.
Part 1, overview of radical uncertainty.
Part 2, implications of radical uncertainty.
Needless to say, John was on board, and this episode is the product of that email exchange.
It ended up being a little longer than 15 minutes,
but it's still more bite-sized than my normal fare. So without much further ado,
please enjoy this special shortcast on radical uncertainty with the very eloquent and very erudite John Kay. John Kay, welcome to the podcast. Good to be with you, Joe.
John, what is radical uncertainty?
Let's begin by saying what uncertainty is.
And uncertainty is not knowing either what is happening or what is going to happen as a result of imperfect information.
And that might be imperfect information about the past, the present, and most of all about the future. So that's uncertainty. Now there are some uncertainties that are resolvable.
And they're resolvable in one of two ways. One is you can get more information. You can look
something up. There are a lot of things none of us know, but the answer is on Google and
Wikipedia or something like that. That's a resolvable uncertainty. The other kind of
uncertainty that's resolvable is something like the results of tossing a coin, where
you don't know what's going to happen, but there's a well-established stationary process that enables you to attach
probabilities to it. And radical uncertainty is what's left once you've resolved these two
kinds of uncertainties. In one case probabilistically, in the other case by seeking out more information.
And radical uncertainty arises because typically we're
dealing with unique situations, we don't know what the range of possibilities are,
and we may not know what the outcome was even after the event has occurred. That's radical
uncertainty. What is radical uncertainty's domain? How do we know whether we're in a situation
which is subject to radical uncertainty or a situation which is subject to resolvable uncertainty?
The question is whether the underlying process that generates what we see
is a stationary, ergodic process or not. By that we mean we know what the process is,
it remains unchanged over time,
and it's unaffected by our interaction with it.
So if we take, for example,
I think we use in the book the remarkable example
of NASA firing Messenger to Mercury.
And it took seven years.
It circled the Earth, it circled Venus,
it circled Mercury several times
before they nudged the spacecraft into a final orbit.
But even so, after seven years,
they knew more or less exactly where it would be.
And that happens because it's a stationary process. We know
what the equations of planetary motion are. We know that they've remained unchanged for hundreds
of years. And Venus and Mercury don't care very much what we think about their equations of motion.
None of these things are true of most of the problems we face
in business, politics, finance.
So some scientific and engineering problems
are capable of this kind of description
because there are long-term stationary processes.
We don't have many of them in social sciences.
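To make the stationarity distinction concrete, here is a minimal Python sketch (an editor's illustration, not an example from the book or the conversation): with a stationary process, observed frequencies converge to a fixed probability as data accumulates; when the process itself drifts, the frequency is just a number with no stable probability behind it.

```python
# Editor's sketch: frequencies only estimate probabilities when the
# generating process is stationary.
import random

random.seed(0)

# Stationary process: a fair coin. The long-run frequency settles near 0.5,
# and more data gets you closer. This is resolvable uncertainty.
coin = [random.random() < 0.5 for _ in range(100_000)]
print(sum(coin) / len(coin))

# Non-stationary process: the bias itself drifts over time, a crude stand-in
# for human affairs, where the process changes and reacts to what we do.
p, hits = 0.5, 0
for _ in range(100_000):
    p = min(max(p + random.gauss(0, 0.01), 0.0), 1.0)  # the world changes
    hits += random.random() < p
print(hits / 100_000)  # a frequency, but no fixed probability stands behind it
```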
What are some causes that generate non-stationarity?
It's human beings, really, or rather biological evolution.
We want very much to represent things as mechanical systems
because we've been very successful at knowing what is going to happen
when we describe a mechanical system. So we want to describe human affairs in that kind of way.
And we can learn some interesting things by trying to do that. Mervyn and I wrote this book with a commitment, actually, to using economic models.
But the purpose of using economic models is to help you understand problems better.
It's not to predict what is going to happen. Most people are probably familiar with the
concept of black swans, which Nassim Taleb popularized. Taleb actually uses
the phrase black swan in two very different ways. How do you think about black swan events,
and how do they fit into the concept of radical uncertainty?
I'm glad you said that, because there's a lot of confusion generated by these
two different senses of black swan. The original one is, as it were, the Australian one,
which is that people thought all swans were white
until European colonists went to Australia
and discovered black swans.
So that was an event that was a black swan
in the sense that you couldn't imagine it before it happened.
And there are many examples of that.
We talk about a paradigm example, which is inventing the wheel.
You can't attach a probability to inventing the wheel,
because if you can think of the wheel well enough to attach a probability to inventing it, you have already invented it.
And the same is true in a more trivial way with something like the iPhone.
25 years ago, if you described an iPhone to people, they just would not have understood what you were talking about.
And that's why there could never have been, for example, a futures market in iPhones.
Now, there's an important point to emerge from that, which is if there had been, there wouldn't
have been the profits to be made that there were from inventing the iPhone. And the iPhone,
therefore, might never have been invented. It's actually the existence of radical uncertainty
that creates profit opportunities in business and finance.
And that was the insight that Frank Knight had a century ago.
And it's an insight that people have lost hold of in the hundred years since.
To say that the future is radically uncertain
is a penetrating glimpse of the obvious
for most people. But why has it been so hard for the economics profession to internalize
this insight? It's an interesting question, especially since exactly 100 years ago,
two great economists, Maynard Keynes and Frank Knight, who I've already
mentioned, wrote books explaining the central importance of radical uncertainty and resisting
the ways in which people at the turn of the 20th century had started to talk about everything
in probabilistic terms.
And both these people argued very clearly and cogently that you couldn't do this.
But the economics profession went off in a different direction.
And the reason for that was essentially the strand of economics, beginning in the late 1930s
and particularly in the following decade through the Second World War, that looked at rationality in axiomatic terms.
That is, economists had a rather strong definition of rationality and rational behaviour.
And they wanted to extend that definition to apply to choice under uncertainty
as well as the choices you make
in the supermarket. And in a formal sense you could do that by creating subjective probabilities
akin to the taste you have for coffee or beer or whatever. But that was a mathematical device
rather than a reality about the world and
the way people think about the world. Interestingly, the beginnings of what people now call behavioural
economics were in the early 1950s. There was a very interesting conference in Paris in 1952 with some of the greatest economic thinkers there
talking about uncertainty.
And they persuaded themselves at a dinner party there
that they didn't think in terms of subjective probabilities,
that you couldn't attach probabilities to everything.
But actually, when the American attendees of that went back to the States,
they changed their mind.
And by the 1960s, Milton Friedman would famously write
that his predecessor at Chicago, actually, Frank Knight,
made this distinction between risk that you could talk
about probabilistically and uncertainty that you couldn't. And Friedman said, I shall not refer
again to this distinction because I do not believe it is valid. We can treat people as if they
attached probabilities to every conceivable event. And that's wrong. How should policymakers or business
leaders deal with radical uncertainty? How can we design systems that are robust to uncertainty?
Let's start with the way people really think about uncertainty, because people don't naturally think
probabilistically. One of the intriguing things we discovered was that
probabilistic mathematics only came into being in the 17th century. And that's pretty odd when you
think about it, because the ancient Greeks and Romans gambled a great deal, and the people in
these societies were certainly pretty good at mathematics, but they didn't develop this branch of mathematics.
And that's because probabilistic thinking is just alien to the way people think about the future.
They think that there is a knowable future somewhere, if only they could get at it. And
that's because human reasoning is more naturally narrative storytelling
than mathematical. And indeed, in very complicated situations, the only way you can frame uncertainty
and make choices is by creating narratives. I remember when I first got involved in expert witness legal cases,
I thought, why haven't lawyers and judges learned to deal probabilistically with the issues which they're faced with?
And I came to realise that's just not the way the law works.
You're asking this question, what is going on here?
You're trying to put together a story that describes the variety of facts and the like that are at issue in any particular case. And it's quite important that every legal case is unique. If you're in court,
you don't want to be told, we work on the basis that we get 95% of our judgments right. You want
to be told we will work very hard to get your judgment right. Does radical uncertainty license
us to be nihilists? If I can't list all of the possible things that might happen in the
future, let alone attach probabilities to them, does that give me carte blanche to believe whatever
I want? It absolutely does not. And this is one of the great mistakes that economists have made,
particularly in relation to the ways in which they think about macroeconomics.
In their models, there are things that are completely known, at any rate up to a probability distribution,
and things that are unknown that are typically called shocks. Now the reality of the business
and financial world is there are a great many things we know roughly, but not precisely.
Very good examples are in speculative bubbles.
We have one at the moment in all the fintech things that surround cryptocurrencies
and the spinoffs from that, the NFTs, SPACs, and so on.
We know quite a lot about how bubbles evolve from our historic experience,
but that's not at all the same as being able to say this one is going to blow up in October 2023.
John, in this final part of the podcast conversation, I'm going to lob some famous predictions at you,
and then you can tell me whether they are intellectually defensible in light of radical
uncertainty. First famous prediction. In his book, Destined for War, Graham Allison notes that in 12
of 16 cases over the past 500 years, when a rising power challenged a ruling power,
the result was war.
On this basis, he argues that while war between China
and the United States is not inevitable,
it is more likely than not.
I think that's a reasonable way of framing it.
What people certainly should not do is note this
and say that the probability of war between these two countries is 0.75, 12 over 16.
But we distinguish between likelihood, which is essentially an ordinal variable, and probability, which is cardinal.
That means it makes sense to say something is more likely
or less likely. It also makes sense to say this information, which I've just received,
makes, in this case, war between China and the United States, more likely than I thought before.
But that's not the same as being able to frame it in terms of probabilities. So to say that on the basis of
history, which is what Allison is saying, it is likely that this kind of standoff will end in war
is reasonable. To say there's a probability of 0.75 or any other number is absurd.
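To see how fragile the 12-out-of-16 number is, here is a small Python sketch (an editor's illustration using hypothetical reclassifications, and granting, for argument's sake, the frequentist framing Kay rejects):

```python
# Editor's sketch: even inside a frequentist frame, 12/16 is a fragile
# number, not a probability of war.
from math import sqrt

wars, cases = 12, 16
p = wars / cases
print(p)  # 0.75, the naive "probability"

# Historians dispute how several of the sixteen cases should be coded.
# Hypothetically reclassify two wars and add two peaceful cases:
print((wars - 2) / (cases + 2))  # ~0.56: the number moves a lot

# And with only 16 observations, a textbook 95% binomial interval is huge:
half_width = 1.96 * sqrt(p * (1 - p) / cases)
print(p - half_width, p + half_width)  # roughly 0.54 to 0.96
```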
This next one is perhaps more an infamous prediction rather than a famous prediction.
Nate Silver's 538 2016 US presidential election night forecast
put Hillary Clinton's odds of winning at 71.4%
and Donald Trump's chances at 28.6%.
Well, you say that's infamous,
and probably you say it's infamous for the wrong reason.
It's not a bad prediction because we know that Trump won.
That's the mistake that's called resulting,
which is judging a forecast by whether it comes
true or not. And we like that phrase resulting, which comes from Annie Duke,
who's a female professional poker player, and she says resulting is thinking that
a decision is right or wrong because you know what happened.
That isn't appropriate in a game like poker. You're going to get some decisions right,
some decisions wrong in that sense. But actually the question is whether it was a good decision
at the time given the information you had. And if you make good decisions of the second kind,
you will win more often than you lose. So this does not demonstrate that Silver was a no-good
forecaster. What it does demonstrate is that Silver's attempt to put a number on it of 71.4% for Clinton was completely ludicrous. You cannot do that.
You can say, and you would have been right to say in 2016, that it's likely Clinton will win, even though she didn't.
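As a concrete gloss on resulting, here is a minimal Python sketch (an editor's illustration, not Kay's or Silver's method): a single outcome can't tell you whether a 71.4% forecast was good, but calibration over many repeated, well-posed forecasts can, which is why the logic works for poker and not for one-off elections.

```python
# Editor's sketch: why "resulting" fails for one forecast but calibration
# scoring works across many repeated forecasts.
def brier(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# One election: Clinton at 0.714, Trump won (outcome 0). One noisy draw;
# it proves nothing about the forecaster either way.
print(brier([0.714], [0]))  # ~0.51

# Many forecasts: 70% calls that come true about 70% of the time indicate
# good calibration, and that IS checkable in repeated settings.
print(brier([0.7] * 10, [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]))  # ~0.21
```

In his book, The Precipice, Australian-born Oxford philosopher Toby Ord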
puts the chance of an existential catastrophe befalling
humanity in the next 100 years at about one in six. He reaches this estimate by roughly adding
up the odds of various existential catastrophes such as climate change, nuclear war and a super
volcanic eruption occurring within the next 100 years. But he does stress: don't take the
estimates to be precise; their purpose is to show the right order of magnitude rather than a precise
probability. Yeah, there's a lot I like about Ord's book, but he's falling victim, and I completely
understand, having talked to many audiences about radical uncertainty.
He's falling victim to the demand to put numbers on what it is he's saying.
He quite sensibly looks at these existential catastrophes,
climate change, nuclear war, supervolcanoes,
and assembles the evidence round about them.
And these are all very different.
I mean, with climate change, estimates of what will happen there depend on the outcome of models,
models actually of rather poor quality. Nuclear war, you have to know quite a lot about geopolitics to talk about how likely
nuclear war is. Supervolcanic eruptions, well we have reasonable data from the past about
the frequency of supervolcanic eruptions. So that's closer to being a stationary process to which we can attach probabilities.
So it's quite sensible to talk about these sources of existential catastrophe and make
assessments about how likely they are.
In one or two cases, like super volcanic eruptions or some of the other existential catastrophes we've had, like asteroids, for example,
you can put probabilities on them because there is an underlying fairly stationary process and data set.
These are processes that have some of the characteristics of the mechanical processes which I described
earlier. Nuclear war isn't remotely a stationary process at all. You can't say, although Nate
Silver might, that because there have been two atomic bombs dropped since 1945, the probability that there will be an atomic bomb dropped tomorrow
is two divided by the number of days between 1945 and now. You can't do that kind of sum,
although that's exactly the sum Silver does in relation to talking about the probability
of the attack on the Twin Towers in 2001.
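For concreteness, the kind of sum being mocked looks like this (an editor's sketch of the fallacy, with the end date chosen purely for illustration):

```python
# Editor's sketch of the fallacy Kay describes: treating one-off events
# as draws from a stationary daily process.
from datetime import date

days = (date(2021, 5, 17) - date(1945, 8, 6)).days  # Hiroshima to episode date
bombs = 2
print(bombs / days)  # ~7e-05 "per day"
# The arithmetic runs, but the number is meaningless: whether a bomb is
# dropped tomorrow depends on geopolitics, not a fixed daily chance, so the
# process is not stationary and the frequency is not a probability.
```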
In their book, This Time is Different, Carmen Reinhart and Kenneth Rogoff present a historical
analysis comparing the run-up to the 2007 US subprime financial crisis with the antecedents
of other banking crises in advanced economies since World War II. They note that standard indicators for the United States,
such as asset price inflation, rising leverage,
large sustained current account deficits
and a slowing trajectory of economic growth,
exhibited virtually all the signs of a country
on the verge of a financial crisis, indeed a severe one.
That's a very good example of how people should be approaching these kinds
of problems. You look at the history, you recognize that every financial crisis is in one sense
unique, but equally that there are common elements to financial crises. And what they're trying to do
is to pull out these common elements.
I talked earlier, actually, about the current speculative bubble we have.
It's almost a mirror image of the one which burst in 2008,
because while the one in 2008 was very much one internalised by market professionals,
this one is almost entirely about financial amateurs outside the mainstream financial system.
And actually, I don't think that's an accident,
that financial crises inoculate people for a time
against these particular kinds of financial crisis.
So the next one is likely to come from an entirely different source.
But that's a different way of thinking about these issues
than either attaching a probability to it
because this is the frequency with which we've had financial crises in the past,
or saying, which unfortunately is what a lot of economists do,
that these are just shocks about which we can say nothing.
As I said earlier, there are things like that,
about which we can say something, but not as much as we would like. In his 2003 book, Our Final Hour,
astronomer Martin Rees gave humanity about a
50-50 chance of surviving the 21st century. What do you make of that prediction?
Well, that's again attaching a probabilistic number in a situation where you can't. He can say,
and he can argue, and he might be right to say, it's likely that the world will end in the next hundred years.
That's an intelligible statement. That there's a 50% chance of it is, I think, not.
The world either will end or it won't. Philip Tetlock argues that it is one thing
to recognize the limits on predictability and quite another
to dismiss all prediction as an exercise in futility. And he points out that his super
forecasters in the Good Judgment Project systematically outperform control groups
and betting markets. In the IARPA tournament, here's an example of a question that they asked forecasters. They asked forecasters
whether the number of registered Syrian refugees reported by the United Nations Refugee Agency
as of 1 April 2014 would be under 2.6 million. What do you make of Philip Tetlock's work?
I think in a way you've answered that question in the way you framed it,
that what Tetlock does, and I find Tetlock's work very interesting, but what Tetlock does
is to frame questions to which there are potentially precise answers, that at some date in 2014 we'll know whether the number of Syrian refugees did not
exceed 2.6 million. But that's not really the question that people making policy towards Syria
want answered. The question they want answers to is what's going on in Syria? What's going to happen?
Will the crisis get better or worse?
And Tetlock's approach gains precision by depriving the question at issue of interest.
Right, so he's framing the questions more as puzzles
rather than mysteries?
Yeah, exactly. That's a distinction we make,
which is to say a puzzle is a well-formulated question
that has a right or wrong answer,
and in due course everyone will agree what the right answer to that was.
Most of the problems we're talking about in business and finance
really are not like that.
In his book Principles for Navigating Big Debt Crises,
Ray Dalio argues that gathering historical data,
stitching together templates, for example,
that a country's long-term debt cycle usually lasts 75 to 100 years,
and then building computer decision-making systems, for example, a depression
gauge, enabled Bridgewater Associates to anticipate the 2008 financial crisis, and in Dalio's
words, do very well when most everyone else did badly.
Well, I think there were many ways of anticipating the 2008 financial crisis. And I'm not sure building
these kind of historical templates would be very helpful in doing it. The truth is that
we're talking about processes that are not remotely stationary over hundreds of years.
If you ask what the economy was like 100 years before 2008, in 1908, the things you could learn from that experience for 2008 are pretty limited.
Once again, it's not nothing, but they're not things that you can develop
quantitative models for. Final famous prediction. In a famous 2015 TED talk, Bill Gates predicted
that, quote, if anything kills more than 10 million people in the next few decades,
it's most likely to be a highly infectious virus rather than a war, end quote.
I think that's an intelligible statement. It might be right. Well, actually, you won't know
whether it's right or wrong, but you will know what did kill 10 million people. This pandemic,
it doesn't look at the moment as if it's going to get to 10 million, so we're no
closer to an answer to that. Is there a message about radical uncertainty that you would like
to leave the audience with? Yeah, the message I always want to leave an audience with on this
topic is: manage risk and then embrace uncertainty.
Risk is about ensuring that your business strategies,
your life strategies,
are robust and resilient to events you can't predict.
Uncertainty, not knowing what is going to happen,
is actually what makes life interesting
and gives people in business and finance opportunities
to have new ideas and make profits.
Uncertainty is what makes us great as human beings.
Risk is what gets in the way of that.
We need to go back to making that risk and uncertainty distinction.
John Kay, thank you so much for joining me.
Pleasure, Joe.
Thank you so much for listening. I hope you enjoyed that as much as I did.
John and I both agreed that discussing examples of predictions was a great way to unpack
the concept of radical uncertainty. For show notes and to join my mailing list,
head to my website, thejspod.com. The audio engineer for the Jolly
Swagman podcast is Lawrence Moorfield. Our video editor is Al Fetty. I'm Joe Walker.
Until next week, thank you for listening. Ciao.