The Origins Podcast with Lawrence Krauss - Tim Palmer: The Primacy of Doubt
Episode Date: February 4, 2023. Tim Palmer graduated from Oxford with a PhD in mathematical physics, working on general relativity, and got a postdoc offer to work with Stephen Hawking. He turned it down and moved into the field of meteorology, and then moved on to climate change studies, where he pioneered the development of what is called ‘ensemble forecasting’ to predict both long-term climate change and short-term weather. This technique has now become a standard in the field, and is necessary to properly account for possible chaotic behavior in atmospheric systems. Even simple classical systems can be chaotic—implying that even minute changes in initial conditions can sometimes produce dramatic variations in their later evolution. The canonical, hyperbolic example is that a butterfly flapping its wings in Kansas might later cause a violent storm on the Eastern Seaboard. At first glance, it may seem that all predictivity must go out the window, but over the past 40 years techniques have been developed for dealing with the so-called ‘fractal’ distributions that often result from chaotic dynamics, and as a result it has become possible to constrain the range of possible long-term outcomes of chaotic behavior. Tim Palmer has recently written a new book, entitled The Primacy of Doubt, which provides a wonderful discussion of the importance of accounting for doubt and uncertainty in a wide variety of systems, from weather to medicine, and even includes discussions of the possible implications of his ideas for the fundamentals of quantum mechanics and gravity. While I am more skeptical of his nevertheless intriguing latter arguments, Tim and I had a fascinating and informative discussion about his own experiences as a scientist, and the importance of explicitly incorporating a range of initial conditions when exploring weather and climate predictions. For many people, uncertainty is something to be avoided, but in physics, uncertainty is an inherent part of our understanding of the world, and it must be faced head-on. Being able to make quantitative predictions with likelihoods that have meaning requires it, and science is the only area of human inquiry where we can state with great quantitative accuracy what the likelihood is that a given prediction will be correct. This is a triumph of the scientific process and deserves to be better understood. In this regard, there are few better guides than Tim Palmer, and it was a delight to spend time with him on this podcast, which will enlighten and entertain. As always, an ad-free video version of this podcast is also available to paid Critical Mass subscribers. Your subscriptions support the non-profit Origins Project Foundation, which produces the podcast. The audio version is available free on the Critical Mass site and on all podcast sites, and the video version will also be available on the Origins Project YouTube channel. Get full access to Critical Mass at lawrencekrauss.substack.com/subscribe
Transcript
Discussion
Hi, and welcome to the Origins Podcast. I'm your host, Lawrence Krauss. Perhaps no aspect of science is as maligned and misunderstood as uncertainty. Many people think that being uncertain about the final results is a bug, but it's actually a feature of science, because science is the only area of human intellectual activity where we can quantify our uncertainties. We can quantify the impact of what we don't know on the predictions we make and talk about the likelihood
that our predictions are going to be accurate
with a quantifiable certainty,
a 95% or 99% likelihood
that what we're predicting to happen will happen.
And that's incredibly important
because in most other areas of activity,
we just make wild guesses about that.
My guest on this program is a physicist, Tim Palmer,
who's worked in a wide variety of areas,
all relating fundamentally to the nature of uncertainty.
He initially studied general relativity and was going to go into general relativity and cosmology, then made a crucial career change and moved to meteorology and ultimately to climate change.
And his work is not just in trying to quantify the impact of uncertainty on our predictions in general, but to explore systems that are so complex that they have chaotic behavior.
And in those systems, very small departures from initial conditions can in certain circumstances produce wildly different outcomes.
And it's important to be able to understand those systems and to utilize intrinsically that uncertainty to be able to make predictions with any kind of accuracy and any kind of likelihood distribution about what's going to happen.
Tim has recently written a book called The Primacy of Doubt, where he talks about his own experiences across the wide range of physics that he's explored, and also more generally in society, from economics to medicine, and even in fundamental science, where we talk about quantum mechanics and the fundamental nature of physics.
But that key aspect, which is often described as: a butterfly flapping its wings in Kansas can ultimately result in a tornado on the East Coast.
The fact that chaotic systems can change dramatically in their outcomes when
initial conditions are not known perfectly, and we can never know initial conditions
perfectly is incredibly important. How can you
handle that as a scientist?
He's talked about the ways that
he and others have developed algorithms,
ensemble predictions that allow us to
be able to say
with confidence
what might happen in a chaotic system
and talk about the outliers, talk about the likelihood
that there'll be extreme weather
in a day or a week, and also to talk
about the possibilities for the future, long-term future
in climate change, which is obviously incredibly important
for planning by politicians and society in general.
And the fact that chaotic systems can have behavior
which is quite non-intuitive is very important.
And to utilize that in your calculations, not to ignore it, but to make the insertion of uncertainty and variation a central part of your calculations, is a remarkable, basically new, area of science,
which is increasingly important in a wide variety of physics
and, of course, in our ability to determine our own future.
I think you'll find the discussion remarkable and very illuminating, as I did,
and you can watch it without any advertisements on our substack site, Critical Mass,
or you can, of course, watch it later on YouTube or listen to it on any podcast site.
Whether you watch it or listen to it, I hope you'll consider subscribing, at least to Critical Mass, because those paid subscriptions support the Origins Project Foundation, which supports the podcast and the other nonprofit activities that we do.
But whether you watch it or listen to it, I think you'll find your view of the world
will be altered in an interesting way by this remarkable discussion with Tim Palmer.
Well, Tim, thank you very much for being on the podcast, agreeing to do this across the ocean.
It's nice to have you.
Thanks, Lawrence. It's good to be here.
And I'm excited to talk to you because, well, I'm going to hold this up, which is, I can see it backwards, but it's The Primacy of Doubt,
which is a lovely title of your book.
And when I heard about the book, I was really excited to do this, because I've often said that uncertainty is the least understood aspect of science among the public, but also that the fact that science is uncertain is its strength and not its weakness. And people don't understand that, and we'll go through that. But I mean, basically, I assume you'll agree with me on this, but science is the only area... Science allows one to define what one means by uncertainty. It's the only area where you can say with any confidence what your confidence is. And that's exactly right.
I mean, you know, I often say, you know, a good hallmark is the fact that in science, you know,
we try to put error bars on our predictions and so on.
And you can kind of tell the difference between science and non-science.
You go to an astrologer and ask them the error bars on the prediction that you'll meet a tall dark stranger in a week's time.
You won't get one.
And that's probably telling you it's not great science.
Yeah, absolutely.
In fact, in some sense, if people don't tell you the likelihood or the uncertainty in what they're saying, you should always doubt it more. And that's the other meaning of doubt, the one you don't mean in this book.
Not that we should doubt science, but that there's doubt in the sense of uncertainty whenever
we say anything. In fact, I'm often asked in the media whether I believe something. And I
always say, I try to always say anyway, that the word belief is not a word that scientists should
use. In science, something is either more likely or less likely, and that's it. And if it's extremely likely, we tend to say it's going to happen. If it's extremely unlikely, we say it's not going to happen. But we don't invoke belief, which implies some kind of certainty. And I use that example: if the weather forecast says there's an 80% chance of rain tomorrow,
does that mean you believe it's going to rain or you believe it's not going to rain? Well, it obviously
doesn't make sense, does it?
I mean, you just say there's an 80% chance of rain
and you determine from that
what your action is going to be.
You're going to take an umbrella to work, or you're not going to have the picnic that you otherwise would have.
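The umbrella logic can be made quantitative with the standard cost-loss rule from forecast decision theory: take protective action when the event probability exceeds the ratio of the cost of acting to the loss if you don't. Here is a minimal sketch of that rule; the function name and all numbers are hypothetical, invented for illustration.

```python
# Cost-loss decision rule: protect iff p * loss > cost,
# i.e. iff p exceeds the cost/loss ratio. Numbers are hypothetical.
def should_act(p_event: float, cost: float, loss: float) -> bool:
    """Expected-cost comparison between acting and not acting."""
    return p_event * loss > cost

# 80% chance of rain; carrying an umbrella "costs" 1 unit of hassle,
# getting soaked "costs" 10 units, so act whenever p > 0.1.
print(should_act(0.8, cost=1.0, loss=10.0))   # True: take the umbrella

# If the potential loss is smaller than the hassle, the same probability
# no longer justifies acting.
print(should_act(0.8, cost=1.0, loss=0.5))    # False: not worth it
```

The same comparison scales up, as the conversation goes on to note, from umbrellas to emissions cuts: only the sizes of the costs and losses change, not the logic.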
And then that's relevant really when it comes
to talking about climate change, because people really do
use the word belief about
climate change. And again, that word is really just as irrelevant as it is for a weather forecast.
You know, we try to make our best
probabilistic predictions of climate change, you know, knowing all the uncertainties that we have
in making such predictions. And it's really not a matter of belief or disbelief. It's whether you
think the probabilities are high enough that it warrants taking action or not, like whether you
take your umbrella to work. Do you cut emissions? It's kind of, in a sense, no different, but it's at
a bigger scale. It's a much bigger scale. And we'll get to it. And I think the point of the book,
in some sense, is how to take that kind of colloquial thinking and make it more rigorous, which is not at all an obvious or immediate route. And yet, as we'll talk about, the consequences can be much more significant than whether you just get wet, although we will talk about meteorology. But the other thing I wanted to point out, which I hadn't realized when I picked up the book, is that it's not just about understanding the nature of uncertainty in science. It's actually something more interesting: thinking about uncertainty is useful in certain instances in understanding how the science works, that uncertainty is a window into the dynamics of systems that you wouldn't have if you didn't consider uncertainty. And moreover,
something that was actually new to me, although I knew it in the context of sort of quantum
mechanics, which we'll get to, is that adding uncertainty in the form of noise, adding noise to
the system sometimes makes the signal stronger.
And that goes completely against conventional wisdom.
We talk about signal-to-noise ratios, and we're usually trying to raise the signal above the noise.
But for some systems, you argue particularly for nonlinear systems,
and we'll get into that and what nonlinear means,
that adding noise actually can sometimes allow you to distinguish a signal that wouldn't be so clear otherwise, or stabilize a signal that wouldn't be stabilized otherwise.
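A minimal sketch of that noise-can-help effect, using the textbook threshold-detector version of stochastic resonance (this particular demo is a standard illustration, not one taken from the book): a periodic signal too weak to ever cross a detector's threshold becomes detectable once a moderate amount of noise is added.

```python
# Stochastic resonance sketch: a subthreshold periodic signal becomes
# visible to a threshold detector only when some noise is added.
# All parameters here are illustrative.
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 10.0, 5000)
signal = 0.8 * np.sin(2 * np.pi * t)   # peaks at 0.8, below threshold
threshold = 1.0

for noise_level in [0.0, 0.3, 3.0]:
    noisy = signal + noise_level * rng.standard_normal(t.size)
    fires = (noisy > threshold).astype(float)
    if fires.any():
        # How strongly do the detector's firings track the hidden signal?
        corr = np.corrcoef(fires, signal)[0, 1]
        print(f"noise {noise_level}: {int(fires.sum())} firings, "
              f"correlation with signal = {corr:.2f}")
    else:
        print(f"noise {noise_level}: detector never fires")
```

With no noise the detector never fires at all; with moderate noise it fires mostly near the signal's peaks, so the firing pattern correlates strongly with the signal; with too much noise the firing becomes indiscriminate and the correlation drops again.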
That's right.
I use the example of the MIT meteorologist Ed Lorenz, who is often seen as the father of chaos theory.
Yes.
But really, and I write about it in the book, his kind of profound discovery came from trying to understand why chaotic systems are so difficult to predict.
Why does uncertainty sort of explode when you have these chaotic systems like the weather and the other examples?
And he realized that there's a type of geometry behind it. To understand that uncertainty, he discovered this new type of fractal geometry, which would have been completely alien to the classical physicists, Newton and so on.
So that's a great example of how really trying to, you know, characterize uncertainty has led to new ways of thinking about and characterizing, you know, the physics of systems. And I think that's a really, really profound discovery of his. Yeah. I mean, you devote a lot of time in the book to parsing that, because it takes a little while to understand what that means, but also to its significance, first for understanding physical systems, and then for the areas in which you've spent a lot of your time, which is meteorology and then climate science, particularly for those systems. And I want to get to it.
I will say, at the beginning of the book there's a paragraph early on about risk analysis, where you say more or less that there are two reasons for doing risk analysis, but I think of them as two reasons for writing the book.
First, there's the practical reason that we're liable to make lousy decisions
if we base them on predictions with unreliable estimates of uncertainty.
But just as important, at least for me as a scientist, we may be able to understand better the way systems work by focusing on the ways in
which they are or can become uncertain. These reasons form the two themes of my book, the
science of uncertainty to predict our uncertain world and to understand our uncertain world.
So there's a bicameral theme there: developing a science of uncertainty, and then also using it to understand our uncertain world, which are different things. And I've always thought of it in terms of understanding our uncertain world, but we develop techniques along the way.
But to talk about, before we get there, and we've taken longer to get to this point than I usually do... No, no, it's an Origins Podcast. And I want to talk about your origins, which you describe in the book a little bit. But I think it's an interesting voyage you took to get where you are, which is to many places. I mean, your interests are so broad.
And I know one person has described you, I think, as a polymath, which I think is
a nicest thing you can ever be called.
And they're very broad.
But your interest began actually in more theoretical areas of physics in general relativity,
but I want to go back before that.
Right.
I want to ask, what led you to science?
Were your parents scientists?
No, no, actually.
Quite the opposite, actually.
I had a brother and sister who were somewhat older than me,
neither of whom nor my parents were interested in science.
But my parents, not so much,
but my brother and sister were very kind of political people.
And I can remember watching, you know, watching the news.
And either my brother or sister would say, you know,
oh, that guy's a rogue.
They'd be talking about a politician or a member of parliament.
That guy's a rogue.
He should be drummed out, you know.
And they talk about some, you know, new policy from the government.
Oh, that's nonsense.
That'll never work, you know.
And I'm kind of thinking, how do they make up their mind so quickly?
You know, I would need to kind of see both sides of the story and then weigh it up, you know.
And it kind of made me realize I'm not a, I'm not cut out to be a politician.
I don't have that intuitive sense of what's kind of right or wrong.
I'm always kind of weighing up things.
So I think rather than...
That's very appropriate given the topic of your book when one thinks about it.
Well, yeah, that's right.
But that kind of steered me away.
And then at the same time, I mean, this was a great era when people like Fred Hoyle, Hermann Bondi, you know, these were the great cosmologists of the kind of, you know, what would you call late 1950s.
Yeah.
You know, they would come on TV at prime time and just give this extemporized lecture on, you know, the universe. And I just remember thinking, wow, that's just so
fantastic. You know, listening to Fred Hoyle talk about, you know, there was no beginning to the
universe and blah, blah, blah. And that got me hooked. So by the time I was a teenager, I was
completely hooked on, you know, Einstein and cosmology and all that stuff. And that determined
what I wanted to do. Ah, so, okay. So it was the popular works of people like Hoyle and others. I mean,
For me, too, it was popular books by scientists, and I've said it many times on this podcast.
That's one of the reasons, one of the reasons why I wrote books in return, because, you know,
it's the idea that I might excite some young person that's happened every now and then.
But it certainly was it for me, which is why I like to encourage people when they're reading books about science,
to read books by scientists if they can.
That's not to put down science journalists, and there's some excellent science journalists.
But it's nice to get that horse's mouth.
And Hoyle, I mean, of course, at the same time people suffered for it, right? Hoyle was a remarkable scientist. He also was a great popularizer, and he was a great writer; his science fiction book The Black Cloud is among the best science fiction...
Yeah, I used to read his science fiction stories as well.
I just devoured everything by these people.
Yeah, well, I mean, I was talking with my friend Richard Dawkins, and I think we both kind of agree that The Black Cloud may be one of the best science fiction stories ever written.
But it's interesting that Hoyle, I mean, he was on the wrong side of history, at least in terms of the Big Bang, although he invented the term.
And that may have been part of it. But, you know, it's unfortunate.
You think of people like him and also Sagan.
But Hoyle is a great example of a scientist who was undervalued as a scientist and, one might say, was deserving of the Nobel Prize that was given to people he worked with, for the seminal work he did in cosmology.
And so sometimes unfortunately popularizing is viewed as a negative
and it has been viewed as a negative in the community.
I don't think so much anymore.
I just want to say that, you know,
I hope that history will prove Hoyle right
in the sense that the Big Bang isn't the beginning of time.
You may not agree with that.
But, you know, so I think, you know,
we may reevaluate not the precise steady-state theory, but his philosophy, and maybe show that it wasn't so far off the mark.
Yeah, no, no.
You know, I view part of the intent of the book, which we talked about before we came on the air here, as being to argue heretically about a number of things. And you make interesting arguments on some really deep issues in physics, one of which is the nature of quantum mechanics and uncertainty, which we might get to, and the other is the sort of nature of cosmology and the Big Bang, and whether what we're seeing is an illusion to some extent, influenced, I think, by your...
Lawrence, I would just say that in my mind those two problems are related. Yeah, you describe clearly that you think one is related to the other. Right. And I think you're influenced, in some sense, by your colleague Roger Penrose, whom I have had lengthy discussions with and don't agree with about some of these things. But it occurred to me, and I find it kind of interesting, that you serve in some sense to defend two scientists who are great in their own right, and have both been recognized as such, Hoyle and Einstein, from what other people might have called their follies. Hoyle's folly, one would say, was not recognizing that the Big Bang happened, and Einstein's folly was not accepting quantum mechanics. And you argue they're actually both right, and they're both right for the same reason. And that's the latter part of the book, which we may get to. But I think that's a more controversial discussion, as you'd agree, and if we can get there, fine. But for my purposes, what's really important about this particular book and about your discussion is helping people understand how uncertainty works in science and, more importantly, in real-world issues like meteorology and particularly climate change, and how it's implemented, because you have a long history in that area.
And speaking of history, let's get back to it. So, Hoyle and others: did you read Asimov?
Did you read all those people too?
Yeah, yeah, yeah, yeah, quite a few.
Feynman, by the way, did you ever read Feynman?
Well, Feynman in what sense?
Well, his popular books. His popular books. Yeah, I mean, perhaps not as many as others, but yeah, I'm certainly aware of them.
Well, I mean, you're more, you're British.
You're more influenced by the British.
By the way, I should say the phrase 'the primacy of doubt' I actually took from the biography of Feynman by James Gleick. Feynman believed in the primacy of doubt, not as a blemish on our ability to know, but as the essence of knowing.
I just, I still find that an absolutely knockout sentence.
It is.
And you may not know, but I will say it here is one of the books I'm happy.
I've written. I wrote a scientific biography of Richard Feynman, which kind of complements Gleick's book.
And that fundamental aspect of skepticism, which is really what it's all about, is so, so important. And Feynman utilized it so much in his own work, and then later in his public work. So, yeah. Anyway, I didn't know whether Feynman reached across the Atlantic as much as Bondi and Hoyle and the rest.
Anyway, so you were hooked and you were going to do general relativity, which you did do.
You went and did a PhD in general relativity, right? Working with Dennis Sciama. Dennis Sciama, who was my supervisor.
Yeah.
British cosmologist, very influential.
He famously supervised Stephen Hawking in Cambridge.
He famously convinced Roger Penrose to move from being a pure mathematician to a mathematical physicist.
And he's had numerous, really illustrious students.
I'm a kind of minor student of Sciama's, but there are lots of very, very big names.
Martin Rees. Martin Rees is one of Dennis's students.
Yeah, he's basically produced, yeah.
So Dennis was a remarkable guy, a really inspirational character.
And yeah, I was very pleased to do a PhD.
But then I came to the end of it.
And I sort of said, well, okay, I've sort of achieved
you know, my childhood ambition to do a research degree in general relativity and, you know, black
holes and all that stuff.
I mean, I was around when Hawking first announced the famous evaporation of black holes.
So I went through all of that, you know, quantum field theory and curved space time stuff,
you know, and I knew that kind of inside out and back to front.
And I was given an opportunity to work with Hawking as a postdoc.
But I then had sort of misgivings about whether that was really what I wanted to do for the rest of my life.
I'd sort of, you know, I ticked off the box of my childhood ambition to do a research degree.
But I kind of met, I mean, I don't want to go into this in detail because it's more detail than you want.
But I met somebody by chance who was a very internationally renowned climatologist.
And he made me realize that the kind of mathematics I'd learned, and even to some extent the physics that I'd learned, was not actually as irrelevant in the world of climate physics as one might think at first sight.
I mean, I have a little story about that: something called the principle of maximum entropy production, which Dennis Sciama was convinced would explain, in a simple way, Hawking's evaporation formula.
This guy told me quite casually, oh yeah, the new big idea in climate science is using the principle of maximum entropy production.
And I'd never heard of this damn thing until, you know, a few weeks earlier.
And it just kind of knocked me out that a thing which from an obscure area of thermodynamics could equally apply to climate or to black holes.
So that kind of eased the transition, let's say.
And I guess I wanted to be able to say at the end of my career, or at the later stages of it, let's say, that I've achieved something that hopefully benefited, you know, society in some way.
So I feel, you know, I feel that's been a good thing.
Well, so that's interesting.
I couldn't help thinking, as you were speaking, of the fact that that small perturbation of meeting someone who said something caused such a huge effect. And it's almost like something one might call the butterfly effect. It is absolutely the butterfly effect. And of course, it immediately provokes the kind of counterfactual question:
what would my life have been like if the butterfly hadn't flapped?
And of course, it's an impossible question to answer.
But I think on the whole, I think, you know, rather,
I think the way to approach these questions is not to say, well, what would have happened if the butterfly had flapped the other way,
Just to say, if you know everything you know today,
would you have made a different decision?
And I don't think I would.
I think, you know, given that my things that have happened in my life, I'm pretty happy with the way it's been.
So I would make the same decision again.
Well, that's all right; you don't have the chance anyway.
Well, except if I understand your cosmology correctly, you may have that chance again.
But in general, you don't have the chance anyway, so you might as well be happy with what you did.
On this epoch of the universe, I will not have that chance again.
But as you say, who knows?
Yeah, well, I think you should not hedge your bets right now; just enjoy the fact that you're happy with what you did.
And I brought that up, obviously, not completely tongue-in-cheek, because understanding how small perturbations can have huge impacts is a key part of chaos, which we'll get to in the book.
And so I do want to... Can I just make one point, which I do feel strongly about, in case people at a similar stage in their career to mine when I was finishing my PhD are listening to your podcast?
Yeah.
You know, you can change fields after your PhD. You can do, you know, years of postdoc. You can change your field.
And indeed, you know, the technical stuff that you bring to a different field,
you'll be surprised that it will be useful, even though, you know, at first sight,
you may not think so.
And so I am a great believer in kind of promoting, you know, these programs which, you know,
which allow people to swap fields.
I feel, you know, it's just completely wrong that we're kind of siloed by the time we're in our 20s.
Yeah.
You know, people are now productive, and we don't retire till much later than we did 50 years ago.
So, you know, we shouldn't be completely typecast by our mid-20s.
And, you know, the more that can happen, the better.
Well, yeah, absolutely.
It rings important for me for two reasons. When I used to be chairman of a physics department, I was talking to students about why to choose physics, say, instead of engineering or something else. And I'd point out that physicists were doing all these different things, because what you learn in physics is basically how to solve problems.
Sometimes when you don't know what the problem is, and that skill is portable.
But the more importantly is this notion of lifelong learning.
Less dramatically than you, for my entire academic career I was a professor of astronomy as well as a professor of physics. From the very beginning, from the time I first got a professorship at Yale and onward, I always had both those appointments, but I actually never took a course in astronomy in my life. And I certainly did astronomy.
And likewise, even though my fields didn't diverge as much as yours,
I've often said, and it's true,
that I learned much more physics after I got my PhD than before.
And I think that's the important thing for people to realize.
It doesn't end there, but you're absolutely right.
I think too many people just feel that they don't want to take that step to study something because they're not certain that they want to do it
the rest of their lives.
But you're never committed.
That's right. Yeah, but it's also that employers, it bothers me that employers, you know, list all these requirements for jobs: you've got to have n years of experience in such-and-such a branch, n years of experience in computing, and so on and so forth. And it almost means, you know, you'll never get shortlisted. If that's the way the job is advertised and you're coming in from a different field, you won't even reach the shortlist. Yeah. So, you know, I say this to employers as well: you've got to be a bit more far-sighted and realize that
you know, it's the way new ideas come.
New ideas, in my view, don't typically just pop out of the ether.
They come from people applying, you know, well-known fields, well-known areas into new areas.
It's transfer of information that is the kind of innovative thing, not kind of something coming out of completely out of nowhere.
Yeah, absolutely. Well, Wall Street learned that.
One year when I was at Yale, I think four of our five PhDs went to Wall Street, because Wall Street knew that these people could solve complicated
equations, work 15 hours a day in rooms without windows.
And that was a skill that was very important.
And actually, another colleague of mine who I've worked with, who was a cosmologist and particle physicist and is now a neuroscientist, Larry Abbott, now at Brandeis. I remember at the time, this was when I was at Harvard, we would joke that he would say, well, I'm going to go into economics, because, you know, the people who basically translated first-year physics problems into economics won the Nobel Prize. So I just have to translate second-year physics problems, and I can win one too. It didn't work out, in any case. We'll talk about economics, which you try to address. And I think your discussion validates a concern of mine about whether economics really
is science. But let's get back to this. So you didn't do the postdoc with Hawking. That's right, I turned it down. And was that really as sudden as you suggest, or had you been mulling it over in your mind for a while? Well, the problem was, I'd been mulling it over and absolutely getting nowhere. I just couldn't decide. You know the phrase, paralysis by analysis. Yeah, that was the problem. The more
I analyzed the problem, the more I paralyzed myself with indecision. And in the end, you know, I was still
trying to finish writing up my thesis. So I'm saying, well, this is just doing my head in. I've got to shelve this for a couple of weeks and just get on with my life, which I did. And suddenly, you know, I don't know exactly when, two weeks later, I walked into my office and it was just obvious. I don't know why it was obvious, and I decided not to quiz myself as to why it was obvious, but I just sat down immediately and wrote a letter to the head of the applied maths and theoretical physics department at Cambridge, politely declining the post. And it kind of got me interested. I mean,
I think that's the, you know, you mentioned neuroscience. I mean, I have been kind of peripherally
interested in how the brain makes decisions and the sort of functions that go behind decision making.
And there's obviously something, you know, slightly mysterious here. We see many examples of that: Roger Penrose talks about how the famous idea that got him the Nobel Prize came when he was crossing a busy street, just dodging the traffic.
And he didn't quite know what idea he'd had, but he knew he'd had one.
And this is fairly universal.
There's something about, you know, important ideas somehow come not when you're focusing hard on the problem,
but the brain kind of works in the background and mysteriously makes connections, you know,
when you're relaxing or not thinking hard.
And it kind of seemed to me that was actually an example of that.
I was just getting myself in a mess trying to think through the problem.
Okay.
Although I don't want to diverge too much on this, but you write a lot about that in the book,
again, a subject which I say, yes, maybe, but I'm not sure I agree completely. Sometimes,
and I think maybe Einstein is another good example, not in special relativity, but in general
relativity, sometimes concerted thinking for a long time also works. And I think in his case,
for 10 years in developing general relativity, focusing on that problem with a laser-like intensity
was, I mean, it was essential for him, at least, and maybe not for someone else.
Yeah, yeah, yeah. No, I mean, there's no doubt that you've got to know your stuff. You know,
You can't have a brilliant idea if you don't know your stuff.
So you've got to immerse yourself in whatever it is you're doing.
But I don't recall whether Einstein said it or not. I mean, take his famous example of free-falling in an elevator.
You know, was that just something which kind of came to him while walking out on a sunny day?
I mean, it probably was, you know, and then he realized from that, everything kind of flowed from that idea.
Yeah, yeah, absolutely.
His Gedanken experiments were good. But I think the point is... anyway, it doesn't matter. I think he was always thinking of Gedanken experiments, not just looking at the sunny day. While he was looking at the sun, he was also thinking. I get the sense he was, anyway.
But did you have a job already or did you just turn down this other job without another?
No, I had two, well, I had two offers of two completely different jobs.
You know, one was, as I sometimes put it, to work at the second most famous university in the world with Stephen Hawking, and the other was just to be a very kind of prosaic scientific civil servant
working at the UK's Met Office,
which was a kind of government meteorological service.
And, you know, my parents thought I'd gone completely mad to turn down Hawking.
But I was intrigued.
I actually kind of knew that. You went to the Meteorological Service; I didn't know if you'd had the offer before. But I'm intrigued by why they made you the offer.
But I'm intrigued in why they made you the offer.
I mean, were they very far sighted and saying,
Well, this person could solve complex equations?
Exactly.
At the time, and I'm not sure whether this would happen today,
at the time, the then-director-General, a guy called John Mason,
who himself had been a professor at Imperial College,
I mean, I was told subsequently,
he briefed the interview committee to just find good physicists and mathematicians,
and he didn't care whether they knew anything about the weather
or not. As I say, today, I'm not sure this would work. You know, the job advertisement would require n years of, you know, meteorological background. But Mason was, thankfully for me, far-sighted, and I could witter on about the principle of maximum entropy production and how it was relevant to climate, really not knowing very much about either subject, but it kind of sounded impressive. And I think I probably duped the committee into giving me a job which I probably didn't deserve. But, you know, thankfully.
Well, I don't know. In retrospect, it looks like you did okay.
Now, you know, one of the things about weather, unlike, say, black holes, is that, as you point out, it has an immediate impact on people's lives.
And everyone talks about it.
Very few people can do anything about it, except doing something about it means maybe anticipating it.
And the real problem of uncertainty that you wanted to focus on... You know, we can all say we can measure this with a certain accuracy, and that people can understand.
But sometimes the question is, how do small uncertainties lead to potentially big effects?
And that's where nonlinear systems come in.
And that's where, and weather is an example.
And you give a great example of a poor guy named Michael Fish.
You want to give that example, which I think is important because it sort of illustrates the significance.
Yeah.
Well, the point about uncertainty in a non-linear system is that the uncertainty is not going to be the same every day.
Your sources of uncertainty might be the same.
The fact that the models are not perfect and the observations are not perfect.
So your kind of uncertain inputs are more or less the same every day. But the uncertain outputs will not be the same every day. And some days, you know, the outputs may be
pretty certain. On other days, they may be somewhat uncertain. But occasionally, they may be
explosively uncertain. And, you know, I try to discuss, I mean, in fact, right in the very early
chapter of the book, I talk about the solar system and, you know, planets going around, the n-body system. And that can be one which can actually look pretty damn predictable most of the time. You know, planets going around in
ellipses and just minding their own business.
And you start to yawn
at the animation. And then suddenly,
explosively out of nowhere, they
just buzz off to infinity, you know.
And the weather's
like this as well. And
a great example, which
had a profound effect, actually, in
the UK, was a
weather forecast where Michael Fish came on the TV and basically said, you know, tomorrow's weather will be pretty average, a bit breezy, but nothing special. And pretty much the worst storm for 300 years happened. And, you know, the town of Sevenoaks in Kent famously became 'no oaks,' because its oaks were blown down.
I mean, literally, in today's money, I think it was billions of pounds' worth of damage, you know, and a number of lives lost.
And, you know, people were incensed.
How could the weather forecast, you know, we pay the Met Office all this money to make
forecasts and they talk about all their supercomputers and their satellite data.
How can they get this thing wrong, you know, only 12 hours before?
And I kind of retrospectively analyzed this case. And it was a classic example of this utterly explosive, explosively uncertain system, where, as near as dammit, you can put butterflies into a weather forecast model. You can't exactly put butterflies in it, but as near as dammit.
You know, within a day they had grown into such amplitudes and scales that they made the difference between, you know, low pressure.
In some solutions, low pressures would form.
In other solutions, high pressures would form, you know, you could get anything in between.
So this was the classic case where, you know, it kind of highlighted the fact that meteorology, if you wanted to be cruel, you could say meteorology wasn't really a scientific discipline at that stage. Because it was like the astrologer just saying you will meet a tall, dark stranger, without any error bars. It will be sunny tomorrow, without error bars.
So it kind of convinced me that we've got to do something about this.
And in a way that the technique was straightforward,
the technical details behind it were complicated,
but the technique was in principle straightforward,
which is you don't run a single weather forecast.
You run 50 forecasts and you change them by flaps of butterflies' wings.
And you just look to see, is this a day where they all diverge?
Or is this a day where they all stick together?
If they all stick together, you can be pretty confident what's going to happen.
If they all diverge, then you can only talk about vague probabilities.
But if some of the members have extreme weather, then at least you can warn people, you know, of the risk of extreme weather.
And then it's up to them how they decide to use that information. But at least you've given them that information.
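Here is a minimal sketch of that ensemble idea, using the Lorenz-63 toy system in place of a real weather model. The member count, perturbation size, and crude integration scheme are illustrative choices for this sketch, not the operational configuration Palmer helped develop.

```python
# Ensemble forecasting sketch in the spirit Palmer describes, with the
# Lorenz-63 system standing in for a weather model. All settings are
# illustrative.
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """One forward-Euler step of the Lorenz-63 equations (crude but fine
    for a demo)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def run_forecast(state, n_steps):
    for _ in range(n_steps):
        state = lorenz63_step(state)
    return state

rng = np.random.default_rng(0)
analysis = np.array([1.0, 1.0, 1.0])   # best estimate of today's "weather"

# Run 50 forecasts, each perturbed by a "flap of a butterfly's wings".
n_members, n_steps = 50, 1000
members = np.array([
    run_forecast(analysis + 1e-3 * rng.standard_normal(3), n_steps)
    for _ in range(n_members)
])

# If the members stick together, be confident; if they diverge, you can
# only quote probabilities.
spread = members.std(axis=0)
print("ensemble spread per variable:", spread)
```

In operational systems the perturbations are chosen far more carefully, and model uncertainty is represented too, but the decision logic is the same one described above: small spread means a confident forecast, large spread means quote probabilities and warn of the outliers.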
Yeah, you present a great series of graphs of that, and you're jumping ahead to these ensemble forecasts, which are an essential part of meteorology, and which we'll get to in climate change.
It's a really eminently sensible idea.
And it's at the heart of related to something which you talk about,
and I've used a lot of my career,
something called Monte Carlo analysis.
Yeah, it's very much linked to Monte Carlo.
If a system is too complicated to follow through exactly and you need a computer to do it, you know, if you can't solve it completely analytically, then if you want to know how things will change, the best thing to do is to change your input variables, run this complicated thing, and see what comes out. And generally that's the way now. I mean, not entirely; now they do somewhat fancier stuff called Bayesian analysis. But in the long run, that was what experimenters did. When a physical experiment was measuring something, in order to compare it to theory, you have certain intrinsic experimental uncertainties, and you would run a simulation, randomly choosing among those different uncertainties, and see what the outputs were, and that would tell you how accurately you were actually measuring something. It's a wonderful way to compare theory and experiment. It's eminently sensible. I was surprised that it took so long for people, and that it was actually hidden for a while. It was top secret.
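A minimal sketch of that Monte Carlo error propagation, with a hypothetical pendulum measurement standing in for the experiment; the quantities and uncertainties are invented for illustration.

```python
# Monte Carlo error propagation: sample the uncertain inputs, run the
# calculation many times, and read the output uncertainty off the
# resulting distribution. The pendulum numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Measured inputs with their (assumed Gaussian) uncertainties.
L = rng.normal(1.000, 0.002, n_trials)   # pendulum length: 1.000 +/- 0.002 m
T = rng.normal(2.006, 0.005, n_trials)   # period: 2.006 +/- 0.005 s

# The "simulation": g = 4 * pi^2 * L / T^2 for every sampled input pair.
g = 4 * np.pi**2 * L / T**2

print(f"g = {g.mean():.3f} +/- {g.std():.3f} m/s^2")
```

The same machinery handles inputs whose errors combine in nonlinear or correlated ways, which is exactly where simple error-bar arithmetic breaks down.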
Yeah, well, but let me be devil's advocate, because what may seem obvious today wasn't so obvious 30 years ago. It's like everything in life: it seems obvious once you understand it. And, you know, the argument from my colleagues was... because you have to remember, when you make a weather forecast, there's a limit: you have a limited amount of time
to make that forecast on a computer.
Otherwise, it just becomes useless.
I mean, if it takes you more than a day
to make a day-long weather forecast,
then there's no use.
And, you know, so typically, you know,
you get observations come in.
You have to kind of assimilate the observations
into the model, run the model forward,
all in the space of a few hours,
and then disseminate the forecast out.
So you've got this very tight schedule.
So, you know, it's not like we could sit back and run these 50 forecasts at leisure.
It all had to be done within a very tight schedule.
And the argument was, well, okay, you're, you're, you know, you're using up lots of extra computer time to do all these 50 forecasts.
Wouldn't it be better to increase the resolution of the model, you know, so that the dynamics were more accurate and you
could assimilate the observations better and so on.
So it was not that people, you know, didn't agree in principle with the idea.
They just said, look, there are better ways of using our computer time than running a forecast
over and over again.
Actually, that's an important point you make later on in the book, which is that it depends on what you're doing. There are certain circumstances where it's not always clear that it's better to do one or the other. Sometimes improving the model is better, and sometimes running more simulations is better. And of course, a lot depends on practicalities, such as the speed of computers and resources, but sometimes it depends on the time available and also on the significance of the result, I think. So all the different approaches need to be considered, and there's not a universal answer, I think, one way or the other.
That's right.
But I want to get to chaos and uncertainty and meteorology and climate, and then risk analysis; that's where I'd like to try and cover those topics, which are at the heart of a lot of the book. But the key point is to decide when, as we say, a small uncertainty... What's really relevant for weather and climate and for a lot of the day-to-day things you go into, like pandemics and economics and wars, is when a small effect can have a big change in the outcome. And that can generally only happen if a system is nonlinear. And so, for the public, I think it's important to explain what we mean by that and why that's the case. So let me ask you to do that.
Well, yeah.
I mean, when I wrote the book, a kind of an example of a nonlinear system came to mind. A nonlinear system is one where, you know, if you double an input variable, then the output variable doesn't necessarily double. It might triple, or it might halve, or something.
So it's not kind of proportional to the change in the input variable.
And the example I used was thinking about,
you know, not that I do this myself,
but if you were to win, you know,
five million dollars or something on the lottery,
you know, I mean, if I won five million dollars on the lottery,
I'd be unbelievably happy.
Yeah.
If I won $50 million on the lottery, I guess I'd be more happy than winning $5 million. I suppose I would; I don't know, but, you know, I'd be more happy. But I don't think I'd be 10 times happier to have won $50 million rather than $5 million. So, I mean, that's a kind of an example, if you like.
If that resonates, I don't know.
Well, in a sense it does, although in the opposite direction, right? Sometimes it goes the other way, where doubling something doesn't make it double or make it half; it makes it five or seven or ten times bigger. And that growth can become exponential. And that's ultimately how, for nonlinear systems, you can get... Yes, although in a nonlinear system, what will actually happen is that errors will grow and grow and grow, but then they ultimately saturate, and they don't get any larger. And that's a kind of nonlinear saturation of an error.
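A minimal sketch of that growth-then-saturation behavior, using the chaotic logistic map as a stand-in nonlinear system (a standard toy example, not one taken from the book):

```python
# Error growth and saturation in a nonlinear system, illustrated with the
# chaotic logistic map x -> 4x(1-x).
import numpy as np

def logistic(x, steps):
    traj = [x]
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        traj.append(x)
    return np.array(traj)

steps = 60
a = logistic(0.3, steps)
b = logistic(0.3 + 1e-10, steps)        # same state plus a tiny "butterfly"

error = np.abs(a - b)
for t in [0, 10, 20, 30, 40, 50, 60]:
    print(f"step {t:2d}: error = {error[t]:.3e}")
# The error roughly doubles each step (exponential growth) until it is as
# large as the attractor itself, after which it saturates near O(1).
```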
But the most important point I really wanted to bring out in all of this is that, you know, sometimes I do get asked this question: well, you could just look historically at a load of weather forecasts and work out, on average, how skillful or unskillful they were, you know, calculate your error bars from a kind of historical set of forecasts and statistically analyze them.
And the point is that that would be better than nothing, but it doesn't take account of the fact that in a nonlinear system, you sometimes get states where errors grow slowly and are relatively unimportant, and other states where errors grow catastrophically rapidly, you know. And it's being able to discern and distinguish ahead of time those kinds of situations.
That's really when it becomes important.
That's really what I think, yeah, that's the heart of the book.
In fact, there's a figure in the book which I think is really the heart of it.
And what really matters, if all of this matters, is to know when something dramatic is going to happen based on your uncertainties
or when you can ignore the uncertainties to first approximation.
And it's also worth pointing out that not all nonlinear systems, I think, are chaotic.
But, I mean, the fact that certain variables depend on the square or the square root of something, like distance versus time in physics... You know, the distance you fall is proportional to the square of the time if you're falling under gravity, which is really not intuitive, because you don't normally ever see it. You know, if your distance increases by a certain amount between zero and one seconds, you might think it increases by the same amount between one and two seconds, which maybe Aristotle thought,
but as Galileo pointed out, it increases a lot more.
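A quick worked version of that point, using the standard free-fall formula:

```latex
% Free fall from rest: d(t) = \tfrac{1}{2} g t^2, with g \approx 9.8\ \mathrm{m/s^2}.
d(1) - d(0) = \tfrac{1}{2}(9.8)(1)^2 - 0 = 4.9\ \mathrm{m}
d(2) - d(1) = \tfrac{1}{2}(9.8)(2)^2 - \tfrac{1}{2}(9.8)(1)^2 = 19.6 - 4.9 = 14.7\ \mathrm{m}
```

So the second second of fall covers three times the distance of the first: distances in successive seconds go as 1 : 3 : 5 : ..., which is exactly the odd-number rule Galileo found.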
But there are certain systems where it's not just that things increase.
It's where you get this intrinsic uncertainty, which I guess is characterized by this notion, which I believe is due to Lorenz, of deterministic nonperiodic flow. The system is deterministic, it's run by equations that are deterministic, but you don't know the outcome, because to know it you'd have to know every variable infinitely well.
And the simplest example, which I think most people don't realize,
which you talk about in the book, related to planets,
but even related to pendulums, is a three-body system,
is the fact that, you know, those of us who took physics, and some people have good memories of that and some bad, remember the Earth going around the sun, and it all looks like everything is described beautifully by Newton, and it is. But then you throw another planet in there, or another pendulum, in the case of a triple pendulum, and suddenly it's possible.
It doesn't always happen,
but it's possible that all hell can break loose.
And the simplest example, as you point out,
is that if you have three bodies, under certain circumstances, well, in general, there's no way you can know the future history of that system with 100% accuracy.
That's right.
You can't write down a formula.
There's no formula which sort of tells you what's going to happen. And that kind of shattered a lot of dreams, you know, in 19th-century science.
Absolutely.
Where people thought, okay, with Newton, once you have the initial conditions, the rest of the universe is determined, and we can in principle predict it, arbitrarily far into the future and arbitrarily accurately, if we have a good enough system.
And there are certain systems where that's just simply not possible.
And it's a shock to realize it.
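To see that sensitivity concretely, here is a minimal sketch that integrates two copies of a planar three-body system whose initial conditions differ by one part in a billion and reports how far apart the solutions end up. The masses, positions, and the crude fixed-step integrator are all illustrative choices; a real calculation would use an adaptive, higher-order scheme.

```python
# Sensitive dependence in the gravitational three-body problem: two runs
# differing by 1e-9 in one coordinate diverge macroscopically. Units are
# chosen so G = 1; the configuration is illustrative.
import numpy as np

def accelerations(pos, masses):
    """Pairwise Newtonian gravity, G = 1."""
    acc = np.zeros_like(pos)
    n = len(masses)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def integrate(pos, vel, masses, dt=0.001, steps=20000):
    """Leapfrog (kick-drift-kick) integration."""
    for _ in range(steps):
        vel += 0.5 * dt * accelerations(pos, masses)
        pos += dt * vel
        vel += 0.5 * dt * accelerations(pos, masses)
    return pos

masses = np.array([1.0, 1.0, 1.0])
p0 = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])
v0 = np.array([[0.0, -0.5], [0.0, 0.5], [0.3, 0.0]])

final_a = integrate(p0.copy(), v0.copy(), masses)
p0[2, 0] += 1e-9                      # the "butterfly": shift one body a hair
final_b = integrate(p0.copy(), v0.copy(), masses)

print("separation of the two solutions:",
      np.linalg.norm(final_a - final_b))
```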
But on the other hand, as you point out, when you look at the motion of, say, three planets, most of the time it's just sensible. And then every now and then, something gets knocked off to infinity. And then you say, well, gee, I'm worried now. What about nine planets? Or, if you get rid of Pluto, eight planets and the sun. And then, you know, you start to worry: are we destined for doom? And then one has to understand how to picture what's happening.
So, I'm leading... Well, I was going to say...
I mean, you know, it seems to be...
I mean, people do these very, very long calculations
of the solar system, and it seems to be pretty unlikely,
you know, that the Earth will suddenly get ejected
from the solar system.
But there's a more practical problem,
which is the asteroid belt.
You know, the motion of the asteroids
is essentially unpredictable on long time scales,
and we don't know whether one will get ejected from the belt and ultimately hit Earth. And of course, that can be just as devastating as a nuclear bomb or something hitting a city. So, you know, that's why NASA sent the, was it DART, or whatever it was called,
to see whether we could deflect an asteroid.
You know, so that's really a good case of trying to take kind of mitigating action
against an intermittent instability.
Yeah, absolutely.
And we'll get back to it.
I think it's a good example.
Remind me, because I'd like to use it when we talk about risk analysis: you know, the probability of something happening versus its implications. And so I'd like to try and get there.
Right.
But it's okay.
This is, I think, all particularly useful.
And what is probably worth pointing out is that the solar system is chaotic. It's known to be chaotic, but it's chaotic in a kind of manageable way, as far as we can tell. And moreover, we use that. In fact, climate scientists have used that ever since the work of a guy named Milankovitch, who realized that it's actually small-scale, you know, but generally predictable chaotic changes that produce more or less cyclical variations in the Earth's climate. And so we take that chaos into account.
But some people, who like to argue against all evidence, would say, well, then what we're seeing now is just that.
And it's worth pointing out that these are cycles that are over 100,000 years or so,
not over 20, 30 or 40 years.
So we can manage it. I mean, chaos doesn't mean you can't do anything. There is a science of chaos. And that's really at the heart of much of the book.
And you talk about Lorenz and his importance in thinking about this in terms of geometry.
And I think that distills down... You know, one can talk about fractal geometry, and it's all very nice to talk about fractional dimensions; it sounds nice. But I think more important, operationally, is the notion of attractors: ultimately, you can think of the geometry of a system by thinking about how the system changes in what we call phase space, which is just the parameters, you know, the distance or some other characteristic. Over time, the system will move through some region of phase space. Planets that are in perfect orbits will come back to where they began in position, and we can map out their position. That's the simplest case. A chaotic system maps out a much more irregular kind of motion. And what is by far, in my mind, the most important figure in your book, which happens to be figure 10, really encapsulates everything: while it's impossible, if a system's chaotic, to follow all the trajectories exactly, it can be that the trajectories map out a nice shape that has maybe a bimodal distribution, and the system ends up on, say, one lobe or another. And those are called attractors, right? Right. The attractor, well, the attractor is
the overall kind of geometry on which the system evolves. But as you say, you can look at how a small ball of initial conditions would grow. It would start off looking like a kind of distorted banana, or a kind of bent-over banana, and then it would start getting more complex, and so on. And in a sense, that's really what we're trying to calculate in doing, you know, probabilistic weather prediction. It's these sorts of small shapes, and how they deform, and whether they grow rapidly or, you know...
My dog just likes to grow his excitement rapidly every now and then. And I'm just going to lob in, just for the discussion,
that the equation that describes the growth of these probabilities
is actually remarkably similar to the Schrödinger equation in many respects.
And that's kind of motivated me, you know,
in the later more speculative part of the book to look at that.
Well, the key point is, I think, not just that you can watch the shape of things change. The key point, and this is where I think the real heart of dealing with people's concerns about drastic or unlikely events lies, is that you can map out a region of phase space and say, I don't know what the variables are exactly; I don't know the position of that particle exactly, or what its speed is.
So there's some ring of uncertainty that we're used to.
but what's really interesting is in these chaotic systems,
you can watch how that ring of uncertainty changes,
and depending upon where you happen to be in the phase space,
it will change in very different ways.
And there are some places in this phase space where you don't have to worry very much.
It'll change, but not dramatically.
But there are other spaces where all hellblakes loose.
Right, right.
And I think that's the key point.
And if you can figure out a way to find out where you are in that,
some estimate of where you are in that face space,
then you can say whether extreme events are going to happen with some probabilistic.
That's right.
That's right.
Confidence.
And, you know, people now realize that's really important information. This is not just esoteric, abstract science. This is really relevant.
It really is.
Now, I'm going to have to take a walk out, because I want to see whether the phase space of my dog has diverged enough.
I'll just wait one second.
Okay.
Come on back, guys.
The problem is, it's nonlinear. I have two dogs here, and if it were just one, it would be easier to handle. But okay.
You need three dogs, though.
I know, I know. But two dogs plus me is an intrinsically chaotic system.
In any case, I really want to get back. So that's the key point: recognizing, first of all, that you've got to find a way to follow, quantitatively, how your uncertainty evolves; and realizing that, while you can't do things exactly, there are certain places where that uncertainty can grow, so that, while you can't map out exactly what's going to happen, you can say with great confidence that there is a distinct possibility, one you can quantify as a probability, that something dramatic will happen. And that's important for weather, for certain, but also for climate modeling.
And what you do is beautifully describe that. If I were to name the most important thing the book illuminates for me, it's exactly how the science has evolved to be able to do that. And one of the ways, as you point out, comes back to what seems eminently reasonable in retrospect, as most things do, hindsight being 20/20: saying, look, if we can't model it exactly, let's take into account our intrinsic uncertainty, our intrinsic ignorance, by running the system and randomly choosing initial conditions that may be slightly different from the ones I thought I had. And that's what I've always thought of as Monte Carlo, and you call it, well, I guess, an ensemble analysis. Is there a difference in your mind between ensemble prediction and Monte Carlo prediction?
Well, there are two things I want to say. The word Monte Carlo, it's very interesting, actually. The term pretty much came out of the work of von Neumann and his colleagues, you know, doing their nuclear bomb work. They wanted to work out how the neutrons were diffusing away without using probabilistic equations, which made everything very complicated. And they came up with the name Monte Carlo because, I think, Ulam's uncle or someone was a great gambler in the Monaco casino. But the term was kept as a bit of a secret idea. And at the time, there was a guy called Chuck Leith, who worked on this program, who subsequently became a climate modeler and actually wrote one of the first papers about Monte Carlo forecasting in weather prediction. But somehow it was not taken up for many, many years.
Now, the thing is, and this gets a little bit technical, if you just literally add random noise to an initial condition in a numerical weather forecast model, the opposite of what you'd think might happen, happens. The butterflies don't actually grow, because right near the grid scale of these models you have to put in quite a lot of artificial diffusion to keep the model numerically stable. So typically, random perturbations actually decay more quickly than they would in reality. So quite a lot of the work I did in the 1980s was developing techniques, which actually turned out to be quite mathematical techniques to do with a kind of generalized instability analysis, where we would introduce perturbations that we knew projected onto finite-time instabilities of the system, to overcome this problem of damping by artificial diffusion in the models. So it wasn't really Monte Carlo in the strict sense of the word. That's why I thought, well, we'll just use the word ensemble in a slightly more generic kind of way.
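The distinction can be sketched with plain linear algebra. This is only the idea, with a random stand-in matrix, not the actual operational machinery: if M represents the linearized forecast dynamics over some interval, its leading singular vector is the fastest-growing initial perturbation, and it amplifies much more than a typical random perturbation of the same size, which is why perturbations targeted at the growing directions behave so differently from naive random noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
# Stand-in for a linearized forecast propagator over a finite time interval;
# in a real system this would come from linearizing the model about a forecast.
M = 0.5 * np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)

U, s, Vt = np.linalg.svd(M)
leading = Vt[0]                               # fastest-growing initial perturbation
random_pert = rng.standard_normal(n)
random_pert /= np.linalg.norm(random_pert)    # same initial size as `leading`

print("growth of leading singular vector:", np.linalg.norm(M @ leading))      # equals s[0]
print("growth of a random perturbation: ", np.linalg.norm(M @ random_pert))   # smaller
```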
Yeah, well, ensemble has a long and distinguished history as a term in physics, going back to just about the first time uncertainty was really incorporated into physics, with statistical mechanics. The notion of ensembles became a central part of that understanding of how a system of many, many particles, each behaving randomly, can behave globally in ways you can try to understand, one of the great triumphs of Boltzmann and others.
But when it comes now to treating these ensembles: you spend a lot of time talking about the science and the history of what's been done in weather prediction, and then, more relevant for many people nowadays, climate modeling, which I want to get to. But one of the things that I learned, or maybe I'd forgotten it, but it really hit me in reading the book, was, again from Lorenz, I think, this limit of weather prediction. We always wonder why we're limited to two weeks. I remember when I was younger, I couldn't believe a forecast for more than a few days; then, okay, a week I could believe. And you might figure that as you get better and better computers, you'll simply get better and better at forecasting the long term. But in fact, one can say with more or less certainty that there's a limit. There's an absolute limit on your ability to forecast the weather more than two weeks or so.
And the reason has to do, now, not with the intrinsic uncertainty of, say, the motion of a planet, but with fluids, which at smaller and smaller scales can have turbulence. And this very important phenomenon, which you also talk about, which I guess I'd call scaling and you call a power law: for systems that have all this activity on many different scales at the same time, uncertainties on smaller scales can blow up much faster than uncertainties on large scales. And I think you use the argument that if you have a weather model and your grid is, say, a thousand kilometers, or ten kilometers, or whatever, then you might expect that uncertainty to blow up in a day; but an uncertainty on a scale of, you know, ten meters can change things dramatically in an hour or so. And I want you to go through the mathematics of that sum, which I kind of like, because it's the sum of an infinite number of numbers, which is always fun, going back to the first person who puzzled over that a long time ago and then solved it: the fact that an infinite series can add up to a finite number. So I'll leave it to you now to go through that.
Well, I mean, just to say, Lorenz's famous paper
in 1963 was a model which just had three variables.
And it showed this property that a small amount of uncertainty in one of the variables could eventually cause the system to become unpredictable.
But that system has what a mathematician would call the property of continuous dependence on initial conditions. That sounds a bit abstract, so let me put it like this:
if you say to me, I want to be able to predict the variables in Lorenz's model out to two weeks, three weeks, three years, 30,000 years, three trillion years, I could tell you how accurately you would need to know the initial conditions to make that prediction. And the point is that, no matter how far ahead you want to predict, in principle you can make that prediction, as long as you can bound the initial errors by a certain amount. So, in a way, there isn't an absolute limit to the predictability of a simple chaotic system like Lorenz 63.
But in weather forecasting, we're not dealing with these kinds of what are called low-order chaotic systems. We're dealing with the Navier-Stokes equations, which are essentially Newton's laws of motion for a fluid, and which in principle involve an unlimited number of scales. Now, you could say, okay, eventually we'll hit quantum mechanics, but quantum mechanics is uncertain anyway, so let's keep quantum mechanics out of it. We'll just use the classical Navier-Stokes equations, which a mathematician would call an infinite-dimensional nonlinear partial differential equation, which basically means...
Which sounds intimidating, certainly.
Well, it is intimidating.
But it basically means it can support large whirls and eddies, smaller whirls and eddies, and so on. You have the very large-scale weather systems. Embedded in those might be clouds. Embedded in a cloud is sub-cloud turbulence. Embedded in the sub-cloud turbulence is something even smaller. And it just cascades right the way down.
I mean, ultimately to individual molecules.
Yeah.
Now, Lorenz, in a paper which is not as well known as his 1963 paper, another paper, in 1969, asked: if I had a little error in one of these really, really tiny scales, how rapidly would it propagate up to the large scales? And this is actually what he meant by the butterfly effect. The term has become a bit misused, actually, because what he meant by the butterfly effect doesn't really apply to his three-component model; it applies to this kind of Navier-Stokes type of equation. And basically, what he showed is something mathematically rather interesting, which is that your small-scale error will grow, and eventually you will lose any predictability in the weather after, say, two weeks.
Now, you then might say, okay, well, I'll try to reduce the uncertainty at even smaller scales. I'll add some extra instruments, barometers and things which measure the weather, and push that uncertainty down to even smaller scales. And what Lorenz showed is that the effect of those extra measurements obeys a kind of law of diminishing returns: you get less and less extra predictability out of each of those measurements. Ultimately, as you say, it's a bit like adding a half, plus a quarter, plus an eighth, plus a sixteenth, plus a thirty-second, plus a sixty-fourth, and so on and so forth. You never get past one. That series converges to one. Now, normally a mathematician is very happy when a series converges, but what it means in this case is that there is an absolute limit, which we do think is around two weeks.
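As a toy version of the arithmetic (an illustration only, not Lorenz's actual 1969 calculation): if pinning down the error at each successively smaller scale buys only half as much extra forecast time as the previous scale did, the total horizon is bounded by a convergent geometric series,

\[
T_{\max} \;=\; \tau\left(\tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \tfrac{1}{16} + \cdots\right) \;=\; \tau \sum_{n=1}^{\infty} 2^{-n} \;=\; \tau,
\]

so observing ever-smaller scales pushes the horizon toward a finite ceiling \(\tau\) instead of extending it without limit.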
I'm going to qualify that, I'm going to say a few words about it, just so people don't misunderstand what I'm saying. But let's say there's a limit of around two weeks, beyond which you can't predict, no matter how accurately you know the initial state. And this is an example of something which is sometimes called a singular limit, in the sense that in the hypothetical case where you knew everything deterministically, perfectly, then sure, in principle you could predict infinitely far ahead. But even an infinitesimally small initial uncertainty would suddenly reduce your predictive horizon to two weeks. So in practice, you just can't predict beyond that, because you're always going to have at least an infinitesimally small uncertainty. Now, I just want to be clear about the qualifications. This is an average of two weeks. As we were talking about earlier, predictability varies from day to day. So some days that limit will be more than two weeks; other days it will be much less than two weeks. So this is an average.
Yeah.
And the other thing is that we're talking about predicting an instantaneous weather state. And we know, for example, that when the El Niño phenomenon occurs in the tropical Pacific Ocean, we know something about the average statistics of weather over, say, the winter or the summer. Monsoons tend to be weak when there's an El Niño event. This year we had the opposite, the so-called La Niña, which actually led to the flooding in Pakistan; the two are probably causally correlated. And we know that there is predictability out to several months ahead, several seasons ahead actually, for predicting El Niño in the ocean. And so that gives a kind of prediction, not of the instantaneous weather, but of the sort of average weather. So I just want to qualify what I'm saying: the statement that you can't predict more than two weeks is about an instantaneous weather field, on average.
An instantaneous weather field, and generally locally. I think this is really important, because there's a fundamental difference between weather and climate. Climate is what's happening, in some sense, globally over time, and weather is what's happening today, here. And so people often say, well, if you can't do the weather for more than two weeks, why should I believe what you tell me about the climate? And it's a fundamental difference, and obviously we'll get there, I hope.
You know, one of the things that what you said just reminded me of, and tell me if you agree with me on this: it seems to me the challenge pointed out by this inherent uncertainty, of small scales affecting big scales, is that in a nonlinear system like that, like Navier-Stokes, where things are happening on many scales at the same time, in order to know what the system is doing, you have to know what's happening on all scales at any one time, and that's not possible. And I can't help but make the analogy to fundamental physics. You might ask, how can we understand fundamental physics at all? One of the big surprises and wonderful things about fundamental physics, which Feynman, actually, I'm not sure fully appreciated when he won the Nobel Prize for working on it, is that it turns out, remarkably, we have discovered in a rigorous way that one doesn't have to know what's happening in particle physics on scales so small that we cannot yet measure them. In the theories we understand, the effects of what happens on very small scales actually disappear when you go to large scales, rather than the other way around. And if it weren't that way, we could never have a fundamental theory of nature. But that's why it's so hard to have a, quote, fundamental theory of fluids, of what's happening in the weather and other things: it's exactly the opposite. What happens on small scales needs to be known to understand what happens on large scales.
So it's a fundamental difference.
And that's why I argue, and it's certainly something I've worked on a lot over the years, that rather than bury your head in the sand and pretend these small scales don't exist, or that they're unimportant, you should represent them with noise, with good old-fashioned what we call stochastic noise. And you do a better job of simulating your large scales with a noisy model than with an unnoisy model. This is one example of where noise is your friend. Noise can help you.
Well, in fact, if you could see my page, you'd see the next thing I was going to talk about. What I have underlined is: adding noise makes things better. And that's the next thing you point out, and again something I hadn't personally appreciated. You have a great example, which visually we can't do here, about truncating a system, where you have only a limited number of variables, a limited amount of information, to describe a complex system. And it turns out, in certain cases, that if you add noise to that truncation, you get a better representation of what really happens than if you don't add noise, which is totally counterintuitive.
And, well, you show a lot of pictures, and I highly recommend people look at them, but there are two examples that I think are particularly important. One is in this attractor sense, where the system bounces around, unpredictably, between one state and another, but it's always in one or the other. One state or another is an attractor; that's the way I think of it. If you follow it on a computer, it bounces around. But if you add noise, then instead of all the conflicting back-and-forth of the transitions, it more or less stays in one state and then jumps to the other state. And you can see explicitly that the bimodal distribution comes out much more dramatically than if you didn't add noise.
And since you mentioned it earlier, Lawrence, one of the bits I took out of the book, just because that chapter was getting too long, was actually about Milankovitch cycles. We talked about the Ice Ages. And it's long been a puzzle because, as you say, the Milankovitch cycle is related to the change in solar forcing due to very low-frequency, 100,000-year-timescale changes in the orbit of the Earth, which are very predictable and periodic. But when you calculate the change in the solar forcing, it's just tiny, tiny, tiny. And for a long time, people couldn't understand how an Ice Age could develop from such a small change in solar forcing. But it turns out there's a theory called stochastic resonance, which is basically along the lines of what you've just talked about: adding noise to a small external forcing can really amplify it. And here the noise is coming from the internal weather dynamics of the climate system. The two lobes, what you call attractors, I just call the wings of the attractor or something: you could think of them as the ice ages and the interglacials. And we're getting this amplification from the internal noise of the weather systems interacting with this Milankovitch cycle. So that's another example of how, we believe, noise plays a crucial role in explaining the paleoclimate record of the Earth's climatic history.
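Here is a minimal numerical sketch of stochastic resonance, with invented parameters rather than anything calibrated to the real climate: a state in a double-well potential feels a weak, slow periodic push (standing in for the Milankovitch forcing) that cannot carry it over the barrier by itself; adding noise (standing in for internal weather variability) produces well-to-well jumps that tend to line up with the forcing.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, steps = 0.01, 100_000
A, period = 0.2, 200.0            # weak, slow forcing: too small to tip the well alone
omega = 2 * np.pi / period
noise = 0.35                      # stand-in for internal "weather" noise

x = np.empty(steps)
x[0] = -1.0                       # start in the left well (an "ice age", say)
for i in range(1, steps):
    # Double-well force (x - x^3) plus the weak periodic forcing.
    force = x[i - 1] - x[i - 1] ** 3 + A * np.sin(omega * i * dt)
    x[i] = x[i - 1] + force * dt + noise * np.sqrt(dt) * rng.standard_normal()

# Sign changes as a rough proxy for well-to-well transitions: with noise = 0
# there are none at all; with moderate noise they appear, roughly in step
# with the slow forcing.
print("sign changes:", int(np.sum(np.sign(x[1:]) != np.sign(x[:-1]))))
```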
Yeah, absolutely. Okay, that's great.
That never made it into the final version of the book.
But, you know, jumping between these widely different states: in some sense, the noise stabilizes the system in one case, but it also allows the rapid transitions, which is what one's talking about with the Milankovitch cycle.
The other example, which I'm going to try, and we'll see if we can talk our way through it without a picture, is another place where adding noise is relevant. I'll describe it differently than you do in your book. If you're climbing a mountain and you want to get to the top, but you can't see the top, and there are a lot of hills and valleys in between, you've got to figure out how to get to the highest point. One way is: always go up, and the minute you start to go down, stop. But of course, what that will do, and it's well known in mathematics, but also in climbing, is get you onto a local hill, which may not be the highest. Better would be, if you've got a Boy Scout troop, to send out a lot of Boy Scouts and say, don't worry so much: sometimes, if you're going down, it's okay. That's not what you ultimately want to do, but it's okay, because maybe it'll take you to another place. And you point out, of course, that this is very important in a complicated system, when you're searching on a computer for a maximum or a minimum, so as not to get stuck at local extrema, local red herrings.
Right.
So it jumps from what you might think is the best place to another place that might be worse, but that may then take you up to the high point. And it's a really important example of how to find your way in a system where you can't see where you're going, in some sense.
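The Boy Scout picture in code, on a made-up one-dimensional landscape: a greedy climber that never goes downhill gets stuck on the first local hill, while a noisy climber that occasionally accepts downhill moves (simulated annealing, to give the standard name for this trick) usually finds the higher peak.

```python
import numpy as np

rng = np.random.default_rng(3)

def height(x):
    # Invented landscape: a small local hill near x = -1, the true peak near x = 2.
    return np.exp(-(x + 1) ** 2) + 2.0 * np.exp(-0.5 * (x - 2) ** 2)

def climb(x, temperature, steps=5000, cooling=0.999):
    for _ in range(steps):
        x_new = x + 0.1 * rng.standard_normal()
        gain = height(x_new) - height(x)
        # Always accept uphill moves; accept downhill moves with a probability
        # that shrinks as the "temperature" cools toward zero.
        if gain > 0 or rng.random() < np.exp(gain / max(temperature, 1e-12)):
            x = x_new
        temperature *= cooling
    return x

greedy = climb(-1.0, temperature=0.0)   # never descends: stuck near x = -1
noisy = climb(-1.0, temperature=1.0)    # usually ends near the higher peak at x = 2
print(f"greedy ends at x = {greedy:.2f}, noisy ends at x = {noisy:.2f}")
```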
And I think, you know, personally, you may not agree, Lawrence, but personally I think that kind of explains, to some extent, the way in which we think our way around problems. When we're still at the early stages of trying to solve a problem, we'll be prepared to go back to zero and try again, try a different route, and so on. When we think we're getting pretty close to the solution, then we'll be more discerning, if you like, about the crazy ideas that we accept. And so it seems to me that that kind of example has resonance in many different areas, maybe in human cognition. And, by the way, I think maybe that's also the way things work in evolutionary terms, in evolution in general.
Yeah, certainly trying lots of different options early on, especially when you haven't got a lot of time invested in it. It's even easier then. I mean, I think that's psychological: when you haven't invested a lot of time and it's early stages, why not just try all the options?
One of the other facets of science, since you gave me one example: the other important thing that I've learned, having at one point created a master's program in physics entrepreneurship at my university, is that you can sometimes find, when you're solving a problem, that you've got to a point where you've solved it, but it's not really the problem you wanted to solve. And what you learn, if you're a scientist, is how to use that problem you've solved to solve something else. Not to give up on it: you've got to a local maximum, and it's not taking you where you want to go, but you know what? It can be useful somewhere else. That happens a lot in science, it happens a tremendous amount in business, and I think it's important. But this notion of adding noise in arithmetic, called stochastic rounding, I think is really kind of interesting.
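For readers who haven't met it, here is stochastic rounding in miniature, a sketch of the idea only, rounding to whole numbers rather than to a genuine low-precision floating-point format: you round down or up at random, with probabilities set by how close the value is to each neighbor, so the rounding errors cancel on average instead of always piling up in the same direction.

```python
import numpy as np

rng = np.random.default_rng(4)

def stochastic_round(x):
    """Round x down with probability 1 - frac(x), up with probability frac(x)."""
    lo = np.floor(x)
    return lo + (rng.random() < (x - lo))

# Add 0.1 ten thousand times, rounding each contribution to an integer.
# Deterministic round-to-nearest turns every 0.1 into 0, so the sum stays 0;
# stochastic rounding returns 1 about a tenth of the time, so the total
# lands near the true value of 1000.
total = sum(stochastic_round(0.1) for _ in range(10_000))
print("stochastically rounded sum:", int(total))
```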
Now, I want to go on for about another half hour or so, I think. And there's, not a digression, but, when one is talking about uncertainty and noise, one cannot help but talk, at some basic level, about quantum mechanics. The fact is, people know that quantum mechanics has uncertainty. They've heard of the Heisenberg uncertainty principle. You can almost define quantum mechanics by recognizing that there are certain combinations of quantities which carry an intrinsic uncertainty, such that you cannot measure both members of the combination: you can measure position, but then you won't know the momentum, or the velocity; or you can measure velocity, but then you won't know the position exactly. And that is a central premise of quantum mechanics.
And the big question, which interests you, and not many others in the physics community, though there are a few deep people worrying about this question, is whether that's really true. Whether, as Einstein thought, that uncertainty is what you call, and these terms must come from the philosophers, epistemic: meaning it's just a property of our not knowing. We don't have a good enough measuring device, or we don't know all the stuff that's happening, and if we get better at it, that uncertainty will go down. Or whether it's, quote, ontological, which means it's inherent to nature. And the difference between Einstein and Bohr was Bohr's argument, and the rest of the physics community's, that it was intrinsic to nature, ontological as you call it; and Einstein saying, no, there's really some underlying theory which, if you knew things well enough, would reproduce this seemingly crazy behavior of quantum mechanics, all the crazy spooky action at a distance, what some people call non-locality, which some people like to think of; I don't think you need to. Anyway, Einstein constantly presented these great Gedanken experiments to point out potential problems, how illogical or irrational quantum mechanics would be, if you took the notion that that uncertainty was ontological, inherent to nature, to its logical extremes. And what's happened is, for each of his crazy experiments, where he said, look, this would be insane, because this is what would be required: you do it, and that's what happens. The insanity happens.
And the best, let me give an example. I'm just going to throw this out, and then I'm going to give you a chance to chat, because I know it's in the early part of the book, and then in the later part of the book, which we won't have time to get to, so let's make it clear: you think it's ultimately epistemic in some sense. You think not only that God does not play dice, in that sense, but that the uncertainty is also related to the nature of our ability to understand, ultimately, the quantum mechanics associated with general relativity, that the two of them are tied together. That's a premise you pursue later in the book. But let's just get back to good old quantum mechanics. The experiment that's often used to show how fundamentally different quantum mechanics is from any kind of classical theory has to do with the spin of a particle. Of course, the standard example is the double-slit experiment, from the true polymath Thomas Young, a British physician and physicist and many other things. The typical experiment in quantum mechanics is: if you send light through two slits, you'll see what's called an interference pattern, because light is behaving wave-like. But with detectors of light, you see that light is made of particles, and if you try to measure each particle of light, called a photon, as it goes through, the pattern you see is very different. And the really weird thing is that if you send one particle at a time and don't measure it, you'll get the same pattern as you would for a wave. So somehow the particle appears to be going through both slits at the same time. That's one weirdness of quantum mechanics.
But the other has to do with another experiment, related to the spin of, say, a single particle. An electron acts like it's spinning. It's not really spinning, but it has angular momentum as if it were. And the traditional notion of quantum mechanics tells us that the particle is spinning, in some sense, in all directions at the same time. When you measure it, there's a probability that it'll be spinning in the direction you measure, and, all other things being equal, the probability of any direction is the same. So if you measure whether the spin is pointing up or down, you'll measure it pointing up 50% of the time and down 50% of the time. That's fine. That makes sense. You do the experiment; you measure it pointing up 50% of the time, down 50% of the time.
I'm trying, by the way, just so listeners know, to set up the diagram that you talk about in your book. I'm not trying to take over this discussion. So, 50% of the time it goes up, 50% of the time it goes down. But the weird part of quantum mechanics comes when you subsequently measure the particle along a different axis. Say the up-and-down axis is the z direction, or zed direction if you're in Europe or Canada, and you now want to look at the y direction: you have a detector that asks, what's the component of the spin in the y direction? Having measured the particle and shown it was spinning up along z, you now measure it along y, and you find that 50% of the time the spin points in one direction along the y axis, and 50% of the time in the other. Okay, that's already weird. But then, having done that, you go back and measure whether the spin points up or down along z again, and now you find that what you thought you'd already measured, a particle with spin up, now, on a subsequent measurement, points up 50% of the time and down 50% of the time. Apparently telling you that you couldn't think of it as having had some definite spin along the z direction during the times when you weren't measuring it, because if it had, you would get spin up 100% of the time, given that you'd shown it had spin up at the beginning of the experiment.
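The arithmetic behind that sequence fits in a few lines of textbook quantum mechanics (standard material, nothing specific to the book): prepare spin-up along z, measure along y, and the z information is erased, so a repeated z measurement is back to 50/50.

```python
import numpy as np

# Spin-up along z, and one of the two eigenstates of spin along y.
up_z = np.array([1, 0], dtype=complex)
up_y = np.array([1, 1j], dtype=complex) / np.sqrt(2)

def prob(state, outcome):
    """Born rule: probability that `state` collapses onto `outcome`."""
    return abs(np.vdot(outcome, state)) ** 2

state = up_z                                            # prepared spin-up along z
print("P(spin up along y):", prob(state, up_y))         # 0.5

state = up_y                                            # suppose the y measurement gave "up"
print("P(spin up along z) now:", prob(state, up_z))     # 0.5 again, not 1
```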
This intrinsic uncertainty is taken by most physicists to be a prototypical example of the intrinsic Heisenberg uncertainty, the intrinsic, what you call ontological, uncertainty of nature: the particle literally, as Feynman would have said, is doing many things at the same time, whether you like it or not. And the epistemic argument is: no, it's doing one thing when you're not measuring it, and we just don't know what that one thing is. And there are lots of experiments, and the Nobel Prize was given this year for a whole generation of experiments, that continue to test these weird notions through what's called Bell's inequality: experiments where you can show that if, classically, the particle was doing something specific, you get one result, and if it's not doing something specific, you get another result.
I should say, I don't know, did you know Sidney Coleman?
Yeah.
He would argue, and I would too, that Bell's inequality is less convincing than another inequality, which is really dramatic, where you get a plus one in the case of classical physics and a minus one in the case of quantum mechanics, and you always get minus one. But in any case, these are the sets of experiments that tell us, I would say, and I think the majority of us accept, that nature is intrinsically quantum mechanical: unlike classically, particles are not doing one thing at any time. They're doing many things at the same time. Okay.
Now, you would argue, and let me turn it over to you, you'd say, okay, well, that's true in an analysis where you take things at face value. But let's not take things at face value, is what you would say.
Okay.
Well, yeah. What I would say is, it's all very well arguing like this, I mean, you've talked about it in a heuristic way, and we have to do that, because we're talking to a broad audience. But if we were to get serious about this, I would ask you to write down your assumptions very, very carefully. And I would tell you, because I think it's true whether it's Bell's inequality or the plus-one-minus-one argument you're talking about, there are a whole load of these so-called no-go theorems, and at some stage you will make an assumption which you may not even realize you're making, which is about a world that didn't happen, but that might have happened if you had done an experiment differently.
Yes.
And I use the phrase counterfactual, which historians use and philosophers use. And it raises a question, and we don't have time to go into it fully, because we'll run over very quickly, but let me just take that, if you like, as a fact.
Now, what I would say to you is that there's an interesting thing about these models of chaos with their fractal attractors, which is that fractals are not like a big solid volume. We're talking about things with gaps in them. And the question is: could it be that these counterfactual worlds lie in the gaps of the fractal geometry?
Yeah.
And I would claim to have a model, a possible model. I'm not claiming it's the truth or anything, but I think it's a plausible model of quantum mechanics, in which these counterfactuals do indeed fall in the gaps. And what that means is that your theory says: this is not a permissible hypothetical experiment. So this is a kind of loophole which is almost never discussed in the standard literature. It's simply assumed that you have a theory which admits counterfactual worlds without any restriction at all. And I think that is a mistake; that is where our interpretation of these various no-go theorems is incomplete. And if I'm right, then Einstein may well be vindicated after all: the uncertainty really is not inherent, God does not play dice, and it's our lack of knowledge, if you like, that creates the uncertainty.
Now, I realize this is super speculative and so on and so forth, but you have to remember, I started off in quantum gravity, and I come very much from the view that general relativity is a remarkably beautiful nonlinear causal theory, and that the problem in trying to marry the two theories may have less to do with general relativity and more to do with quantum mechanics. So that's a bias, I admit it, but that's where I'm coming from.
Yeah, exactly. And I'd always frame that bias as: if the only tool you have is a hammer, everything looks like a nail. And I think it's really important. I mean, I can see the logic of the analysis in your book. It's because you've spent so much time with these fractal geometries, and it is true that while you can see these surfaces, there are some motions that are simply not allowed to happen.
Right.
But let me make it clear with an example; you use a similar one in your book. In some sense, it's like saying that the reality we experience, which I'll call quantum reality, is such that the present measurement of the spin of this electron is somehow tied to whether your grandmother married your grandfather. You can ask, well, what would have happened if she hadn't? But that's not allowed. The only reality that's allowed is the correlated fact that she married your grandfather and you measured the spin up.
Yeah. Sorry, the point is that the spin is intimately related to the measurement that you made on that spin. And the measurement that you made is ultimately linked to what your grandfather and grandmother did or didn't do.
And that, by the way, is what's called a contextual theory. And that's well known: anything which explains quantum mechanics has to be what's called contextual. So, in some sense, the properties of the particles have an inherent relationship with the properties of the measurement. And, well, anyway, look.
Yeah, okay.
Yeah. So I think that to do more would require probably more complexity. On that fundamental difference, maybe I've already said this before: I was influenced by Sidney, and I talk about it in my new book. I think part of the problem is that people want the world to be classical, and they want to talk about interpretations of quantum mechanics, and that's the wrong thing to do. The world is quantum mechanical, and we are always driven to these nutty pictures because we insist on thinking of a world we can picture classically. The right thing to do is to think about the interpretation of classical mechanics.
And I would just say that these fractal geometries are not really classical. I wouldn't describe this as classical.
You do say that.
I know, I go into this in a little bit of detail. They incorporate things like non-computable determinism and the mathematics of p-adic numbers and all this sort of stuff, which is not really part of normal classical physics. So this isn't a classical view I'm trying to put over. But I am trying to say that Einstein was maybe not wrong to be concerned, both about spooky action at a distance and, you know, dice-playing deities.
Yeah, yeah.
And well, I won't say it, well, I will say it: sometimes the question is whether a solution is worse than the problem. But I will argue, and this is something I would love to talk to you about afterwards, and maybe we'll do another program on it at some point: you give a cosmological argument, and I've already debated some of the cosmology with Roger Penrose, about how our expanding universe may be an illusion in some sense. I think there are good cosmological arguments, which aren't taken into account there, that I hope could convince you that that's not the case. But anyway, that'll be you and I talking cosmology at some point. Now I want to get back to the heart of it. I want to spend the last twenty minutes or so talking about getting from weather to climate.
I didn't realize the first weather forecast was in 1861. Is that right? The first real effort to make a weather prediction, by Robert FitzRoy.
Yeah, Robert FitzRoy.
Tell me about him.
By the way, I had an email out of the blue from a great-great-great-great-great-great-grandson of Robert FitzRoy who'd just read my book, which is rather nice.
Yeah, that is good.
He was the captain of the Beagle, which took Darwin around the world. He came back, he was obviously interested in weather, and he set up the first British Met Office. His contribution, really, was to systematize the observations of weather around the coast and to feed them in, using the new telegraph technology, to a central location, where all these observations could be synthesized together to form a kind of weather map: the pressure, the isobars, and everything. And then FitzRoy would use his rules of thumb, he obviously didn't have a computer model, to try to work out how the weather would evolve. But the point about FitzRoy is that he showed that one crucial part of a weather forecast is the observations. You can't do anything unless you have the observations of the weather today to get the weather tomorrow.
Yeah, absolutely. That's a key part. But then the observations alone need to be supplemented, not just by theory, but by the recognition that the system is, as you point out, nonlinear. In a sentence where you first talk about that, you say: what is it, after all, that distinguishes science from pseudoscience? Surely one key feature is an ability to handle uncertainty, to estimate error bars. Weather forecasting, inside the limit of deterministic predictability, had no reliable means of estimating these error bars. And I think the central point you want to make is that one had to go beyond a deterministic model, to a model that explicitly accounted for uncertainties in an ensemble way, to produce the weather forecasting we do today, which you played a key role in helping to develop. So why don't you give us a little bit of that story?
Well, yeah, we've talked about it a little bit, but basically it seemed obvious to me that we needed to do something like this. The real problem was that there were competing uses for the computers, because this was computationally intensive: running 50 forecasts is computationally intensive compared to running one. We were helped, I have to say my case was helped, a little bit by the fact that the emerging supercomputer technology was massively parallel, where you have many thousands of individual processors. And this is a problem that's sometimes called embarrassingly parallel, because the forecasts don't have to communicate with each other until they get to the end. So technology helped me a little bit, I have to say, because of that.
But yeah, the prevailing view, from the sort of fathers of numerical weather prediction, was that as long as you're out to less than about two weeks, the problem is deterministic. And that Lorenz model figure that you mentioned, figure 10 or whatever it was, just debunks that immediately: you'll have situations where even after a day you can lose predictability. And the famous Michael Fish storm kind of brought it home. Where I hadn't done a very good job of convincing my colleagues about this, nature kind of spoke, in 1987. And that, I think, convinced everyone we had to go down a different route. And nowadays, of course, it's everywhere: every weather forecast center around the world runs ensemble forecasts.
But the thing that's really exciting me at the moment is the way this is changing how disaster relief agencies and humanitarian agencies act. In the old days, and the old days only means a few years ago, they would only go into a region hit by a hurricane or a tropical cyclone after the event. And then it was often difficult to get medicine and food and so on to the places that had been stricken, because the roads would be down, the communications would be down, and so on. So how much better would it be to go in ahead of time? The problem is that these agencies are not rich. They don't have money coming out of their ears. So if they go in and nothing happens, if it's a false alarm, they've potentially wasted a lot of money. So they need an objective criterion for determining when to go in ahead of time, when to take what's called anticipatory action, and when not. And these ensemble-based probabilities give them a precise, objective criterion now. They do their cost-benefit analysis, and they say, okay, if the probability exceeds 75% or whatever, we know from that cost-benefit analysis that it's worth going in ahead of time. We can take the odd false alarm, because we know that, on balance, we're going to come out ahead. So they have an objective criterion, these so-called probability triggers, for determining when they go in. And this is really saving lives, helping people who otherwise might perish, or would certainly be leading very uncomfortable lives, potentially for days or weeks, waiting for food and water and medicine. So that's a fantastically interesting example of how this has completely revolutionized the way in which people use weather forecasts.
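The logic behind those probability triggers is essentially the classic cost-loss rule. A schematic sketch with invented numbers, not any agency's actual protocol: if acting early costs C and failing to act before an event that does occur costs L, then acting pays off on average whenever the forecast probability p exceeds C/L.

```python
def should_act(member_outcomes, cost, loss):
    """Act early when the ensemble-estimated probability exceeds cost/loss."""
    p = sum(member_outcomes) / len(member_outcomes)  # fraction of members with the event
    return p, p > cost / loss

# Invented example: 38 of 50 ensemble members produce the cyclone over the
# region; acting early costs 1 unit, being caught unprepared costs 4 units.
members = [1] * 38 + [0] * 12
p, act = should_act(members, cost=1.0, loss=4.0)
print(f"forecast probability {p:.0%}; take anticipatory action: {act}")
```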
Yes, and I'm glad you got there, because I wanted to get there. You talk about it later in the book, when we talk about risk, but these probabilities, which I want to come back to and parse a little more carefully in a moment, are really being used. And it's an example of, well, I always like to tout the fact that none of the physics I've ever done has any practical significance whatsoever. It's an example of how that bifurcation, that butterfly effect in your own life, changed you from a physicist who would have talked about fascinatingly interesting ideas, perhaps, to some people, into a physicist whose work can ultimately save people's lives, which I just think must be incredibly satisfying.
And it all came out of that figure 10, you know, of the Lorenz model: seeing that Lorenz model and realizing this can happen in the real world just as much as it happens in the Lorenz model.
And it's interesting, by the way, that these ensembles in some sense do exactly what you think the quantum world doesn't do. What the ensembles do is say: let's run 50 different realities and see in how many of those realities something happens. In detail it's more than that, but in principle, it's nothing more than that. And it's really kind of amazing, because one of the more impressive figures is one where you actually look at that very event of October 16, 1987, and show how, taking the data available and running ensemble forecasts, many different runs over a short, 12-hour period, shows you that, given those initial conditions and their nonlinear evolution, there are a certain number of final configurations in which something really dramatic happens, something like an incredible storm, the worst in 300 years.
And that's the sense in which one means probability. So I want to go into this, because although you have great confidence that farmers in poor countries understand probability, and you talk about their ability to act on it, it's, I think, relatively well known, and I know Steve Pinker has written a lot about this, that people don't handle probabilities intrinsically very well, especially when the probabilities are small and hard to understand intuitively. So the probability of rain, or of a cyclone, being 10% doesn't mean that there's a cyclone in 10% of the places, in 10% of the area that you're looking at that day. It means, rather, that if you have 100 worlds, all of which have initial conditions within your ability to know them...
They're all equally plausible.
They're all equally plausible, and ten of them will have a cyclone.
Right.
And that's the way to understand this sort of frequentist idea of probability, I guess.
And then you have to ask yourself, and I've now said it a few times in discussions: one of my favorite lines, when talking about climate change with people who don't want to do anything, comes from the Dirty Harry movies. I don't know if you ever saw the Clint Eastwood movie Dirty Harry.
Yeah, yeah, yeah.
Where he points the gun and says, the gun may be empty, I don't know. Are you feeling lucky, punk? And ultimately, we've got to ask ourselves that question if it's something catastrophic: are you feeling lucky enough that, even if there's only a 10% probability, you're willing to say, ah, never mind? And the point is that weather forecasting has proceeded from saying this will happen, or this won't happen, to stating a probability, where the probability means something, and then you yourself can make the decision of whether that probability is worth acting on. And there's another great figure here, where you show the tracks of different cyclones, different hurricanes, again all intrinsically nonlinear and turbulent and everything else. On some of them, you can look at these ensembles and they all more or less go in the same direction; on others, they diverge quickly. And it's important to know that that's possible, and when it's possible, if you're talking about people's lives.
And it's a wonderful discussion. But now I want to go from there, from weather to climate, which is the next area you went into. And I really want to lead into our discussion of risk analysis at the end, which means we're going to skip pandemics and economics, but that's okay, because I think when you read your book, it becomes clear that while one might want to apply these things to economics, one could only do so if economics were a science, and it doesn't seem that economics is a science.
Well, economics is moving more slowly than I had thought, actually, when I started writing the chapter. It's maybe moving in the right direction.
Yeah, I'm more pessimistic than you. But okay, that's why it's okay to ignore them. Climate, though, we can't and shouldn't ignore, and it is a real issue. And it is an area where, and I wrote a whole book upon it, it's science. Climate science is not fundamentally different from other science. That's the first thing. There are aspects of it that make it different from some other areas, but some of it you can address exactly like a ball rolling downhill, and some of it you have to use more sophisticated techniques for. In my book, The Physics of Climate Change, I tried to focus on those things you could understand like a ball rolling down a hill, to show that these are well-tested, 200-year-old ideas.
But one of the important aspects, one of the things that you've worked on and talk about at great length, and what is now de rigueur in climate modeling, is ensemble forecasting. What is done by the IPCC and all the major groups in the world is to run ensembles, because a complicated system like the climate has billions of parameters, and you can't possibly deterministically know the answer, or even estimate what the probabilities of the various results are, without actually running the models and finding the answers. So you run many ensembles. And one of the things I want to pick up on, what I think is the most important point you raise, should be sobering to people.
And that is that in nonlinear systems, where uncertainties feed into outcomes like the future temperature if you double the carbon dioxide abundance on Earth, one can have a model with a most likely answer, which may be two or three degrees, and everyone will agree that's the most likely possibility. But in fact, if you look at all the models, in nonlinear systems there's always a spread toward more extreme numbers. And therefore, if you take the mean value, which is different from the most likely value, the mean being the weighted average of all of those outcomes, that value is generically higher in almost all nonlinear systems. And that's a very important point. So when one says the most likely effect of doubling carbon dioxide may be two degrees Celsius, or two and a half degrees Celsius, that may be a true statement; but what you find is that the mean value, across a hundred different universes with that climate, is actually three or four degrees. And that's very sobering. So, you can elaborate on that, but that was, I think, an incredibly important point. I learned it from you, so I should say that.
Well, what you're describing is called a skewed distribution. And as you say, it's very characteristic of nonlinear systems: the distribution drops off quickly on one side and extends into a long fat tail on the other. And of course, the problem compounding this is that the impacts of climate change on society and on ecosystems and so on increase very nonlinearly with temperature. So even though, out in the tail, the probabilities are getting smaller and smaller, the impacts are getting larger and larger. And then you're faced with the awkward problem of multiplying a small number by a big number, and we're not quite sure whether that will end up being small or big.
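A quick numerical illustration of that gap, using an invented right-skewed distribution rather than any actual climate-sensitivity ensemble: the fat tail drags the mean well above the most likely value.

```python
import numpy as np

rng = np.random.default_rng(5)
# Invented stand-in for "warming per CO2 doubling" across many model worlds.
samples = rng.lognormal(mean=np.log(2.5), sigma=0.4, size=100_000)

counts, edges = np.histogram(samples, bins=200)
mode = 0.5 * (edges[counts.argmax()] + edges[counts.argmax() + 1])
print(f"most likely ~ {mode:.1f} C, mean = {samples.mean():.1f} C")
```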
And by the way, I made exactly the same point in the chapter on COVID.
Yeah.
You see, the COVID models also show these tails of high hospitalizations and high deaths. So if you're trying to formulate policy, it's actually misleading to be guided solely by the most likely prediction of hospitalization and death. It's exactly the same problem as with climate change. And one way, as you say, of quantifying that is to compute the difference between the mean and the most likely. But, you know, the truth of the matter is, what this is really all about
is: what is it worth to avoid getting anywhere in that tail, where things get catastrophic? I use the example of four degrees, because I think even a really dyed-in-the-wool climate denier would have to admit that if we ever got to a four-degree-warmer world, it would be pretty catastrophic. I mean, I don't think humanity would go extinct, but we would be leading dramatically different lives from the ones we lead now.
I think the thing you point out, which I knew of, but it's an important point, is that if the worldwide global average temperature increased by four degrees, there would be significant places around the world that would simply be impossible to live in. Namely, this dew point effect: the fact that at a certain humidity and a certain temperature, human bodies cannot keep themselves cool, and it's physically impossible to live. That's just a property of biology and physics, and you can't deny it. And there would be many, many places, and many times throughout the year, where there would be uninhabitable places on Earth. I mean, there are already places where you die easily if you go there, you know, the various deserts; but these would be places where, even in principle, there's nothing you can do to survive if you're outside.
And that's sobering enough. But then there's this point, just like in all nonlinear systems, that there are these other, even more extreme changes, in fact, not four degrees, but more extreme. And in fact, I just finished a podcast, and I happen to know this from having been in Greenland: there are these ice cores that show that around 10 to 12,000 years ago you see these incredible changes of 10 to 18 degrees, with variations back and forth as the system becomes unstable.
And those kinds of things, you know: with four degrees you might argue about how disastrous it would be, but 18 degrees is disastrous by any measure.
And other nonlinear effects, such as the melting of the Greenland ice sheet, which would change sea levels by 21 feet or so, all of these things are possible. And one can, maybe not for the Greenland ice sheet melting, but one can at some level talk about probabilities and try to estimate the probabilities that these things might happen. And then you've got to ask yourself whether you're feeling lucky, punk. And you give a story which is sobering, and I guess it's a page long.
So let me read it because I think it really gives a feeling for the final thing I want to talk about,
which is this risk analysis. Are we going to do nothing? And you say, okay, well suppose international
climate change talks fail and carbon emissions keep rising. Attempts to cut emissions in the developed
world are half-hearted and developing world countries say they won't seriously contemplate cutting
emissions until the standards of living of people in their countries have reached levels
comparable to those of the richer countries. Temperatures keep rising, but none more so than in
the western half of the United States, where wildfires become a yearly fact of life. Temperatures
exceeding 50 degrees Celsius are commonplace around the world, including high-latitude regions
where such temperatures were previously unimaginable.
Lake Mead has dried up,
and the Hoover Dam no longer produces electricity
for large parts of the year.
Wheat yields plummet, not just one year in ten, but pretty much every year.
European countries are suffering similar problems, with flooding events which also destroy crops.
A joint grouping of European and United States ministers
comes to a conclusion.
Something has to be done.
They resort to Plan B.
In this plan, military aircraft from these countries
fly continuously around the clock,
spraying sulfuric acid vapor into the lower stratosphere.
From this, sulfate aerosols are formed which reflect sunlight back to earth.
The atmosphere now is a haze of aerosols, which it is hoped will offset the effects of global warming and cool the atmosphere down.
And as I learned from Elizabeth Kolbert, it will also mean the sky won't be blue. It'll be white.
Right.
They justify this action on the basis that it will help humanity as a whole.
However, the impact of such geoengineering on climate isn't as simple as it might at first seem.
As you discussed in an earlier chapter, the global warming problem is caused because we are trapping
electromagnetic radiation in the infrared part of the spectrum.
Sulfate aerosols increase the reflection of sunlight in the visible parts of the spectrum.
The one is not precisely offsetting the other.
What could be the consequences of this?
This is an area where estimates from current generation models are unreliable.
So back to the story, as you say.
After several years of spraying, both Russia and India find that the atmospheric circulation patterns over their countries have changed in such a way that rain-bearing weather systems no longer track across the main agricultural regions. Crops fail. The Indians and Russians blame the failures on the U.S.
And then a big international study is done.
I'm now going to just sort of paraphrase this.
But it's inconclusive because the world hasn't yet spent enough money on the resources
to say what the answer is.
And eventually, Russia and India say, well, we'll shoot down planes that continue to do this. Because the thing about aerosol pumping is that you continually have to do it, since the aerosols don't last in the atmosphere.
First plane shot down, sanctions, second plane shot down,
airfields are bombed, third one shot down.
Eventually you end up with nuclear war.
And so it's a dystopic future.
It's the butterfly effect in another sense,
but it is pointing out that while one can talk about
with some probability temperatures,
talking about what the impact will be on humans
is a very different story.
And if you understand that the impacts of climate variation,
especially differential climate variation across the globe,
can produce huge impacts on human civilizations
and on alterations of the geopolitical climate,
with the possibility of destabilizing the world in a disastrous way,
you've got to ask yourself,
are you feeling lucky, punk?
And then you have a chapter, the next chapter, on this rational aspect of decision-making, risk analysis, which is probably, I don't know if you'd agree with me here, one of the most non-intuitive aspects of being a human, especially when it comes to small probabilities.
Let me give you an example, which sounds callous.
I don't know if I've used it before,
but I often think of this.
So a bomber was on a plane,
and he wanted to light his shoes on fire and make a bomb.
And now hundreds of millions of people every day have to take their shoes off when they go through airline security.
And you have to ask, so you could ask yourself the following question: how many planes were bombed before this was done, or might be bombed if this weren't done? And you say, well, there was the, the Lockerbie...
The Lockerbie bombing.
And of course there were the other terrorist tragedies like 9-11.
And you ask, okay, well, ultimately how many people were killed?
And you come up with a number of maybe several thousand, less than 10,000.
And then you say, how many people are flying every day?
What's the probability of this happening?
You might ask yourself: is it really worth having changed the economics and the quality of life for people who are flying, or is it okay to let a few thousand people die every year?
You know, there are 100 million people flying every few days around the world.
Now, that's a discussion you could have,
and you might say even losing one person is too much,
but the notion, the fear that this is going to happen is often misplaced.
The fear that you're going to be killed by a terrorist if you live in the United States
is so small, as I once pointed out,
it's more likely that you'll have a refrigerator fall on you.
Right. And so understanding risk when it comes to these rare extreme events is something that's not intuitive.
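The arithmetic behind that comparison is a one-liner; a minimal sketch with deliberately round, invented figures, just to show the order of magnitude:

    # All figures below are rough, invented round numbers for illustration only.
    deaths_from_aviation_terrorism = 5_000      # over roughly three decades
    years = 30
    passenger_flights_per_year = 4_000_000_000  # order of magnitude, pre-pandemic

    per_flight_risk = deaths_from_aviation_terrorism / (years * passenger_flights_per_year)
    print(f"risk per flight: about 1 in {1 / per_flight_risk:,.0f}")  # tens of millions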
Let me make a point here because I kind of deliberately kept the chapter on climate change,
you know, the science, let's call it the science of climate change.
I kept that separate from the chapter on risk.
And I did that for a very important reason, which I don't think is sometimes fully appreciated,
that, you know, if you talk to me about the science of climate change, I will try to give you the best information I can in a kind of neutral way. I'll try and tell you what the science says. But if you then ask me the question, does that mean we should cut our emissions? I'm kind of loath to give you an answer on that. I will say, well, okay, if you allow me to put my man-in-the-street, you know, man-in-the-pub hat on, where I don't particularly have any expertise,
then I can tell you what I think.
But being a scientist doesn't actually, I don't think, give me any preferential status in deciding that.
So it's like, you know, when the weather forecaster says the probability of a storm is 20%. He or she is not going to tell you that that means you shouldn't go out tomorrow, or that you shouldn't do X.
That's your decision.
And, you know, so it sort of annoys me a little bit when people say, oh, listen to the scientists, or follow the science.
Scientists are not going to, well, they shouldn't be telling you whether the science implies that you should cut emissions. I mean, my good friend and colleague Sabine Hossenfelder put it rather graphically. She said: science does not tell you not to pee on high-voltage electricity cables. It just tells you that urine is a good conductor of electricity.
Yeah, no, it's great, it's perfect.
And I think, and there's another reason psychologically that separating them is very important.
It sounds self-promoting, and maybe it is. But the reason my book The Physics of Climate Change does not talk about policy...
Yeah.
...is that I've also discovered that people who want to deny climate change are in some cases doing so because they feel it immediately leads to an infringement upon their rights.
What we need to do is, scientists need to say: here's what the science is.
And then it's up to the public, presumably an informed public, and presumably informed legislators, neither of which is necessarily true, to make a rational decision.
But the scientists can just say, indeed, that, you know, if you pee on the high-voltage wires...
And as I say in the chapter on climate change, one of the biggest uncertainties in climate change is the very pedestrian problem of how clouds are going to respond to these increases in carbon dioxide.
And depending on how the clouds respond,
they could damp the effects of climate change,
in other words, making it much less important
than we may have thought,
or they could amplify it.
And we don't really have a great deal of confidence
in actually even knowing the sign of that feedback process.
So, you know, it seems to me...
Although, let me interrupt for a second.
I think you do suggest that more models predict
a positive sign than a negative sign.
That's right.
Yeah.
Yeah, that's right, the probability is unfortunately pointing in the wrong direction.
But nevertheless, there's a lot of uncertainty.
I would say that is a very uncertain type of calculation.
Yeah.
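Why the sign matters is easiest to see in the standard linear feedback relation, where the warming response is the no-feedback response divided by one minus the feedback factor; a minimal sketch with invented values:

    # dT = dT0 / (1 - f): dT0 is the no-feedback warming, f the feedback factor.
    dT0 = 1.2  # deg C, roughly the no-feedback response to doubled CO2

    for f in (-0.2, 0.0, 0.3, 0.5):
        print(f"feedback factor {f:+.1f}: warming = {dT0 / (1 - f):.2f} C")

A negative f damps the response below 1.2 C; a positive f amplifies it, and the closer f gets to 1, the faster the amplification grows.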
And, you know, it seems to me, I would have thought it's in the interests of people who may kind of viscerally hate the idea of climate change because of an infringement on liberty or whatever. You know, it's in their interest to fund the research on climate prediction and climate models, to improve our ability to represent clouds, because, who knows, it may turn out they were lucky, if you like.
But unless we do the science, we won't know.
And then we have to make our judgments based on quite big estimates of uncertainty. We could reduce those estimates of uncertainty if, I mean, I don't want to get on a hobby horse now, but, you know, the funding of climate modeling is pretty woeful, actually, compared with many other things like the Large Hadron Collider or the James Webb Space Telescope or the Square Kilometre Array.
And if we put in that sort of money into climate modeling,
we could have a much clearer picture about the future.
And that doesn't seem to me like a particularly big deal, you know,
when we fund these other big science projects.
I think that's a wonderful moral with which to almost end this, and it is the fundamental point: if you don't do the science, you don't know.
And what's great about the book is that it talks about how the science has developed in a way that can potentially allow us to know.
And then ultimately, there are human factors, and that's where it comes into risk analysis.
You present it as mathematical, but it's also not purely mathematical. Namely, you know, at some level, you could say, I'll be rational. And as you point out, the simplest way to consider whether you're going to do something is the probability that something's going to happen times the impact that it's going to have, and that impact might be financial or something else. And then ask yourself, is it worth it? And that's what the punk is doing when he's looking at that gun: okay, there's a small probability, one in six, but the impact is pretty big, the end of your life.
And at some level that's a societal thing, because it is heuristic, even though, you know, one can talk about what the impacts are. For example, you point out, we can talk about the value of human life, or the impact on economies. It's true that the economic impact may be small, but that's because the third world already has a very small share of the world economy. But is it fair to say that destroying their livelihood completely isn't important because it's not going to have a huge global impact? Well, it's really important to them. And the P times L, the P times L is very big for them.
And so ultimately the science can give the P.
It can also give you some idea of the L, the level of devastation.
But ultimately, you know, we have to use both.
And only an informed analysis can allow you to make that decision.
And your point is that, you know, letting economists loose, where their only metric is GDP,
Yeah.
It may not be the right way to measure L.
I think that's the point.
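A minimal sketch of that point: the same probability P carries a very different loss L depending on whose ledger you use; all figures are invented:

    # Same global probability of a damaging climate outcome; the loss is
    # negligible in world-GDP terms but ruinous locally (invented figures).
    P = 0.05
    loss = 50e9        # dollars: essentially the whole economy of a poor region
    gdp_local = 60e9
    gdp_world = 100e12

    print("expected loss / world GDP:", P * loss / gdp_world)  # trivially small
    print("expected loss / local GDP:", P * loss / gdp_local)  # existential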
And one of the simplest great examples I can think of is the one you brought up earlier, which I said we'd come back to at the end, and we are now at the end, having gone on quite a while, but I think a worthwhile while: looking at asteroid impacts.
Here's a great example.
So the probability that an asteroid is going to impact the earth at a level that would be civilization-destroying is incredibly small: once every 100 million years or so on average.
Okay.
So, you know, you can go to sleep, don't worry. It's not something you should worry about on a daily basis. It's once in 100 million years.
But if it happens, you know, humanity and everything we've ever built, including Monte Carlo and everything else, goes away. And many people would consider that to be devastating. And so you might say, even though it's a very small probability, the impact is huge. And then you ask, is it worth doing something about? And you ask, what does it cost? And it turns out it probably costs less to defend us against an asteroid, I would argue, than these wild proposals about moving people to Mars so that we have another civilization living there. I think it's a lot cheaper to defend ourselves.
But that's the kind of dialogue you can have.
That's the kind of thing that science can give us.
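The asteroid case is the cleanest worked example of P times L; a minimal sketch, with the valuation figures invented, since any finite value placed on civilization is contentious:

    # A civilization-ending impact roughly once per 100 million years on average.
    annual_probability = 1 / 100_000_000

    value_at_risk = 1e18        # dollars, purely illustrative
    annual_defense_cost = 1e9   # rough scale of detection/deflection programs

    expected_annual_loss = annual_probability * value_at_risk
    print("expected annual loss ($):", expected_annual_loss)        # 1e10
    print("exceeds defense cost?", expected_annual_loss > annual_defense_cost)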
And in two areas that you might have thought were so complicated that nothing could be done about them, weather and climate, you talk about how science does allow us to deal with them, and why the primacy of doubt is important.
And I hope we've given a sense of the many different areas, although we've left a few out, and done justice to your fascinating book.
So it's been great talking with you.
As you say, we haven't covered everything, but maybe at a future date we can cover the bits that we didn't get through.
But no, I think it's been fascinating.
And I think we're more or less in agreement, even over quantum.
Yeah, yeah.
Well, I'd say we didn't settle that, you know; quantum and cosmology we'll leave for another time.
I don't think we are, but someday I'd love to have an argument to convince you I'm right,
but I don't think I will.
But anyway, on the important stuff, namely, well, from a fundamental understanding, obviously it's important to know the fundamental way nature works. But on the stuff that is going to impact people's daily lives, I think we really are in complete agreement.
And it's a really interesting way to try and understand the science of uncertainty.
And so, yeah, thanks for giving us that, which, as I began this podcast by saying, is to me the least understood and least appreciated aspect of science. And I appreciate it. So thank you.
Thank you. Thanks very much.
I hope you enjoyed today's conversation. This podcast is produced by the Origins Project
Foundation, a non-profit organization whose goal is to enrich your perspective of your place in
the cosmos by providing access to the people who are driving the future of society in the
21st century, and to the ideas that are changing our understanding of ourselves and our world.
To learn more, please visit OriginsProjectFoundation.org.
