The Joe Walker Podcast - Ergodicity — Ole Peters
Episode Date: August 23, 2021Ole Peters is a physicist and a Fellow at the London Mathematical Laboratory. Show notes available at: josephnoelwalker.com/136-ergodicitySee omnystudio.com/listener for privacy information....
Transcript
Ladies and gentlemen, this episode is sponsored by GiveWell.
Imagine if every year you saved a person's life.
One year you rescued someone from a burning building,
the next year you saved someone from drowning,
the year after that you're out for dinner with your partner,
you notice someone having a heart attack,
you perform CPR and save their life.
Think about the warm glow you'd feel living this extraordinary life.
The truth is we have an opportunity to do this every single year of our
lives just by targeting our donations to the most effective charities in the world. How is this
possible? Three premises. Number one, if you're listening to this podcast, chances are you make
more than 19.5 thousand US dollars per year post-tax and are therefore in the richest 10%
of the world. Number two, we can do 100 times more good for others than for ourselves by focusing on
the parts of the world most in need because a doubling of income will always increase subjective
well-being by the same amount. And three, in the same way as the success of for-profit companies
isn't normally distributed, some charities are vastly more effective than others. But how do
you find the most effective charities? Well, since 2010, GiveWell.org has helped over 50,000 donors find the places
where their donations can save or improve lives most. Here's how. GiveWell dedicates over 20,000
hours a year to researching charitable organizations and handpicks a few of the
highest impact evidence-based charities. The best ones GiveWell has found can save a statistical
life for $3,000 to $5,000. Donors have used GiveWell
to donate more than $750 million. These donations will save over 75,000 lives and improve the lives
of millions more. Here's the best part. GiveWell is free. They publish all of their research on
their site for free so donors can understand their work and recommendations. GiveWell doesn't take a
cut of your donation. They allocate your tax-deductible donation to the charity you choose. I personally give to the
Against Malaria Foundation, which distributes bed nets to prevent malaria at a cost of about $5 to
provide one net. If you've never donated to GiveWell's recommended charities before, you can
have your donation matched up to $1,000 before the end of August or as long as matching funds last. To claim your match, go to GiveWell.org, pick "podcast" and then select The Jolly Swagman at checkout. This episode is also brought to you
by Blinkist. The opportunity cost of reading a book isn't really the $30 price tag. It mostly
consists in the hours of time you invest in reading or partly reading the book. Hours which
you could spend reading another book, building a business, rollerblading, whatever it is you like
to do. To help me triage which books to read, I often use Blinkist. Blinkist is an app which takes the key ideas and insights from thousands of non-fiction
titles across 27 categories and gathers them together in 15-minute text and audio explainers
that help you understand the core ideas.
It's kind of like Amazon's Look Inside feature or Kindle's Sample feature, but better because
it actually condenses the whole thesis of the book, making it perfect for those who want to cheat at their book clubs. Blinkist has extended their
philosophy of less is more to long podcast episodes, presenting the key learnings from
famous shows in 15-minute shortcasts. They do this by directly collaborating with the podcast
creators, like Michael Lewis, who hosts Against the Rules. For Lewis's part, he personally shares
the highlights from his own podcasts with you.
To discover the world of Blinks and Shortcasts,
head to blinkist.com slash swagman.
Right now, they have a special offer just for JSP listeners.
You can get 25% off an annual subscription
and try Blinkist premium free for seven days.
That's Blinkist spelled B-L-I-N-K-I-S-T.com slash swagman.
You're listening to the Jolly Swagman podcast. Here's your host, Joe Walker.
Ladies and gentlemen, boys and girls, swagmen and swagettes, welcome back.
It is great to have you back.
It is great to be back.
Before I introduce the episode, let me address my recent hiatus.
Now, I've been off the air for a couple of days now.
Wait, how long has it actually been?
Wait.
Oh, geez.
A couple of months.
Wow, that has gone quickly.
The weeks have blurred together.
I'm very sorry for my absence and more particularly for not updating you.
The truth is I haven't had time to put out content worthy of you all.
And I would rather make you wait than make you listen to something that I didn't think would maximally benefit you.
At the start of the year, back when I committed to doing one podcast episode per week, I said, this is the sort of podcast where I'll
never be captured by my audience. I'm not going to feed you up more of what you like, like some
insidious algorithm. Sometimes you'll be disappointed in me. Sometimes you'll disagree
with me, but that's okay. Well, now I'll add to that advice. I also won't push out content for the sake of it.
I want this to be a podcast where you can come and listen to an episode and know that it'll be
important. Not just another weekly show where authors are flogging their new books and the
content is replaceable, not unique. So, that is part of my rationale. But what have I been doing? Well, this podcast is not the only hat I wear.
I'm also now a director helping to scale part of the operations function at an amazing Australian-born
startup called Forage.
We're changing the world of education by enabling companies as opposed to universities to teach
skills to students.
So you go into our platform, for example,
and take a software engineering course by Electronic Arts or a law program by Wilson
Sonsini. I'm very excited about the company for a number of reasons. And we also on Friday
announced our Series B, which was led by Blackbird Ventures. Obviously, raising money is not a perfect
proxy for success, but nevertheless, it's
some measure of what the team has achieved.
So that is part of what I've been, what has been keeping me from podcasting.
On top of that, if I didn't have enough spinning plates in the air already, I've just been
part of launching a new global monthly magazine, which I should plug, called The Podcast Reader,
which publishes select long-form transcripts.
We have agreements with this podcast (that agreement wasn't hard to secure), Conversations with Tyler, and EconTalk.
So how did this happen? In January, a podcast listener, David Loggia, a great man, reached out
to me and we got chatting about how we both liked long-form podcasts, but they're not always easy to
consume. You can't skim them. It's easy to drift off. Pausing and rewinding are impractical.
And with friend of the pod, the great Nick Gruen,
we set up this magazine.
Now, Dave and the team around it have done much more work than I have.
They're the real drivers of the project, but it's amazing.
You can get Edition 1.
It launched in August.
It's a real magazine, beautiful and glossy.
You can hold it in your hands.
Edition 1 features Tyler Cowen's conversations with Margaret Atwood and Peter Thiel, my conversations
with Arlie Hochschild and Frank Wilczek, and Russ Roberts' conversation with Christopher
Hitchens, the transcript of which I believe has never before been fully published.
That's the podcast reader.
You can buy a copy, print or digital, or subscribe at podread.org.
And I just love the serendipity that a podcast listener, someone way better and more successful
than me, contacts me through my website and seven months later, a magazine exists.
Radical uncertainty truly is the zest of life.
Another time we can talk about why I think it's important to work in early stage companies
like startups and do real things
as opposed to just hosting podcasts. But I've spoken about myself for the last three or four
minutes and it's time to introduce this episode. My guest is Ole Peters. Ole is a physicist.
He's a fellow at the London Mathematical Laboratory, the principal investigator of
its Ergodicity Economics Program,
and an external professor at the Santa Fe Institute. Ole works on different conceptualizations of
randomness in the context of economics. Like I suspect many people, I first encountered Ole on
page 224 of Nassim Taleb's book Skin in the Game. But like many people, I also suspect the description
of Ole's work in that book left me wanting more. Ole's been driving the ergodicity economics
research agenda since at least 2011. Ergodicity is an esoteric mathematical concept, which we
clarify, explain, and explore in this episode. Now, before we begin, I'll note that this was Ole's first
ever podcast. It is therefore somewhat of an historic episode. I sense that he is rather
ginger about appearing on shows given some of the, let's say, public communications he receives from
economists and others. So, I'm honored he decided to join me on the show. In preparing for this
podcast, I've benefited from correspondence
with a number of people, including but not limited to Matthew Ford, John Kay, Mervyn
King, Jason Collins, Michael Harre, David Sloan Wilson, and Timo Henkel. Of course,
any mistakes of omission or commission are entirely my own. Please enjoy the conversation. Ole Peters, welcome to the Jolly Swagman podcast.
Thanks for having me. It is great to talk to you. Great to finally talk to you. I feel like this
has been a long time coming. And this is your first podcast, right? Yeah, it's my first podcast, and I think we've been trying to set this up for, I don't know, when did you contact me? Two years ago? Three years ago? I think two years ago, and then I gave you some time, and then I contacted you again this year or end of last year. But it's great that we're finally here. Yeah, well, I guess time, chance, and podcasts happeneth to all these days, and you're the latest casualty. But we're going to discuss ergodicity, probability theory, economics, and more. But first, because this is our first time talking together, I was hoping to briefly get to know you a little better.
So where were you born and where did you grow up?
Oh, in Hamburg. That's easy. I can answer that question.
Yeah, I was born in Hamburg and I grew up there and moved to London eventually, studied physics there.
Well, and then I moved to the US for a little bit for my postdoc at Los Alamos and the Santa Fe Institute.
And then I moved back to London.
When did you first realize you were interested in mathematics and physics?
Probably... I don't know. I mean, I've probably always been interested in it. It sort of runs in the family a bit, so I've always been exposed to it. And I think when it came to thinking about what to do for university, I thought physics was a reasonable thing to do, because I felt it was something where you need a young brain to really get into it. I had other interests too, but I thought maybe I can pursue them later and I'll start with physics.
It really can't hurt to know a bit about that.
Why is rain like earthquakes?
Ah, okay.
Yeah, we're going back a long time.
Seriously?
Yeah. But we can talk about that too
we can talk about that
do you want to talk about that
yeah let's do it give us the
short version
Yeah, so this is a very long time ago. This was a problem I was working on in my final undergraduate year, and it then also became part of my PhD thesis and sort of haunted me for a long time. It's a concept that came up sometime in the late 80s called self-organized criticality: the idea that certain slowly driven systems organize themselves to a critical point where their global behavior changes, basically.
So this is inspired by phase transitions in equilibrium systems,
but it's an idea for non-equilibrium systems. So specifically, you're driving a system in some way.
So it's a non-equilibrium system.
That means you're slowly putting energy into it.
And this energy builds up in the system.
And occasionally, it's released.
And by doing that, under certain circumstances, you can get to a situation where a system is always at this critical point between being at rest and releasing energy in sort of burst-like events. Earthquakes are one example and rainfall is another example. So in the case of rainfall,
you're constantly driving the system with solar radiation. You're basically pumping water vapor into the atmosphere all the time, and you're destabilizing the atmosphere all the time by heating the ground and cooling the top. And every now and then you get these burst-like events, convective events, that produce rainfall.
So it's sort of the statistics of the energy releases from the system
are similar in the case of rainfall and earthquakes.
That's where that came from, but that was a long time ago.
So real curveball. Thank you very much.
I wanted to start with at least one.
So in the 2000s, you're studying weather patterns.
And then just before Christmas in 2006,
you start working on ergodicity in economics, right?
Yes, true.
Yeah, I think that's right.
Yeah, yeah, yeah.
So what prompted that? How did you come to it? I was curious about finance, actually. So I started looking into some problems in finance and contacted people who I thought might point me in interesting directions.
And I more or less immediately got to Kelly's work and thought,
well, this is sort of phrased in a language I found surprising
because it's all phrased in terms of information theory,
which is perfectly
fine, but I felt it didn't emphasize this point enough, that it's really a problem of ergodicity. So, yeah, I got interested in that, and then pretty much immediately, that year, 2007, I wrote the first draft for this paper called Optimal Leverage from Non-Ergodicity. And I thought this is very nice, because you can solve this leverage problem just by computing a time average instead of an expectation value. And I showed it to some people and they said,
no, but that's not possible
because you need to have a utility function in there.
And I said, what is a utility function?
And so then I went down this rabbit hole
of how these types of problems have been treated
in the literature centuries before.
And yeah, I mean, that's been pretty exciting since then.
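The leverage calculation Ole describes can be sketched numerically. What follows is my own toy illustration, not the paper's model (Optimal Leverage from Non-Ergodicity works with geometric Brownian motion): for a hypothetical risky asset, pick the leverage that maximizes the time-average growth rate along one simulated history, rather than expected wealth, which for any favorable gamble would push leverage toward infinity.

```python
import math
import random

random.seed(7)

def time_average_growth_rate(leverage, returns):
    """Per-round log growth rate of wealth along one history, investing a
    fixed fraction `leverage` of wealth each round (the rest held as cash)."""
    total_log = 0.0
    for r in returns:
        total_log += math.log(1.0 + leverage * r)
    return total_log / len(returns)

# Hypothetical risky asset: +60% or -40% per round, equally likely.
history = [0.6 if random.random() < 0.5 else -0.4 for _ in range(20_000)]

# The expected return per round is +10%, so maximizing expected wealth
# says "borrow as much as you can". Maximizing the time-average growth
# rate instead gives a finite optimum (roughly 0.42 in this example).
best_leverage = max((step / 50.0 for step in range(0, 100)),
                    key=lambda l: time_average_growth_rate(l, history))
```

The finite optimum recovers the Kelly fraction for this gamble, which is the connection to Kelly's work that Ole mentions.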
So that paper, Optimal Leverage from Non-Ergodicity,
and another paper, The Time Resolution of the St. Petersburg Paradox,
those were the two papers that sort of kick-started
the formal development of Ergodicity Economics, right?
Yeah, I think that's right.
They were the first to be published.
They follow on from each other, right?
Actually, exactly the story I just mentioned.
So I worked on this leverage problem,
and then people told me about utility, and I thought, where does that come from? And that's what got me to Bernoulli and the 1738 paper. And I thought, well, okay, that's interesting, because I could solve the leverage problem without using a utility function. Utility functions were introduced to solve the St. Petersburg paradox, so I should be able to solve the St. Petersburg paradox without a utility function. And sure enough, that works. So it is really one paper following from the previous. What's the most effective way you've found to explain the concept of ergodicity?
Oh, well, that really depends on the level of technical detail you want. But I think, for a podcast... oh dear. We try to be very clear when we talk about ergodicity, so it's not a vague notion. It is a concept in dynamical systems, so it's a concept in mathematics, in dynamical systems and stochastic processes. And it's a property of mathematical objects, like, let's say, a stochastic process. And the only ergodic property, or the only definition of ergodicity, that we are ever interested in is the following. If you have some quantity,
a random quantity,
that fluctuates over time,
so in other words, a stochastic process,
then you can take two different types of averages. You can take an ensemble average of this quantity,
so you can average over the statistical ensemble,
which is all the possible realizations of this process
at some moment in time.
Or you can focus on one single realization
of the stochastic process
and average the quantity over a long time.
And if these two ways of averaging give you the same result,
then the quantity you're averaging is ergodic.
And if they don't, it's non-ergodic.
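Ole's definition can be checked numerically. Below is a minimal sketch of my own (not from the episode), using a standard ergodicity-economics example: a gamble that multiplies wealth by 1.5 or 0.6 with equal probability. The ensemble average grows 5% per round, while along any single long trajectory wealth shrinks roughly 5% per round, so the two averages disagree and wealth in this gamble is non-ergodic.

```python
import math
import random

random.seed(1)

UP, DOWN = 1.5, 0.6  # wealth multiplies by 1.5 (heads) or 0.6 (tails)

def ensemble_average_wealth(n_players, n_rounds):
    """Ensemble average: mean final wealth over many independent players."""
    total = 0.0
    for _ in range(n_players):
        wealth = 1.0
        for _ in range(n_rounds):
            wealth *= UP if random.random() < 0.5 else DOWN
        total += wealth
    return total / n_players

def time_average_growth(n_rounds):
    """Time average: per-round growth factor along one long trajectory."""
    log_wealth = 0.0
    for _ in range(n_rounds):
        log_wealth += math.log(UP if random.random() < 0.5 else DOWN)
    return math.exp(log_wealth / n_rounds)

# Expectation value per round: 0.5 * 1.5 + 0.5 * 0.6 = 1.05 (grows ~5%).
# Time-average factor: sqrt(1.5 * 0.6) = sqrt(0.9) ~= 0.95 (shrinks ~5%).
# The two averages disagree, so wealth in this gamble is non-ergodic.
```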
Yeah.
Yeah.
I don't know if this is informative at all.
No, no.
It is, but we will keep building on it.
So...
Good.
The first point to pick up on there is that
we're applying the adjective ergodic to an observable
or a quantity in a mathematical model.
So ergodicity is a property of mathematical objects,
not of physical objects.
So sometimes people like to say swanky sounding statements
like life is non-ergodic or the world is non-ergodic
or the economy is non-ergodic,
but they're not being strictly accurate.
Yeah, yes, absolutely. That's actually a really key point in this debate, because somehow the debate spilled over and became bigger, maybe, than I had expected, in a sense. So these statements, "life is non-ergodic", I mean, they're sort of poetry, right? And that's totally fine, but it's not a scientific statement. It's the sort of thing you might think while you're having a shower. But it's not strictly meaningful
unless you really specify what you mean by it.
And the moment you start specifying what you mean by it
in the terms that I mentioned,
so where you want to test equality of a time average
and an expectation value or an ensemble average, then you get to the point that you got to,
which is that this is a statement about an observable,
a mathematical object, and really nothing else.
Strictly, that's it.
So this whole part of the debate is really about the relationship
between models and reality.
So when you speak about ergodicity, you speak about the properties of a model of something real.
And sometimes we are sloppy in our language.
So we might say something like stock prices are non-ergotic or whatever it is.
A stock price is a physical observable,
it's not a mathematical object. So what we're really saying is that we have in mind
some mathematical model that we think is a reasonable analogy to the physical
object that is a stock price and then we make a statement about that mathematical model regarding its ergodicity or not
I want to come back to ergodicity and how it's relevant to finance and economics, but first I'd love to go back in time, and we can trace the history of the theory of probability. Actually, I want to go back even further.
I want to go to time itself and start there.
So what does time mean without chance?
Well, probably not much.
I mean, you know, there's this Solomon quote, time and chance happeneth to them all.
And I think the notion of time and chance there is that the two belong together. If you remove randomness, then time becomes sort of like a knob that you can turn back and forth.
Everything is determined, right?
Every moment in the past determines every moment in the future.
So time is then perhaps an illusion that we, with our limited consciousness,
believe to experience.
But it has less of a physical meaning.
Whereas the moment you have randomness,
you can start thinking about time
in terms of what is determined and what is not.
So the past is then everything that is determined
because it's already happened,
and the future is that which is not determined,
and it's open.
Could you talk about the connection between time and risk?
Yes. I mean, these are such trivial statements when I say them out loud, but, yeah, time and risk. I mean, risk doesn't really exist if you don't have irreversible time.
So imagine a computer game where you can enter a casino and gamble. So you enter this casino, you put everything on black and you lose. But because
it's a computer game, you were able to save the state of the game before you entered the casino.
So you jump back to that state and you go back to the casino and you put everything on black
and then on red, or whatever you want to do, and you just keep doing this until you win. So there is no risk. Unless we have a mindset that is informed by the irreversibility of time, we will get questions of risk wrong.
You mentioned Ecclesiastes 9:11 before, "time and chance happeneth to them all".
Are there any other beautiful depictions in literature of time's irreversibility? I would imagine... I was thinking... Yes, I mean, okay, just thinking of Nietzsche: in Thus Spoke Zarathustra the quote is, "'It was': that is what the will's teeth-gnashing and most lonely affliction is called. Powerless against that which has been done, the will is an angry spectator of all things past. The will cannot will backwards; that it cannot break time and time's desire, that is the will's most lonely affliction."
Yes, that sounds familiar.
Where did you get that quote from?
I think I heard you read it out in a lecture once.
Yeah, yeah, that's true.
I think I read it out in German just to confuse everyone.
That's right. I was thinking, in the interests of fairness and being fair to the economists in the crowd, we should think of some beautiful quotes on ergodicity. I found a couple that capture the other idea. So there's Ecclesiastes 1:6: "The wind goeth toward the south, and turneth unto the north; it whirleth about continually, and the wind returneth again according to his circuits." That's kind of like an ergodicity-flavored quote. And then there's another one. I'll read this out; tell me if you know where this one's from. "Still round the corner there may wait a new road or a secret gate, and though we pass them by today, tomorrow we may come this way and take the hidden paths that run towards the moon or to the sun." Oh no, I hadn't heard that one. What is that? That is J.R.R. Tolkien, "A Walking Song". Ah, yeah, okay. We've spoken about time. I want to move to the history of the theory of probability. And we're not going to cover the whole history, but I'd love to just touch on some key landmarks with you.
And the first is Cardano, who lived from 1501 to 1576.
He was a physician and a mathematician from Milan, among many other things.
And he wrote a book called Liber de Ludo Aleae, which is The Book on Games of Chance, which was a kind of manual for
gamblers. And it's particularly interesting because people hold up the Pascal Fermat correspondence
of 1654 as the sort of beginning of probability theory, yet here was Cardano writing around 1550, more than 100 years earlier. Obviously, it wasn't published at the time; I don't think the manuscript was published until 1663. But there are some amazing ideas and quotes and theories expressed in the book. I think you've read Cardano; we spoke about him over email. What did you take from Cardano?
So, I mean, in terms of the history of probability theory,
Cardano is at that,
that is in this transition period between a pre-formal and a formal time in the theory of probability.
And his thinking is, I think, quite humble and quite practical.
So the book is called Book on the Games of Dice.
And it really is the sort of wisdom that he felt he had collected by gambling.
And, you know, he thinks it's not a bad idea to gamble a little
because life is full of surprises.
And maybe it's good to have some experience with
that and some practice, you know, so bet some money and learn how to lose without getting too
upset about it. And so he has a lot of good advice. I mean, there are some passages where
he says, so I don't have the quotes here in front of me, but he says that it's inevitable that he who should play less frequently will be less skilled. But should you therefore abandon all study of the arts so that you may become proficient at games of dice? And of course it's a rhetorical question, right? So he says there are all sorts of ways to expose yourself to basically random stuff that will happen to you, and if you get a little better at understanding the rules of the game that uses this sort of randomness, then, yeah, you can probably win at games of dice, or you can make some money, but you haven't really understood anything fundamental. You've abandoned the arts. So the arts to him are ways of understanding nature, not arbitrary man-made games. So translated into today's world, this means something like, you know,
don't spend all day looking at idiotic graphs of stock prices going up and down and wondering how
you can profit from some silly game we've made up. You know, try to understand a little bit
about the world instead. So it's those sorts of passages that I find most interesting in Cardano. But in terms of the formal theory, he did introduce some early ideas of combinatorics.
So he started asking, you know, you roll some dice
and how many ways are there of...
Actually, I don't know exactly what he did.
I can't remember because this question comes up
a few times in later papers too.
But, you know, let's say you take two dice and you roll.
How many different ways are there of rolling an eight, or something like that? Right, so this notion that there is an ensemble of possible futures before you roll the dice, and you can count the equiprobable states in that ensemble, that is something that Cardano seems to have thought about.
There's also a paper by Galileo, I think,
well, it's somewhere between,
maybe it's from 1623 or in the decade before that,
where he does the same thing.
So he very explicitly counts the possible ways
of throwing different, I think it's with three dice, how
many ways are there of throwing a 10, something like that.
So these ideas were, they were in the air.
Cardano was probably the first to write them down.
Although, of course, his book wasn't published until long after his death.
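The counting exercise Ole attributes to Galileo can be reproduced in a few lines. A sketch of my own, brute-forcing the ensemble of equiprobable ordered rolls:

```python
from itertools import product

def ways(total, n_dice=3):
    """Count the ordered rolls of n six-sided dice that sum to `total`."""
    return sum(1 for roll in product(range(1, 7), repeat=n_dice)
               if sum(roll) == total)

# Galileo's result: with three dice there are 27 ways to throw a 10 but
# only 25 ways to throw a 9, out of 6**3 = 216 equiprobable rolls, even
# though both sums can be made from six unordered combinations.
# With two dice, an eight (Cardano's kind of question) can be rolled 5 ways.
```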
Galileo, you know, had a knack for getting himself into hot water. So whenever there was something you weren't supposed to do, he would go there. Cardano too; I think he was arrested by the Inquisition for a little bit, for casting the horoscope of Christ. This was sort of a popular thing to do at the time.
But, I mean, an accomplished inquisitor
would have probably found better reasons for his arrest
because he was doing a lot of interesting things.
Yeah, so that's where Cardano sits.
So sort of in this, you know, just on the cusp,
just as it's becoming formal.
And then Fermat and Pascal are just very explicit in their combinatorics, and especially in the invention of the expectation value: averaging over these possible futures, averaging some random quantity
over all possible future states.
That's something they identify as an important trick.
It's probably the most important concept they introduce, right?
I think so. It's crucial. It's the solution that they propose to the problem they're studying, right? So they are studying this problem of the unfinished game, where you imagine that we're playing a game of dice, and, I don't know, maybe it's just you and me, and we play three rounds, where in each round I roll the die and then you roll the die, and then we count the total points, and whoever has the most points at the end wins. But then you drop the die and it falls off the terrace where we're sitting and drops into the ocean and is forever lost, and so we can't finish the game. But we've both bet some money in this game, and we now want to find a fair way to split the pot of cash that's sitting on the table.
And you had seven points and I had five.
So how do we split the $100 that we put on the table?
That's the specific question that they were asking.
And their answer to the question was, well, the fair way of splitting the pot is to give to both you and me the expectation values of our winnings, at the state the game was in when it was abandoned, when we had to stop. So there were earlier treatments of this problem, even pre-Cardano; there's something from the 15th century even, but they didn't get there. You don't have to get there. There are many ways of resolving the problem. You could say, well, you know, we didn't finish the game, so let's just return our wagers to everyone.
So we, you know, we do,
if we each put 50 bucks on the table,
then we take our 50 bucks and that's it.
Or we say, well, we couldn't finish it.
So who knows what might've happened.
Let's give the money to charity.
Or, you know, I don't know,
you just lost your job.
So why don't you just take the money?
There are many, many ways of splitting this in a fair way. But the way that Pascal and Fermat proposed has nice mathematical
properties, linearity properties. So there are reasons why their solution became so prominent, why the expectation value became so prominent.
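Pascal and Fermat's split can be made concrete. A sketch of my own, simplifying the transcript's dice game to a fair first-to-N match (each round won with probability 1/2): each player's fair share is the stake times their probability of going on to win from the abandoned position, i.e. their expectation value.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_first_wins(a_needed, b_needed):
    """Probability that player A wins a_needed rounds before player B wins
    b_needed rounds, when each round is a fair 50/50."""
    if a_needed == 0:
        return 1.0
    if b_needed == 0:
        return 0.0
    return 0.5 * (p_first_wins(a_needed - 1, b_needed)
                  + p_first_wins(a_needed, b_needed - 1))

def fair_split(pot, a_needed, b_needed):
    """Pascal-Fermat division: each side receives its expectation value."""
    p = p_first_wins(a_needed, b_needed)
    return pot * p, pot * (1.0 - p)

# E.g. a race to 10 points abandoned at 7 points to 5: A needs 3 more
# rounds, B needs 5. fair_split(100, 3, 5) gives A about $77.34 of the
# $100 pot and B about $22.66.
```

The linearity Ole mentions shows up here too: the shares always add back up to the pot, whatever the abandoned position.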
It does raise a question or it poses a puzzle.
So the ancients gambled.
The Athenians had dice.
They played with dice.
And the mathematics of probability theory aren't particularly hard.
Yet it took until the 16th century with Cardano, or the 17th century with Fermat and Pascal, for probability theory to be developed. Newton and Leibniz were devising calculus around the same time, which is way harder. Why did it take so long for the mathematics of probability to arrive in the history of human thought? Well, I mean, I can only speculate, but,
yeah, why is that? So first of all, I think the people who were engaged in risky business knew a lot about risky business just from experience. So, you know, insurance contracts were written by the Babylonians; basically, maritime insurance is a very, very old business. And that's about gauging risks. And so there are actually a lot of practices relating to risk and risk mitigation and, of course, risk assessment that are much, much older than the theory of probability, I think. And gamblers, of course. So some of these problems,
like the problem of the unfinished game
those problems were
really motivated by gamblers who said
come on, I'm constantly making
quantitative decisions
based on games of dice
how am I doing this? Because I know I'm doing
it more or less right.
I'm surviving here.
But they just had rules of thumb.
They had sort of empirically established
that there are more ways of throwing a 10 than a 9 or something
with three dice. So anyway, there's the empirical side that existed, and maybe the need for a formal theory
was not really recognized, you know,
because there was this practical knowledge out there.
I think that's one.
But there is something deeper. And I think it has to do with ancient worldviews and cosmology. The way the ancient Greeks saw the cosmos, well, most of the ancient Greeks, is the geocentric model, where you divide the world into a celestial sphere and a terrestrial sphere.
And the celestial sphere is where, you know, the stars move around on
their perfectly circular trajectories. And then there are some aberrant planets that move around
on less circular trajectories. But all of it is pristine and mathematically determined. So this is sort of a platonic world of perfection and ideals.
And this is contrasted with down here on earth. And down here on earth is basically just chaos
and decay and, most of all, time. So these two realms are a time-bound realm here on Earth and a timeless realm up above.
And this goes through all kinds of spiritual systems, including Christianity.
You know, the eternal soul would live in heaven.
And these used to be very spatial, physical images.
And literally, it was about looking, right?
You could look up at the sky at night and say, oh, look at this mathematical perfection up there.
Mathematics, of course, is another one of these eternal things, right?
Mathematical truth is eternally true.
So it's something timeless.
So it belongs to the celestial timeless world.
And it doesn't belong on Earth. And so it's not just a theory of randomness that was lacking, in a sense; it's physical theories in general for terrestrial processes.
The idea was that there are no laws down here.
We can look around and everything just seems a mess.
Whereas if we look up at the sky, that's where we can have theories
because things are nice and ordered and pristine.
And down here, what's the point?
And this is the whole spiritual struggle of the 17th century, I think: you have this collision of those two realms, the timeless realm and the time-bound realm. And so probability theory is sort of stage two, right? It happens in the 1650s, whereas, well, Galileo is 1620s, 1630s. That's when it really sort of heats up around him. Of course, Galileo is not the first to suggest heliocentrism; Copernicus did, of course, and in antiquity there were others. But yeah, so you get this clash of worlds, and I think before that clash of worlds really started happening,
there wasn't much of an appetite or optimism
for finding any laws of nature on earth, really.
You know, you had rules of thumb, okay,
but not really laws in the sense that
you had laws like Ptolemy's, which could tell you when Mars would be where. That was just not expected down here. And so I guess that once Galileo pointed that out, it became more likely for all kinds of developments to happen, including the development of probability theory.
That's so interesting.
There's another idea. This is speculative on my part; actually, this isn't my idea, it's Simon DeDeo's idea. And it kind of connects to the first idea you discussed, about how people relied more on heuristics than theories. But the ancients mostly played with dice or knuckle bones that weren't precisely even or fair. They weren't precision-engineered like today's dice.
So gambling was more a process of learning empirically
about the idiosyncrasies of these particular dice
or knuckle bones and not about reasoning theoretically.
Speculative idea, but pretty interesting.
Might be true.
Yeah, I would imagine that's true. So, you know, by the time you get to the formal period, dice must have been somewhat regular, because it made sense to people to count. Actually, what was known empirically at the time of even Cardano and Galileo, so around 1600, is very precise. These combinatorial differences that you can arrive at via the assumption that you have a die whose faces are all equally likely to show up are very small differences, and they were detected empirically. So gamblers knew this. So their dice were pretty good at that time.
In Roman times, I doubt that.
I think it was more like you said,
that you're throwing some knuckle bones
and if you know your particular set of bones,
then you have a pretty serious advantage over your opponents.
So let's fast forward to 1713, Nicholas Bernoulli and the St. Petersburg paradox. What is the St. Petersburg paradox, and why is it important?
Yeah, every time I try to just go through the definition of the St. Petersburg gamble, it always gets weirdly complicated.
Well, it shouldn't be.
Do you want the actual technical definition of the problem, or do you just want to know conceptually what's wrong?
We can do both maybe and then we'll see.
Yeah.
Okay, so technically, so what had happened?
So in the 1650s, Fermat and Pascal had this exchange of letters
and they came up with the expectation value as an important quantity
for random variables or quantities that are unknown.
And it became really dominant very quickly.
So people just felt we've got almost a universal solution
to all the problems we might ever encounter
that have to do with randomness.
So if we have some random quantity,
we can just get rid of the randomness
by replacing that quantity with its expectation value
and run our analysis as if it wasn't a random quantity,
but the expectation value of that random quantity.
So you're now not working with random variables anymore,
but just with numbers.
And that's of course much easier.
And sometimes it's fine.
Again, it's a mathematical model.
And in some cases that describes the physical reality it's supposed to describe well, but sometimes it doesn't. And so Nicholas Bernoulli, who was very playful, looked at this and started asking himself under what circumstances this model was valid and where it would fail. He's also the first person to speak about extreme values. And extreme values are, of course, something really... maybe "obvious" is not the right word, but they're a really good illustration.
If you want to explain to someone that the expectation value of some thing is not so important, think of anything where the extreme matters.
And the extreme often matters.
Like building a dike, for example.
Building a dike, yeah. But even, you know, more generally, a chain is only as strong as its weakest link. So the moment you have a bunch of elements that all have to work together, if one of them breaks, the system breaks down. And that means any biological organism, any machine, anything breaks down or dies the moment its weakest vital component breaks down. So that's an extreme value, right? There's the superlative, the extreme: you're looking for the weakest link. And that's
something that Bernoulli sort of noticed, right? That in many cases, what determines the behavior
of the physical system is the weakest component, not the average component.
But that's a bit earlier. I think that's 1708. And then in 1713, he said, okay, now let me
hit you on the head with something. If you still believe that expectation values are the answer to
everything, I'll give you a gamble. And we all know how to evaluate gambles: you compute the expected value of your net income from the gamble. So you may buy a lottery ticket: you pay some fee and you receive perhaps some winnings. You compute the net expected gain.
If it's positive, you take the gamble.
If it's negative, you don't take the gamble.
So he just invented a gamble where that expected gain doesn't exist because it diverges.
It's infinite.
Of course, it's not a physical gamble because nothing physical is infinite, but it's a nice mathematical
joke. So the
St. Petersburg Paradox is basically this mathematical
joke where he says,
okay, let's
play the following game. I
toss a coin and
if it shows heads,
you win a dollar.
And if it shows tails, I'll
toss the coin again, and if it then shows heads, I'll give you two dollars. And if it then shows tails, I'll toss the coin again, and if on the next toss it shows heads, I'll give you four dollars. And then eight, and then $16, and $32, and $64, and $128, and so on. So you go up by factors of two.
And the trouble here is that the probability of winning a large amount
goes down in proportion to that amount.
So if you are computing the expectation value,
and you're multiplying the gain and its probability,
then every possible gain contributes a finite amount to the expected gain and you end up with
a divergent sum. And there is then, if you follow this principle of computing the expected gain from
participating in such a gamble, there is then no fee that would be too great for you to pay to enter this gamble.
But the large gains are ridiculously unlikely.
So the whole thing falls apart physically.
It makes no sense.
No one would pay a lot to enter into such a gamble.
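The divergence is easy to verify numerically. Here is a minimal sketch in Python (my illustration; it follows the payout scheme described above, $1 on the first heads and doubling with every additional tails):

```python
import random

def st_petersburg_payout(rng):
    """One round: the payout doubles for every tails before the first heads."""
    payout = 1
    while rng.random() < 0.5:  # tails: toss again, stakes double
        payout *= 2
    return payout

# Expectation value: the payout 2**(k-1) occurs with probability 2**-k,
# so every term contributes exactly 1/2 and the partial sums grow without bound.
partial_expectation = sum(2 ** (k - 1) * 2 ** -k for k in range(1, 31))  # 15.0 after 30 terms

# Yet typical payouts are tiny compared with the "infinite" expectation.
rng = random.Random(0)
payouts = [st_petersburg_payout(rng) for _ in range(100_000)]
share_modest = sum(1 for p in payouts if p <= 64) / len(payouts)  # ≈ 0.99
```

Each extra term adds another 1/2 to the expectation, so the sum never settles, while in simulation almost every round pays out a few dollars at most.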
And that all makes perfect sense for many different reasons. But mathematically, it's a very clever intervention by Bernoulli to say, well, be careful.
Just for mathematical reasons, this doesn't always work.
And he literally writes it in a letter to Montmort, and they were sort of sending each other these little nuggets
of mathematical insight or teasers or jokes.
And so it's one of those jokes.
He just drops it in there and says,
compute the expectation value of this.
You'll find something very curious.
That's his comment.
Understatement.
Yeah, understatement exactly. So that's the bomb he's trying to throw into this world of expectation values that were so dominant at the time.
So 25 years later, in 1738, another Bernoulli, Daniel Bernoulli, comes along
and he offers his answer to the St. Petersburg paradox.
And what's his answer?
Yeah.
So there are different levels, again. The answer is: don't think about the dollar amounts involved in a gamble, but think about the meaning of those amounts.
So the meaning that you or I or anyone else individually, idiosyncratically, may attach to such dollar amounts.
So someone very, very rich may not really be terribly interested
in winning a dollar or $100 or $1,000.
And someone very poor may be very interested
in winning $1 or $10 or $100 or $1,000.
So at least that much is true, I guess.
And Bernoulli says, well, let's just try to stick this into the mathematics.
So try to find a way to incorporate the psychological aspect
that is involved in human decision-making into the mathematics.
And he does this by inventing the infamous utility function.
There's nothing wrong with this in a sense.
It's just circular in a way because you are trying to...
So what happened here? So Nicholas Bernoulli came up with this gamble,
and this turned into a puzzle.
And the puzzle is,
if you offer the St. Petersburg gamble to real people,
and you ask them,
how much would you pay to participate in such a gamble,
where I toss this coin and give you two, four, eight, whatever dollars,
how much would you pay for this?
Then people will say, well, I'll give you three dollars.
No one will say I give you a thousand dollars or a million dollars.
So the expectation value model completely fails in that case.
Now, you could phrase this by saying, well, people's preference is to avoid the risks involved in this gamble.
And that is the phenomenon you're trying to explain.
And then the utility solution becomes very circular, because it just says, well, let's write in mathematical terms what people's preferences are. But we already knew what their preferences were: their preferences were not to pay much for this gamble. So now we invent some function that just says the same thing mathematically. It's like saying the same thing in French or Italian: it doesn't really add to our understanding, it just restates the problem. So this is one criticism of the utility solution.
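To see why the move nevertheless tames the divergence, take logarithmic utility, the function Daniel Bernoulli proposed. A sketch, using the $1, $2, $4, ... payout scheme from above and, for simplicity, valuing the payout alone rather than total wealth:

```python
import math

# Expected *utility* of the St. Petersburg payout under log utility:
# sum over k of 2**-k * ln(2**(k-1)) = ln(2) * sum_k (k-1)/2**k = ln(2).
# The sum is finite even though the expected payout itself diverges.
expected_log_payout = sum(2 ** -k * math.log(2 ** (k - 1)) for k in range(1, 200))

# The corresponding certainty equivalent is exp(ln 2) = $2: a modest, finite
# price, in the spirit of the few dollars real people say they would pay.
certainty_equivalent = math.exp(expected_log_payout)  # ≈ 2.0
```

So the divergent dollar sum becomes a convergent utility sum; whether that is an explanation or just a restatement is exactly the circularity being debated here.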
So let's rejoin the ergodicity story, or rather begin the ergodicity story, with Maxwell and Boltzmann in the 1860s and 1870s. So what happens? Tell me about the birth of ergodicity.
Yeah, so the dates are important, right? You have Daniel Bernoulli's utility solution to the St. Petersburg paradox in 1738, and this is a long time before ergodicity becomes even a word, let alone a concept. And in the 19th century, or at the beginning of the 19th century,
there wasn't really much probability theory in physics, or let's say there was none. Physicists were stuck with this sort of clockwork universe image
inspired by Newton, really, by the success of Newton's laws.
So they loved their mechanics.
Physics essentially was mechanics.
Of course, people knew about optics and things like that,
but the real deal was mechanics.
And, well, there was no randomness in those models.
And probability theory felt like something for gamblers
or economists or people who are somehow exposed
to the unknown in a different way from the way that physics is exposed to it.
So they just sort of stayed away from it.
But in the 19th century, the big technology that came up was steam engines and industrial revolution, steam engines and so on.
And with steam engines,
people became very interested in how they work quantitatively.
So we needed a theory for this,
and the theory is thermodynamics.
So that's the theory of how gases expand
and undergo phase transitions, and so on. It's literally, you know, gases in boxes; that's steam engines. And at some point in the 19th century, people started to wonder about an underlying theory for thermodynamics. So was there some microscopic theory to explain those phenomena,
phenomena like pressure, temperature, volume of gases?
And, well, the candidate explanation was a molecular theory,
so a belief in molecules, in very small particles that are invisible,
and there are very, very many of them, and they constitute gases.
So this was quite controversial at the time.
In the 19th century, this wasn't universally accepted at all,
that gases were not continua but consisted of molecules.
And maybe because it was so controversial,
people who worked on these theories were really pushed
to cross their T's and dot their I's.
And someone who started working on this early is Maxwell.
And then someone who really had to kind of fight
for a molecular view was Boltzmann in Vienna.
So continental Europe was maybe leaning a bit more
towards the continuum theory of gases and they weren't so much into molecules.
So Boltzmann was really pushed to be very precise in all of his statements.
So what did Boltzmann do? He said, well, it's possible that these gases actually consist of little molecules, and these molecules follow Newton's laws,
but there will be very, very many of these molecules.
So if we wanted to describe the dynamics of a typical gas,
we would have to write down so many coupled differential equations
of these molecules zipping around
that we would never be able to write it down in our lifetimes.
And if we wrote it down, then what would we do with that information? It's just useless,
this approach of actually computing the individual trajectories of all molecules.
So he was saying, well, but in the end, we're not even interested in the trajectories of the individual molecules. We just want to know aggregate effects of those trajectories,
like pressure, temperature, and so on.
So these emergent properties from the microscopic dynamics,
that's what we're really interested in.
We don't really care about the actual microscopic processes,
trajectories that are happening.
And so he suggested, well, maybe we can just,
instead of actually computing everything individually,
we can just describe this system probabilistically.
And this was another controversial idea.
So he now introduces randomness into these pristine physical
dynamical systems of mechanics.
That's called statistical mechanics.
And as he does that, he realizes that these tools in probability theory are sort of missing something because they always
operate with these expectation values.
And he says,
well,
I mean,
what I'm really looking at here are trajectories of particles.
Yes,
there are many particles.
So in a way there's an ensemble of particles.
So maybe these expectation values are okay, but these particles move along trajectories.
So some of the microscopic properties will really be time averages of something that is happening.
So for instance, you might have a membrane, you know, a balloon or something. There's a pressure on the inside of the balloon from the particles that are hitting the membrane of the balloon.
And if I'm measuring this pressure, then I'm really taking a very long time average on the time scales,
on the relevant time scales of the molecules hitting the balloon.
So I'm averaging over time when I'm taking a pressure measurement. So let's say I stick a pressure gauge into the balloon, or just use a pressure gauge; it's a very inert, large object. So I'm recording, you know, billions of collisions of molecules over a very long time when I'm taking a pressure measurement.
So he realizes that there are two components to these sorts of averages.
There's a temporal component and there's an ensemble component.
And he starts asking himself, he wants to work with ensemble averages
because they are easier to work with.
But he raises this red flag and says, well, we need to think carefully here: there's the temporal element, and now I'll just assume that it doesn't really matter whether we're averaging over time or over a suitably defined statistical ensemble. And this assumption I call the ergodic assumption.
So I'll just say this is okay.
And then based on this assumption, he makes all kinds of predictions,
and those are the predictions of equilibrium statistical mechanics.
And they work very well, so long as the equilibrium conditions are satisfied.
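Boltzmann's assumption can be illustrated with a toy stand-in (not his molecular dynamics; just i.i.d. noise playing the role of a velocity component): for such a well-behaved process, one long time average and an average over many copies of the system agree.

```python
import random

rng = random.Random(1)

def observe():
    """Stand-in for one observation of a molecular velocity component."""
    return rng.gauss(0.0, 1.0)

N = 1_000_000

# Time average: follow one system for a long time.
time_average = sum(observe() for _ in range(N)) / N

# Ensemble average: observe many independent copies of the system once each.
ensemble_average = sum(observe() for _ in range(N)) / N

# For an ergodic process both converge to the same value, the true mean 0.
```

Both averages come out close to zero, which is the content of the ergodic assumption for this simple process; the interesting cases are the ones where the two averages refuse to agree.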
So the justification for the ergodic hypothesis in Boltzmann is kind of experimental.
It's just that the predictions are so accurate and so good that you say, well, this seems to be a
reasonable assumption. But he does introduce this question, right? And then curiously,
around the same time, it's always the same. You have these ideas that come up and they're sort of in the air, right?
It's this sort of zeitgeist.
I mean, so Nietzsche wrote his passage that you read out earlier on around that time,
thinking about time and its irreversibility.
And there's also someone in Cambridge, what's his name, Whitworth,
who starts thinking about gambling in these terms.
And he says, well, you know, but isn't that also a problem in gambling?
So if I'm gambling sequentially, don't I face a different kind of effect from the randomness
than if I'm gambling in many, many systems in parallel?
And so he starts working,
he basically makes the ergodicity argument in 1870 or so.
He doesn't call it that.
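Whitworth's sequential-versus-parallel worry can be made concrete with a standard toy gamble (the numbers here are my own illustration, not from Whitworth): heads multiplies your wealth by 1.5, tails by 0.6. Across many parallel gamblers the average outcome grows, but one gambler playing sequentially sees their wealth decay.

```python
import math
import random

UP, DOWN = 1.5, 0.6  # multiply wealth by UP on heads, DOWN on tails

# Parallel (ensemble) view: the expected factor per round exceeds 1.
expected_factor = 0.5 * UP + 0.5 * DOWN  # 1.05

# Sequential (time) view: the factor one gambler compounds per round is the
# geometric mean, sqrt(1.5 * 0.6) = sqrt(0.9) < 1, so wealth decays over time.
time_average_factor = math.exp(0.5 * math.log(UP) + 0.5 * math.log(DOWN))

rng = random.Random(3)
wealth = 1.0
for _ in range(10_000):
    wealth *= UP if rng.random() < 0.5 else DOWN
# Almost every individual trajectory ends up near zero despite expected_factor > 1.
```

The ensemble average grows 5% per round, while the time-average growth factor is about 0.95 per round, and the latter is what a sequential gambler actually experiences.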
I don't know if he was aware of boltzmann's work
specifically but it was around that time so it was it was in the air
so he makes this he makes this argument somewhere in in in his book
and then curiously retires from cambridge and becomes a vicar at old saints Church in London. I don't know what happened there, if he just, you know,
became frustrated because people weren't really listening to what he was saying.
I don't know, but I could imagine that. So you have the ergodicity concept explicitly
introduced by Boltzmann in the context of statistical mechanics. And you have this way of thinking
popping up here and there, specifically
in Whitworth in the 1870s in the gambling context.
And then it's sort of every few decades someone
discovers it, rediscovers it, and says, isn't this how you
should approach these problems
And, you know, you have Itô's work in the 1940s, which is very neat; it's very easily applied to these problems, so we use it all the time. And then in the 1950s you have Kelly.
Then there was some big resistance from some economists to Kelly.
So you get these repeated sort of discoveries of this approach,
and it feels like it never really gains the sort of visibility or support that it deserves, and it then dies down again, or it stays confined to some smallish group of people. So the Kelly criterion is known to every gambler, of course, and probably to most investors. But it's not really used to revisit these big outstanding theoretical problems in economics or decision theory.
So that's sort of what we are doing, right?
We're saying, well, is it just a special case of something more general? Can we do more with this? Can we go back to Bernoulli and find different solutions there? Can we explain some of the observations in behavioral economics: risk aversion, loss aversion, biases? Do they actually have a physical explanation if we put on these ergodicity glasses?
So one of your collaborators throughout your ergodicity story has been
Murray Gell-Mann, the Nobel Prize-winning physicist. In February 2016, you and Murray had your paper, Evaluating Gambles Using Dynamics, published in the journal Chaos, and I think it became the most-read Chaos article of that year. What was it like working with Murray, and how would you describe his mind?
I enjoyed it tremendously working with Murray.
I don't know if I can describe his mind, but Murray was an absolute exception. He's not your average physics Nobel Prize laureate, as you probably know. So this was a very, very special person to work with, or, you know, to have as a friend.
I think one thing that I noticed was his ability to detach himself from ways of thinking that we were developing or ideas that he had.
And I think that I haven't seen it to that degree in anyone else I've come across.
So, I mean, when you start thinking about a problem, in the process of thinking you develop a perspective. And as you develop that perspective, you have to give it a bit of breathing space. You have to say, well, let's run with this idea for a little bit. And you do that, and as you run with the idea, you sort of tend to fall in love with it.
And before you know it,
you've become uncritical of the idea.
And Murray was just outstanding at not falling in love with ideas.
So he would,
you know, we would be sitting there and get all excited about, I don't know, some, you know, whatever it is, some way of thinking, some kind of idea.
And Murray would go along for a little bit and then suddenly look up and say, "So why is this wrong?", just to provoke that part of the mind that may have fallen asleep in the process of giving the idea breathing space. So literally this phrase, "why is this wrong", kind of stuck in my mind, to always keep getting back to that and say, okay, you've developed some nice thing, but why is it wrong?
Because everything is wrong, right?
So this is sort of the baseline.
Guaranteed, whatever you come up with has limited applicability in reality
and may even be formally wrong, right?
There are a million reasons why something might be wrong,
but certainly when it comes to describing physical reality,
any sort of formal theory has its limits.
So it's a good question. Don't ask, is this wrong? Or, is this really true? No, just ask yourself, why is this wrong?
So I don't know. But it's funny, man. Most people would ask themselves the exact opposite question
about an idea that they were wedded to.
Why is this right?
Perhaps they'd ask why other people's ideas are wrong, but it's almost like we should
completely invert that practice.
Yeah, I think that's right.
You know, the other thing, since you're asking about Murray: there was something cheeky about him that I enjoyed tremendously. I don't know how to say it. It was some kind of...
I remember a talk that I gave, and he was in the audience. And, you know, we were already working together. He was aware of my thoughts. And we came out of the room, and he said to me, "I felt that there was a lack of joy in the room." And what he meant by that was that good thinking, good ideas, are identifiable by the joy they generate.
So if you feel joyful after hearing a seminar,
if it feels right, then there's something there.
And if you see someone give a good seminar, and you feel the response in the audience is off,
then you might say something like that.
Right.
So he sort of felt, well, this kind of deserved a bit more happiness from the people in the room. And, you know, it was a very nice compliment that he paid me there.
But I think there's something deeper to it.
And it's this notion of letting your sense of joy guide you as you think through problems.
There's, you know, something aesthetic, something about taste. He also used the word taste sometimes.
Mm-hmm. Something intuitive.
Yes, and I think specifically related to this kind of smirk, you know, some kind of smile that it puts on your face.
Yeah. So, economics: how does it think about ergodicity? And how has ergodicity been defined within the economics literature over time? I mean, obviously, outside of this podcast conversation, you and I have had a brief discussion by email about some of the different definitions floating around in the economics literature. I have to say I'm basically confused, and I don't feel like I have a good handle on how economists think about ergodicity or what they mean when they say ergodicity. But, being charitable, do you have any kind of sense of how that term is used, how the profession thinks about it?
I mean, I think, you know, economics is less well-defined than other scientific disciplines.
And this term...
You're just an arrogant physicist.
I think it's the nature of the beast, no?
No, no, no, I don't mean that.
I agree, I agree.
And so first of all, I agree with you.
I'm confused.
As you know, I've said this by email too.
But the simple answer is: I don't know. I don't know what they mean when they say ergodicity. But part of the reason is that they are a large, diverse group, and some people mean this and some people mean that.
So it's not one of the established terms in the field.
If you pick up a paper on economic inequality,
sometimes you get a precise definition of ergodicity that I find recognizable.
So, you know, people might actually write down a stochastic process as a model of wealth
as it evolves over generations of people in an economy, something like that. And then they ask themselves whether the distribution of wealth
in an economy converges to some stationary limit.
And then they speak about ergodicity as that convergence,
because once you converge to this state,
then over very, very long timescales, and that's usually what you consider in the strict ergodicity question, if you average somebody's wealth or relative wealth over a very, very long time, then you will get the same result as if you average over everybody's wealth in the economy, if it's in this stationary limit. So there I can recognize it; this is a branch where we can connect. But in the broader debate, I think it is less well defined, and it sometimes seems to mean something like
an openness of the future.
So I can see that these concepts are inspired by the mathematical concept of ergodicity,
but they really are applied to, you know, the physical
objects and not mathematical objects. So it is sort of an analogy, a metaphor, something like
that. But I think it's this notion that the future is truly open. So when some of these economists say
the economy is not ergodic,
they mean something like
there may be an innovation next year
that we can't even dream of today.
And therefore, modeling where things might go
in a quantitative way is extremely difficult.
Or they say, well, looking at past data will not be informative of the future.
Well, I mean, of course, you only have past data.
So that just means you can't, there's just nothing you can say about the future if you take that too literally.
But it's this sort of notion that there are these radical changes in the economy that are likely to be missed by any model we build of it.
But I'm really guessing, right?
I feel that there are many different uses of the term out there.
And this is, I don't know, roughly what I've taken from what I've seen. Do you agree? I mean, you were actually looking into this more.
Yeah, I looked into it myself.
And I actually spoke with John Kay's current research assistant, Matt, about this.
And he was quite helpful; he directed me to some more pieces of literature that actually provide more rigorous definitions. And whether you characterize them as being within the economics literature is another question, I guess, but functionally they are. Like, there's an econometrics textbook, which is popular and widely read, which gives quite a formal definition.
So that's one thing.
I mean, you can go back to Paul Samuelson
in his 1968 article
where he talks about the ergodic hypothesis,
but as you know, Ole, it's kind of unclear what he actually means and whether he indeed actually believes in it when he discusses it. Another economist, Paul Davidson, who has kind of critiqued Samuelson over the decades, has a reasonably technical definition. So if you go to Paul Davidson's 1982-83 article, Rational Expectations: A Fallacious Foundation for Studying Crucial Decision-Making Processes, he says: if a stochastic process is stationary, then the statistical
averages are the same at every point of time. If the stochastic process is ergodic, then for an
infinite realization, the time and statistical averages will coincide.
But then, moving forward in time, there are many examples of that more metaphorical usage you described. So, like, there's a recent article by Paul Collier, an economist who I admire, a really great man, in the New Statesman, where he writes that even bad ideas can be self-fulfilling, the fancy term being ergodicity. And there's Richard Bookstaber, who in his book The End of Theory writes, and I'm quoting here: an ergodic process is same old, same old. It is one that does not vary with time or experience; it follows the same probabilities today as it did in the distant past and will in the distant future. And then later in the book, he says,
to know if we are in an ergodic world, we can ask a simple question, does history matter?
End quote. And so I feel like, when it's used in that metaphorical sense, ergodicity is almost conflated with stationarity,
which is not even correct in itself
because, as you know,
a process can be both stationary and non-ergodic.
Or it could be non-stationary and ergodic.
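That distinction is easy to demonstrate with a toy process (my illustration): each trajectory flips one coin at the start and then sits at +1 or -1 forever. The distribution at every time step is identical (stationary), but no single trajectory ever explores both levels (non-ergodic).

```python
import random

rng = random.Random(2)

def trajectory(length=1_000):
    """Pick a level once, then stay there forever."""
    level = rng.choice([-1.0, 1.0])
    return [level] * length

# Time average along one path: exactly +1 or -1, never the ensemble mean.
path = trajectory()
time_average = sum(path) / len(path)

# Ensemble average across many paths at a single instant: close to 0.
ensemble = [trajectory(length=1)[0] for _ in range(10_000)]
ensemble_average = sum(ensemble) / len(ensemble)
```

However long you run a single path, its time average never approaches the ensemble average of roughly zero, which is why stationarity alone does not give you ergodicity.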
So I feel like, yeah, I'm just confused. I don't think there's a settled definition. I think my inclination is to be charitable to the people who are using it metaphorically and to just say, okay, they mean something like stationarity or non-stationarity, you know, radical uncertainty.
Well, I'm not even sure they mean stationarity; that's maybe not so much the problem.
Yeah, I could be wrong about that.
First of all, I think it's totally fine.
You know, words acquire different meanings through time,
all kinds of words, not just technical words,
but, you know, and those meanings change.
And, you know, what Bookstaber says in his book sounds reasonable. I see where he's coming from. I see how you can spend a lot of time looking at the ergodicity problem in mathematics or in physics and then make such statements, right? About, does history matter?
I don't think that's good.
So if I'm speaking to a colleague in physics or mathematics
and I say something like that,
they will know what I mean and it's okay.
That's how you speak.
If I put it in a popular book,
you're free to put whatever you want in your book, but the danger is that people might think this is the definition of this term. And because the term has a specific technical definition, you might then
draw wrong conclusions, right?
Because you sort of, you think you can establish
whether something is ergodic or not
by answering the question whether history matters.
And then once you've answered that question,
which is clearly a question about the physical world
and stories that you hear
and what you read in the newspaper and so on,
not about mathematical objects.
But once you've established the answer to this question, you then go back to your mathematical
model and say, oh, look, here we can assume ergodicity.
It's fine because history doesn't matter or, you know, or we shouldn't because it matters.
And that would be wrong because then you're drawing conclusions about mathematical objects based on stories that don't really have anything to do with them.
Proceeding with your definition of ergodicity, the definition we discussed at the very beginning of the conversation,
how is it relevant to economics and finance?
What is the ergodicity economics critique
of expected utility theory?
We'll start with expected utility theory.
So, yeah, expected utility theory
is probably a good point to start because it's a touching point.
There's a mapping.
So we can take the approach of ergodicity economics and answer the sorts of questions that are usually answered using expected utility theory. Those questions are, you know, should I take this gamble? Basically, should I take this gamble? So you're buying a lottery ticket, and it's offered at some price. You know the probabilities of the various prizes, and you can now ask yourself, is this a risk worth taking? Expected utility theory, if you specify a utility function, can answer that question for you. Ergodicity economics can also answer that question for you if you specify a wealth dynamic, because you need to know what happens over time, and without dynamic information, you don't know. So in both cases, in expected utility theory and in ergodicity economics, you recognize the original problem as underspecified. If I'm presenting you with a gamble, the St. Petersburg paradox, whatever it is, you can't really answer the question whether this gamble is worth taking or not unless I give you some extra information. Utility theory does that through the utility function. Ergodicity economics does that through the dynamic.
The utility function is a psychological concept.
It's supposed to summarize your preferences,
your risk preferences, essentially.
And that's something that's very difficult to observe. You can try to infer it
by asking people a lot of questions about whether they'd like this gamble or that gamble, but
it then becomes very self-referential. It becomes sort of circular, because you need information about people's risk preferences in order to answer a question about their risk preferences. So, you know, you might as well just ask them: do you want to take this gamble or not? So that may be the critique. The circularity is really the critique. With ergodicity economics, the missing piece of information is the dynamic, but the dynamic is something we can reason about more clearly. We don't have to look into your head. We don't need neuroscience or psychology for this. We really need to know your circumstances. We need to know more about the story. So, you know, for instance: can you repeat this gamble? What other gambles are you currently playing? At what part in your life history is this gamble sitting? What are your other commitments and opportunities in life? And how old are you? Those are parts of the story, that context.
Yeah, context. And it's this physical context that allows us to build reasonable models of the dynamics. So, for example,
you know, if you're very wealthy and your income mostly derives from investments,
then your wealth dynamic is mostly multiplicative because you invest in something that goes up some percent
or down some percent,
and that's what determines your income.
If you're not very rich,
or you have no money at all, or maybe debt,
and you go to work every day,
then your dynamic is not very multiplicative, because you have nothing to invest.
So it may be better described as something additive, where at the end of every month,
you get some amount of money that may vary depending on whether you had a good month or
bad month. You know, your costs may fluctuate a bit, but it's not something where
a multiplicative element is very important. So by considering those sorts of circumstances,
we can inform the kind of dynamic that we feel is a reasonable model for, you know,
your wealth, essentially. And that then allows us to answer puzzles
like the St. Petersburg paradox.
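For the multiplicative case, this resolution of the St. Petersburg paradox can be sketched numerically (an illustrative Python script, not from the conversation; the payout structure 2^k with probability 2^-k and the starting wealth of $100 are assumptions for the example). The expected payout diverges, but the ticket price at which the time-average growth rate of wealth crosses zero is finite and modest, and it rises with the player's wealth.

```python
import numpy as np

def max_price(wealth, kmax=60):
    """Highest ticket price with non-negative time-average growth of wealth.

    St. Petersburg gamble: payout 2**k with probability 2**-k, k = 1, 2, ...
    """
    k = np.arange(1, kmax + 1)
    prob = 0.5 ** k
    payout = 2.0 ** k

    def log_growth(price):
        # Expected log change of wealth from buying one ticket at `price`.
        return np.sum(prob * np.log((wealth - price + payout) / wealth))

    # Bisect for the price where expected log growth crosses zero.
    lo, hi = 0.0, wealth
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if log_growth(mid) > 0:
            lo = mid
        else:
            hi = mid
    return lo

# The expected payout is infinite, yet the price a growth-optimizing
# player with $100 should accept is small and finite.
print(max_price(100.0))
```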
So I guess the critique is,
utility theory to some extent is circular
because it answers questions about risk preferences
only if you specify the risk preferences.
And the sorts of information that you need in utility theory is difficult to obtain.
It will always be very subjective.
It's just, it's not really observable.
It's all in your head.
Whereas the additional information
that we need in ergodicity economics
is more observable.
You can reason about it more easily.
If geometric growth maximization
is essentially the same
as logarithmic utility maximization,
does it matter which one we use?
So technically, mathematically, no.
But it matters in the conceptualization.
In one case, in the expected utility case, you do what you do because that's who you are. In the ergodicity case, you do what you do because that's where you are, right? So it's more about your circumstances. And of course, it's always a mix of the two in real life. You know, I strongly believe that some people are just by nature more risk averse than others. Put them in the identical situation, if that were ever possible, and you will see some people take a risk and others not, systematically. I believe that. I don't know it, but my intuition would say that that's true. But your situation matters a lot, and probably a lot more than is usually assumed in economic theory, because it doesn't really have a good way of including the situation in its models. It's all overridden by psychology. So what's assumed to be the big effect, the dominant effect, is psychology.
And then you lose the information about circumstance and situation.
So the crucial thing missing from conventional analysis is context, the context in which the decisions are being made. But isn't that context considered in, you know, much of the modern literature?
Well, I mean, that's where we get back to a large, diverse field, you know.
I think you can't really say economists do this or they do that
because there are thousands who do very, very different things, and, you know, some of them think about nothing but context. It's really the formal theories where I think we can improve things. And we've moved away, in a sense, I mean, economics has moved in emphasis from these formal theories to much more observational work. You know, experiments became a big thing, then there's behavioral economics. And I think this is related, that somehow the formal theory only gets you so far
because in some important cases
it's just not informative
and then at some point people just dropped it
and said well this doesn't really help us much
so let's just go and basically collect observations
we don't even want a theory
I think behavioral economics is sort of on that end of the spectrum,
and almost a rejection of theory that just goes and says,
let's just collect data and summarize them in patterns that we think we find.
And of course, the problem is that many of these patterns don't reproduce
and people got too excited about them and there's much too much of it
and all these biases and so on.
Priming.
But other bits, behavioral patterns that are observed in behavioral economics, I think are real and may have explanations in terms of ergodicity economics.
So that's how I see the connection, right?
You have sort of very classical theory.
It's limited because it somewhere has this ergodicity error sitting in it,
then you get a move towards more observational work
that goes a little bit off the track
because people get too excited about it
and it becomes a bit of a monster.
And then you have ergodicity economics
that can explain more of the observations in a way that is somehow more similar to the classical theory.
So in the sense that these are formal mathematical models that you can analyze, you can even solve them analytically in some cases.
And you see some of the biases emerging, right?
So, I mean, we have these papers on cooperation,
why people might want to cooperate.
You can, of course, you can explain this in terms of psychology.
People just like working with others, right?
But you can also ask, well, why do they like working with others?
Is there some evolutionary benefit of doing that?
And we can see such benefits maybe at a more fundamental level
than the literature is focused on at the moment.
Mm-hmm. How do you think about framing effects, mental accounting, and status quo bias? Do they pose a challenge to ergodicity economics? Actually, let me, I mean, there's a way to reframe that question: what experimental evidence poses the biggest challenge to ergodicity economics?
It depends on what you mean by ergodicity economics. So I think, you know, what we've done so far is really using trivial null models and asking ourselves what sort of patterns we get.
Do we get preference reversal?
Yes, we do.
Do we get risk aversion?
Yes, we do.
So that's very nice.
But these are ridiculous models, right? We are assuming infinite
time horizons, and we assume that we know the dynamics perfectly. So we stick all these
unrealistic assumptions into our models to see how far they get us. And what we see is that they get
us really surprisingly far. But that doesn't change the fact that the models are ridiculous
right? They are just very, very simple. So I would just expect these very simple economic models, like geometric Brownian motion, to make wrong predictions in, you know, a large regime. I mean, think about, so what we say: we are often considering a simple temporal wealth maximization criterion, right? We say people just act so as to optimize their wealth in the long run. That may make absolutely no sense if you're coming to
the end of your life and you are thinking about what to do with your
wealth you know do you pass it on to your children?
That doesn't maximize your wealth.
And then, of course, you can try to rescue this and say,
oh, maybe it's some genetic whatever, but maybe it's not, you know.
Or what if you're very rich?
You just, you know, I don't know, you invented something
and now you're a billionaire.
Well, you don't need any more money.
So maybe you just start giving that
away because that feels like more fun. And I think that's where the model of just maximizing wealth forever and ever is completely silly. So there will be lots of cases where these simple models are just nonsense. They just don't apply. And we see genuine philanthropy in the world, right?
It's clearly an observation.
And this is not people maximizing their dollar wealth.
What do you make of the criticism leveled by Jason Doctor, Peter Wakker, and Tong Wang, the economists who replied to your Nature article,
when they said that it's inappropriate or incorrect
to apply static expected utility theory to a dynamic context.
It was never intended to apply to dynamic contexts.
How would you respond to that?
I would say that I don't know what a non-dynamic context is.
So it means there is a theory that exists, expected utility theory,
that is made for an atemporal world, where we don't have time, that is not temporally extended; an ensemble of possible universes.
You know, and so this is sort of,
that's sort of the critique, I guess,
to say, well, these theories are made for a context
that doesn't reflect reality
because it doesn't include time.
This is one answer to that question.
Another answer is, to be honest,
I don't really mind so much about,
I'm not so interested in the performance
of expected utility theory.
It's just a dominant model out there.
So it makes sense to look at what it does
and what it doesn't do.
If then someone comes along and says,
I don't like this model either
because it's not meant to solve these problems.
Well, I mean, it was meant to solve these problems, clearly.
That's why it was introduced.
But if it, for some reason,
is just structurally
unable to solve these problems, and I would say I agree, it's structurally unable to solve those problems. That's our point. The positive point that we make, of course, is that
we propose a starting point for addressing those problems dynamically. And I'm really not an expert in the
alternative models that try to incorporate these multi-period games in utility theory, but
from everything we've seen, from everything my group has seen so far, those models just become very unwieldy, so that they're basically not usable in practice. And our models are usable in practice. They're just very simple. So there may be other models out there,
but I don't think they've been used much in practice
because it just doesn't work.
You just hit computational limits.
If you actually try to practically use them,
you run into trouble.
That's what I'm hearing,
so I haven't really worked on this,
but that's what my colleagues tell me.
I mean, we've taken all these criticisms on board. You know, we're designing experiments with the group in Copenhagen, and we're actually collecting any kind of criticisms we can find to see how we can tweak the experiments, to see, you know, whether there are other models out there that do similarly well as ergodicity economics, and where exactly ergodicity economics fails. You know, it's the question: why is this wrong?
Yes. So we're trying, you know, we're trying to gather as much information as we can from critiques like that.
Why was the Copenhagen experiment so significant?
Because this was, I guess, an important moment for ergodicity economics.
Yeah, and I've thought about it a lot.
So I can tell you a little bit.
Okay, I'll tell you a little bit about the history, how this came about.
And it starts with the 2016 paper with Murray Gell-Mann. In that paper, we address the question of how you can evaluate gambles by using dynamics, by making assumptions about wealth dynamics. And we play through these two
simple example dynamics. One is additive and one is multiplicative. So, you know, very simply, a multiplicative wealth dynamic would be something where you toss a coin, and if it's heads, you win some percentage of your current wealth, and if it's tails, you lose some percentage of your current wealth. An additive dynamic is where you toss a coin, and if it's heads, you win some fixed dollar amount, and if it's tails, you lose some fixed dollar amount. And we go through how one might evaluate gambles depending on whether one's wealth is subject to one or the other dynamic. And in the additive dynamic, you get risk neutrality in certain limits. And in the other dynamic, you get logarithmic risk aversion. So you get the equivalent of logarithmic utility theory. So we wrote this paper, and you can
then later generalize the dynamic and do more with this, but we put those two specific dynamics in there. And this group from Copenhagen picked up the paper and said, well, this sounds like it can be tested in an experiment. We can just present someone with a sequence of multiplicative gambles and let that person make choices. And from the choices, we infer what utility function the person has. And then we change the dynamics and we test the same person. We change the dynamics to additive, and we see whether the risk preferences of that person change when we change the dynamic.
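The two dynamics can be simulated in a few lines (a sketch, not the experiment itself; the coin-toss parameters, +50%/−40% and +$5/−$4, are illustrative assumptions, not the Copenhagen group's actual values). The multiplicative case shows why the dynamic matters: the expectation value grows while a single trajectory decays.

```python
import numpy as np

rng = np.random.default_rng(42)
n_steps = 10_000

# Multiplicative dynamic: wealth is multiplied by 1.5 (heads) or 0.6 (tails).
# The expected change per toss is +5%, but the time-average growth rate,
# the mean log factor 0.5*ln(1.5) + 0.5*ln(0.6), is negative: a single
# trajectory decays even though the ensemble average grows.
factors = rng.choice([1.5, 0.6], size=n_steps)
time_avg_log_growth = np.log(factors).mean()

# Additive dynamic: wealth changes by +$5 (heads) or -$4 (tails).
# Here time average and expectation value agree -- the increments are ergodic.
increments = rng.choice([5.0, -4.0], size=n_steps)
mean_increment = increments.mean()

print(time_avg_log_growth, mean_increment)
```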
This is something that utility theory can't handle really or it's not the perspective of utility theory. In utility theory, you have a utility function. So you will
evaluate any gamble that is put before you according to this utility function. And the
dynamic is not part of the model. It doesn't exist in utility theory. In ergodicity economics, you behave differently depending on which dynamic you're facing. So now you have an experiment that, at least in principle, can discriminate between these two theories. So it can say, well, you take the subject, day one, I give them additive dynamics, day two, I give them multiplicative dynamics. And I can infer on both days a utility function. And if that utility function is roughly the same, then we would say, well, utility theory does a pretty good job here. And if we see that the utility function changes as predicted by ergodicity economics, then we would say, hmm, ergodicity economics does a better job here.
And so that's what these people did. And actually, they approached me before they put the experiment together, and they said, we have this, we would like to do this. And at the time, I said, well, I really don't think this will work, you know, because my intuition was that life in general is multiplicative. I mean, sort of by definition, life is that which self-reproduces, so it's a multiplicative process. So, you know, the number of offspring grows, the number of coronaviruses grows exponentially,
anything in nature grows or decays multiplicatively or exponentially.
And I thought that this evolutionary fact would be so deeply ingrained in our psychology
that we basically all act according to logarithmic utility functions.
So I had already made this sort of dynamic argument in my head
and thought what's the most important dynamic that living things face?
And somehow I thought it will just be dominated by this evolutionary argument for multiplicativity.
So I thought that if you try this experiment, what you'll find is that irrespective of the wealth dynamic, people will just behave multiplicatively because they won't learn fast enough.
We've had billions of years to learn multiplicativity.
So if I stick you in a scanner for an hour and show you some additive dynamics, you won't catch on.
You won't realize that the dynamics have changed
and you'll just behave the same. So my prediction for the experiment was: you will find no signal. And I said, well, because you'll find no signal, I'd really prefer if you didn't do the experiment, because people will misinterpret this. I still think there's a lot of merit in ergodicity economics, and I think an experiment that just shows that there's no signal to be found would be kind of bad for a young theory. So it was sort of an anti, I hadn't learned Murray's lesson yet. Or, more charitably, I thought it was too soon. I wanted to develop this framework a little bit more, to maybe think of situations where you can see a difference. So they went away and didn't speak to me again, and secretly did the experiment, and then came back and said, we've done the experiment. And I said, oh, no, you've
done the experiment. Okay, so what happened? And what happened was that they found an extremely
strong signal. So you can see very, very clearly that people change utility functions depending
on the dynamic. And to me, this was really, really surprising for many, many reasons.
You know, I also thought, I mean, I'm sort of joking about the, I didn't want a negative result. I just thought the experiment couldn't possibly find a signal, because I didn't really believe in these lab experiments where you, you know, you let people play a game, but that's not their real wealth. So why would they care about wealth within the game
rather than just caring about the wealth they have outside the game?
And wouldn't that completely mess up the signal?
So I had all these concerns that you just wouldn't find a signal.
But these guys are very good.
They're neuroscientists.
They know how to design these experiments.
They know all the kind of weird effects that can happen.
They know how to get around them.
And they're very careful in their data analysis.
So anyway, they found a very strong,
they found very strong experimental evidence
for the significance of ergodicity economics in their experiment, in their setup.
I mean, another person who went down the ergodicity route is Brian Arthur, right? So his work on these Pólya urns, I don't know if you're familiar with that, uses ergodicity breaking to explain phenomena in economics that are
otherwise maybe difficult to explain.
So it's very similar in flavor to what we're doing, but he has a specific example. His key example is a technology capturing market share. So let's say you have a few competing technologies, and, you know, one of them emerges as dominant in the market, with 90% market share or something like that.
then you can ask yourself,
does this mean that this technology is much better than the others?
And classically, that's sort of the first guess, right?
You would just say, well, it's the best product.
That's why people are buying it.
And Brian used this Pólya urn model as a mathematical example where there is no difference between products, but one product may end up dominant. Even if it's a worse product, it can still dominate. So the Pólya urn is the following: you imagine an urn,
and you put two balls in that urn, a red one and a white one.
And then you stick your hand in and you pull one out at random.
And then you put, so let's say you pull out the red ball.
And if you've pulled out the red ball, you put two red balls back in the urn.
And then again, you stick your hand in and you pull out one ball at random.
And if it's again red, you put two red balls in.
If it's white, you put two white balls in.
And so as you keep doing this, the fraction of red balls in the urn evolves to something.
And the curious thing here is that it stabilizes. This fraction over time converges to
a value, but the value it converges to is a uniformly random variable in the interval 0 to 1.
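The urn process described here is easy to simulate (a minimal Python sketch; the run lengths and repeat counts are arbitrary choices): each run settles near a stable fraction, but the limit differs from run to run and is roughly uniform on (0, 1).

```python
import numpy as np

rng = np.random.default_rng(0)

def polya_fraction(n_draws, rng):
    # Start with one red and one white ball. Each draw, pull a ball at
    # random and put it back together with one extra ball of the same
    # colour. Return the final fraction of red balls.
    red, white = 1, 1
    for _ in range(n_draws):
        if rng.random() < red / (red + white):
            red += 1
        else:
            white += 1
    return red / (red + white)

# Each run converges to a stable fraction, but the limit is itself
# (approximately) uniform on (0, 1): the expectation value is 1/2 while
# the time average differs from run to run -- ergodicity breaking.
limits = np.array([polya_fraction(2_000, rng) for _ in range(500)])
print(limits.mean(), limits.std())
```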
And he used this as an analogy for the evolution of market share among competing technologies.
So you can do this with many different colors of balls in your urn
and those may be the different technologies that are competing.
And for no reason whatsoever, if you play this game once,
some technology ends up with 87% market share,
and if you start all over again,
that technology may have 1% market share
because there's actually no difference between them,
but they can still dominate in a stable way.
And so this is a kind of, this is ergodicity breaking, right? The time average of the fraction of red balls in the urn is one thing, let's say 87 percent, but the expectation value is a half, I guess, for any color in the urn. And the time average is different every time you play the game. So it's the literal same definition of ergodicity that we use, and you can use it in this economic context.
And it reminds me a little bit of George Soros, of course, right, with his reflexivity ideas and so on. So that simply the fact that you have chosen some ball makes that ball more dominant. It had nothing to do with the inherent properties of this ball.
Besides ergodicity economics,
what do you think is the most promising branch
of non-mainstream economics?
I mean, promising in what way?
Maybe...
Exciting?
Yeah, maybe exciting or important.
So, or urgent, or something like that. I mean, I'm very concerned about, you know, resource limitations and climate change. And, you know, those are issues that we have to really think about.
And I think it's something where ergodicity economics puts you in a reasonable mindset on one level, but not on another level. So when you start doing ergodicity economics, the first thing you write down is a t-to-infinity limit, so, you know, what happens in the long run. The moment you start thinking about what happens in the long run, you have to start thinking about sustainability, right? That's sort of clear, I think. But then the next step in ergodicity economics is often this assumption that people just try to optimize for growth.
Now, that seems to be a fairly good description.
It does seem to be what people are doing, but it's not necessarily a good thing, right?
So that is something that I find quite disconcerting: from what we're doing, it looks like there's a lot of growth maximization going on, and from what we're seeing on a planetary scale, we may have to move away from that.
And anything that goes in that direction, I think, is very urgent and very important. And I hope that ergodicity economics can contribute in some way to that debate.
What does it mean to be rational in light of ergodicity economics?
Well, that's another one of these terms that has been sort of defined in too many different ways in economics. So I'd probably stay away from it. But if you want to introduce it in the context of this theory,
in the simplest case, it's just optimizing growth over time,
optimizing wealth growth over time,
whereas in expected utility theory,
it is optimizing expected utility of wealth over the ensemble.
So those are the two different definitions of rationality
that you could attach to those two frameworks of thinking.
Finally, Ole, I want to invite you to speculate with me on political philosophy and culture. So, two questions. Does ergodic theory lend itself to a particular political philosophy? If we accept ergodic theory, for example, should we lean more towards Burkean conservatism?
So I think no. I mean, my answer to the first question is no. I think it's apolitical. What may make it political is that existing theory is one-sided,
and we're adding the other side to the reasoning.
So maybe take this example of the cooperation story
where you can show that under certain fairly general conditions,
simple pooling and sharing of resources, repeated pooling and sharing of resources,
leads to faster growth for the individual. So it's something that is favored by evolution. And this is a route into complexity theory. You don't get complex
anything without cooperation. So you need to build aggregates of something in
order to get complexity. So imagine a bunch of cells floating around in some primordial soup.
They need to start forming cooperatives.
They need to start forming pairs or larger agglomerations
in order to become organisms in the end.
So there has to be some sort of a benefit
for these things to stick together
and start building function.
And so you can make one argument
along the lines of resource sharing.
So this now says,
if you look at expected wealth, then pooling and sharing your resources has no benefit, and that makes perfect sense. It can't have a benefit, because you've already averaged over the ensemble. It's as if, by taking the ensemble average, you assume that all possible
cooperation has already happened. And then if you compare it to actually including cooperation in
your model, then of course nothing changes. So there's no benefit of doing it. If you consider
the time average of an individual entity that may be affected by fluctuations and risk and
randomness and so on, and now you let this individual entity cooperate with another entity,
you can reduce the fluctuations, you can reduce all those nonlinear effects,
and you can boost time average growth for this individual entity.
So now you have something where ergodicity economics tells you that
there is a fundamental benefit to cooperation.
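A toy simulation makes the point (a sketch with assumed parameters, +50%/−40% multiplicative noise; not the actual model from the cooperation papers): pooling and equal sharing after every round raises the time-average growth rate of each partner, even though it leaves the expectation value unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps = 5_000

# Two entities with independent multiplicative noise: each round, wealth
# is multiplied by 1.5 or 0.6 with equal probability.
f1 = rng.choice([1.5, 0.6], size=n_steps)
f2 = rng.choice([1.5, 0.6], size=n_steps)

# Alone: time-average log growth rate of entity 1.
solo_growth = np.log(f1).mean()

# Cooperating: pool wealth after every round and split it equally, so each
# partner's effective growth factor is the average of the two draws. This
# smooths the fluctuations and lifts the time-average growth rate.
coop_growth = np.log((f1 + f2) / 2).mean()

print(solo_growth, coop_growth)  # cooperation gives the higher growth rate
```

Note the role of diversity mentioned below: the benefit comes from the two entities experiencing independent draws; if `f2` were identical to `f1`, pooling would change nothing.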
Some people would now take this and run with it and say,
oh, you see, therefore we should all pool our resources
and live under communism, right?
And that would be a political interpretation.
But I don't think that interpretation is valid.
And if you go to the next step in the analysis,
then you see that the reason that cooperation or one precondition for cooperation to be beneficial
over time is diversity. So you need these individuals that cooperate to experience
different strands of randomness. And if by introducing cooperation, you make everyone the same,
then you lose the diversity and you lose the benefit of cooperation. So it becomes a much
more involved story. And the simple political messages out of this, I think don't exist.
You will have to really continue thinking hard whether, you know, the form of cooperation that you may introduce in order to capture its benefits is worth it.
Does it, you know, does it cost too much?
Are the costs of the structures that you're building in order to enable this cooperation just too high, so that they outweigh the benefits? Or are we losing diversity because we are building these gigantic, I don't know, you know, cooperatives, whatever they may be? So I think the answers are always really nuanced, and I don't think there's a simple political message that comes out of this, which I think is great. And it's something that I noticed, I noticed that people
from, you know, if you want to split politics into left right spectrum, I noticed that people from the left and from the right of the spectrum look at ergodicity economics and say, this makes
sense to me. And I think that's good. Because, you know, it's supposed to be a scientific theory. It's not really... It would be odd if only people on the left believed in relativity
or in electrodynamics.
I don't know.
Electromagnetism is a right-wing concept.
That just doesn't make any sense.
So if you want a scientific theory of anything,
its credibility shouldn't correlate with political views.
Is there something in our culture that causes us to overlook the importance of time
and see everything through a built-in ergodic lens?
Yeah, I think that goes back to the beginning, where we started, you know, when we were thinking about sort of ancient roots. You were asking why it took so long to even develop a theory of randomness. And in a way, what we were dodging up until that time was the issue of time, right? We had this ancient image of, yes, time down here on Earth, but then the timeless celestial world. And somehow we are always very, very uncomfortable acknowledging the reality of time, because, you know, we all know where this leads, right?
It's our demise.
So, yeah, sure, we have a massive psychological hurdle to overcome if we want to really include time in our thinking,
in our theories or in our everyday thinking.
Ole Peters, thank you so much for your time.
Thank you very much.
Thank you so much for listening. I hope you enjoyed that conversation as much as I did. For links, show notes, transcripts, everything that was discussed, head to thejspod.org. The audio engineer for the Jolly Swagman Podcast is Lawrence Moorfield. Our video editor is Alf Eddie. I'm Joe Walker. Until next time, thank you for listening. Ciao.