Modern Wisdom - #512 - Will MacAskill - How Long Could Humanity Continue For?
Episode Date: August 13, 2022

Will MacAskill is a philosopher, ethicist, and one of the originators of the Effective Altruism movement. Humans understand that long term thinking is a good idea, that we need to provide a good place... for future generations to live. We try to leave the world better than when we arrived for this very reason. But what about the world in one hundred thousand years? Or 8 billion? If there's trillions of human lives still to come, how should that change the way we act right now? Expect to learn why we're living through a particularly crucial time in the history of the future, the dangers of locking in any set of values, how to avoid the future being ruled by a malevolent dictator, whether the world has too many or too few people on it, how likely a global civilisational collapse is, why technological stagnation is a death sentence and much more...

Sponsors:
Get a Free Sample Pack of all LMNT Flavours at https://www.drinklmnt.com/modernwisdom (discount automatically applied)
Get 20% discount on the highest quality CBD Products from Pure Sport at https://bit.ly/cbdwisdom (use code: MW20)
Get 5 Free Travel Packs, Free Liquid Vitamin D and Free Shipping from Athletic Greens at https://athleticgreens.com/modernwisdom (discount automatically applied)

Extra Stuff:
Buy What We Owe The Future - https://amzn.to/3PDqghm
Check out Effective Altruism - https://www.effectivealtruism.org/
Get my free Reading List of 100 books to read before you die → https://chriswillx.com/books/
To support me on Patreon (thank you): https://www.patreon.com/modernwisdom

Get in touch:
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
YouTube: https://www.youtube.com/modernwisdompodcast
Email: https://chriswillx.com/contact/

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Hello, everybody. Welcome back to the show. My guest today is Will MacAskill. He's a philosopher,
ethicist, and one of the originators of the effective altruism movement. Humans understand that
long-term thinking is a good idea, that we need to provide a good place for future generations to live.
We try to leave the world better than when we arrived for this very reason. But what about the
world in a hundred thousand years, or 8 billion?
If there's trillions of human lives still to come, how should that change the way that
we act right now?
Expect to learn why we're living through a particularly crucial time in the history
of the future, the dangers of locking in any set of values, how to avoid the future being
ruled by a malevolent dictator, whether the world has too many or too few people on it, how likely
a global civilizational collapse is, why technological stagnation is a death sentence, and much more.
I think that this is a very interesting conversation, considering the fact that we might have a
galaxy colonizing civilization on our hands, and the fact that we see things mostly on human
life span timelines.
At the very most, maybe a hundred years out,
we're not thinking about the huge long-termism scales
that Will's talking about here.
And it's a very interesting philosophical question.
What sacrifices should we make in the moment
for successes in the future?
I really like it.
I think you're gonna enjoy this one.
But now, ladies and gentlemen, please welcome Will MacAskill.
Will MacAskill, welcome to the show. Great, thanks for having me on.
Given the fact that we're seeing James Webb telescope images all over the place at the moment,
it kind of seems like a smart time to be thinking about far-flung futures and potentials for
civilization and stuff like that. Absolutely. James Webb is making very vivid and in high resolution, an incredibly important
fact, which is just that we are at the moment both very small in the universe and also very
early in it. So almost all of the universe's development
is still to come ahead of us.
That's wild to think about the fact,
especially on our time scales, right?
You know, you think about 20 years
as being a very long time in a human lifespan,
and then you start to scale stuff up to continents,
to the size of a planet,
to the size of a solar system,
or a galaxy, or the universe, and
it puts things into perspective.
Yeah, well, we're used to long-term thinking being on the order of a couple of decades
or maybe a century at most, but really that's being very myopic.
I mean, how long has history gone on for?
Well, that's a couple of thousand years.
Homo sapiens has been around for a few hundred thousand years.
The earth formed four billion years ago.
And the Big Bang was a little under 14 billion years ago.
And if we don't go extinct in the near future, which we might do and we might cause our
own extinction, then we are at the very beginning of history. Future generations will see us as the ancients living in the distant past.
And to see that, we can just use some kind of comparisons.
So a typical mammal species lives around a million years.
We've been going for 300,000 years.
That would put 700,000 years to come.
So already, by that metric alone, humanity's life expectancy is very large indeed.
But we're not a typical species; we can do a lot of things that other mammals can't. That creates grave risks, such as engineered pathogens or AI, that could bring about our own demise.
But it also means that if we survive those challenges,
then we could last much longer
again: the Earth will remain habitable for hundreds of millions of years. And if, one day, we take to the stars, well, the sun itself will only stop burning in about eight billion years, and the last stars will still be shining in hundreds of trillions of years. So on any of these measures, humanity's life expectancy is truly vast. If you give just a 1% chance to us spreading to the stars and lasting as long as the stars shine, well, we've got a life expectancy of a trillion years.
But even if we stay on earth, the life expectancy is still many tens of
millions of years. And that just means that when we look to the future and when we think about
events that might occur in our lifetime, that could impact that future, that could change humanity's
course, well, you know, we should just boggle at the stakes that are involved.
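To make those numbers a bit more concrete, here's a minimal back-of-the-envelope sketch in Python. The mammal-species figures are the ones quoted above; the roughly hundred-trillion-year span for the last stars is an assumption used purely for illustration.

```python
# Rough sketch of the "life expectancy" arithmetic discussed above.
# All figures are approximate; the stellar-era length is an assumption.

# Typical mammal species comparison
typical_species_lifespan = 1_000_000   # years a typical mammal species lasts
years_so_far = 300_000                 # Homo sapiens so far
print(typical_species_lifespan - years_so_far)   # 700000 years still to come on that metric

# Expected lifespan if there's a 1% chance we spread to the stars
p_reach_stars = 0.01
stellar_era_years = 1e14               # ~100 trillion years (assumed) until the last stars fade
print(f"{p_reach_stars * stellar_era_years:.0e}")  # 1e+12, i.e. about a trillion years
```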
And when you talk about long-termism, that is looking at the future as something which
needs to be taken seriously.
There is this sort of grand potential for human flourishing, for human life, for all of
the good stuff that could occur for a very, very long time over a very, very wide distance.
And we need to take it seriously.
Exactly.
Long-termism is about taking the interest of future generations seriously and appreciating
just how big that future could be if we play our cards right, and how good it could be.
And then from there to think, okay, what are the things that could be happening in our lifetime
like engineered pandemics, like greater-than-human-level artificial intelligence, like World War III, what are the events that really could have, you know, civilizational trajectory-level impacts? And then finally,
taking action, trying to figure out, okay, what can we do to ensure that we navigate these
challenges and try to bring about, you know, a wonderful future for our grandchildren and
their grandchildren and their grandchildren and so on.
It's nice to hear that you've been focused so much on long-termism, because I fell in love with The Precipice by Toby Ord, who I know you work with.
He's a phenomenal man. But anybody that reads that book, especially the beginning, will know the premise that he talks about, right? And the premise is he believes that humanity
is at a very particularly unique, dangerous inflection point
in between sort of prehistory
and our civilizational inheritance
that we could continue on and be lovely and flourishing with.
And in that first chapter, he talks about the fact
that the huge, vast inheritance potential that we have downstream from us is crazily big.
And yet almost all of the things that we do now are focused on such a narrow time span. Even the most long-term of long-termism projections don't get into the hundreds of thousands or millions of years, like you're talking about.
Exactly. We focus on this tiny window. And, you know, in many cases, that makes sense. You can't control what people in the year 3000 have for breakfast.
But surprisingly, there are things that we can affect now that do impact people in the year three thousands, where number one among those is what Toby talks about at length, and where you can see The Precipice and What We Owe the Future, my book, as kind of complements to each other.
Absolutely.
Both ploughing a very similar furrow.
Yeah, one of the things that Toby talks about at length is ways in which we could cause our own extinction.
So obviously, asteroids, supervolcanoes, there are natural risks that could wipe us all
out.
Thankfully, we've actually done a pretty good job, at least, of navigating asteroids. The Spaceguard program that NASA put together spent only on the order of about $5 million per year, but has basically eliminated the risk from asteroids. We now know it's very, very unlikely that we'll get hit by some kind of dinosaur killer in the next few centuries. But we are
creating new risks. Nuclear weapons created a whole new level of destructive power, and the next generation of weapons of mass destruction could be considerably worse again. Engineered pathogens could give us the ability to create new viruses that could kill, you know, hundreds of millions of people, billions of people, perhaps everyone on the planet. And if a large bioweapons program starts up, focused on such things, and we have seen large bioweapons programs in the past, that could be fairly dangerous indeed. And that's exactly the sort of thing that we want to ensure goes to kind of zero as a risk.
So how are our actions now important?
Is it about investing in trajectories
that people move down in the future?
What difference does what we do in 2022 have in the year 20,022?
Yeah, so I think there are two ways
of impacting the long-term future.
So one is ensuring we have a future at all, such as by reducing the risk of extinction
or reducing the chance of civilization just collapsing and then never recovering.
A second way is by making the future better, assuming we do survive, improving kind of future
civilizations' quality of life.
And on the first of these, well, one thing we can do is carefully navigate new technologies.
That can mean accelerating defensive technology, beneficial technology, and it can mean putting
in policy such that we either slow offensive technology or just choose not to build it.
So on accelerating defensive technology, one thing, for example, is far-UVC lighting,
where this is a certain small spectrum of light. If you implant the light into a light bulb and it kind of irradiates a room, just as a normal light bulb does, then it also kills off the pathogens
in that room. And this is a very early stage of research, but seems, you know, it's quite exciting.
I think, with some foundations I advise, we're going to be funding it to a significant degree,
where if this checks out, if it's sufficiently efficacious, if it's sufficiently safe,
then we could launch a campaign to ensure that this is installed in all light bulbs
all around the world.
We would have made it very, very hard, near impossible, to have another pandemic. And along the way, we would have eradicated basically all respiratory disease. And that's really exciting.
That's something that we can do by taking these risks seriously
to say, look, we can have an enormous impact, not just on the present generation,
but on the course of all future generations to come, creating a safer world for the next generation.
Is this a particularly crucial time, do you think, in the history of the future?
Yeah, I think there's good arguments for thinking that we're certainly a very unusual time.
I don't want to make the claim that we're necessarily at the very most important time,
perhaps the next century will be even more important.
And I think there were some hugely crucial times in the past as well.
But we're at a very unusual time compared to both history and the future.
One reason for this is just that the world is so interconnected.
So for most of human history, or a large chunk of human history,
there just wasn't a global connection.
There were people in the Americas, people in Asia and Africa,
people in Australia, and they just didn't know each other at all.
Even within the landmass of Eurasia, you know, in the early, the first couple of centuries
AD, the Han Dynasty and the Roman Empire, they comprised about 30% of the world's population each, but they barely knew of each other. They were like, you know, tales that one would tell of a distant civilization.
Whereas now, we're global and we're interconnected.
And that means that say you have a certain message
that you want to get out there.
In principle, it can get out to the entire world
or more darkly if you want to, you know,
achieve conquest and domination.
You can potentially do so over the entire world.
And in the future, again, I mean, we were talking about galactic scale thinking and
the James Webb telescope. In the future, we'll be disconnected again. If one day we take to the stars and we're in different solar systems, then even to our closest solar system,
there and back communication would take eight years. And at some point in the very distant future, well, actually different galactic groups will be
disconnected, such that you could never communicate between one and another.
Although, mind you, that's very far away indeed, about 150 billion years.
So hopefully we're still going by then, but no guarantees.
So that's one factor: we're just so interconnected, and that's one way in which the present time is so unusual and seemingly important. Because it means that we can have this battle of ideas and competition between values, and if one value system took power, then it would take power over everything, and that could, you know, potentially be permanent. A second way in which the present is so unusual is just how fast technological
progress is happening.
So for almost all of human history, when we were hunter-gatherers, economic growth, which is one measure of technological progress, was basically close to zero: very, very slow accumulation of better stone tools, spear-throwers, things like that.
The agricultural revolution meant that sped up a little bit; we developed farming, better farming techniques, but we were still growing at about 1% per year.
Over the last couple of centuries,
we've been growing more like 2% or 3% per year
in the frontier economies.
And most of that growth is driven by technological advancement that enables us both to have more people and for those people to have a better material quality of life. Now, how long can we keep that going for? Sometimes you get this idea
of oh well future science is just boundless, we can never come to the end of it. But we've only been
going properly for a few hundred years since the scientific revolution. And it seems hard to believe that, if we keep going at this pace, we won't at some point have figured out pretty much everything there is to figure out.
But we can think about this economically as well. If we keep growing at 2% per year, then after 10,000 years, because of the power of compound growth, the world economy, the total economy, would be 10 to the power of 89 times the size of current civilization. That's a very big number, and to put it in context, there are only 10 to the power of 67 atoms within 10,000 light years.
So we would need to have an economy,
a trillion times the size of the current world economy,
for every atom within reachable distance.
And that's just extremely implausible.
I really just don't think that's possible.
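As a rough sanity check of that compounding claim, here's a minimal sketch. At exactly 2% the exponent comes out nearer 86 than the 89 quoted, and it shifts with small changes in the assumed growth rate, but either way it vastly exceeds the roughly 10^67 atoms within 10,000 light years.

```python
import math

growth_rate = 0.02   # 2% per year, as in the conversation
years = 10_000

# Size of the world economy relative to today, expressed as a power of ten
exponent = years * math.log10(1 + growth_rate)
print(round(exponent))                  # ~86, i.e. roughly 10^86 times today's economy

atoms_exponent = 67                     # ~10^67 atoms within 10,000 light years
print(round(exponent - atoms_exponent)) # ~19 orders of magnitude of today's economy per atom
```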
And so that suggests we're at this period of rapid technological growth, of rapid technological advancement, that cannot continue. And that means we're moving through the tech tree at an unusually fast rate.
And that means there's a lot of change.
It also means that we're at a kind of unusually high-density period in terms of developing technology that could be very important: used for good, or very harmful. You know, used either to lead to civilizational collapse, to end civilization, or to allow
kind of certain ideologies to gain power and to influence how the course of the future
goes.
Early on in our history, we weren't moving sufficiently quickly to be able to develop
anything that
would be that much of a surprise because it was iterated at a much slower rate.
Potentially, or it seems like further into the future, not only are we going to have such
advanced technologies that any dangerous technologies can probably be mitigated at least a little
bit, but also, again, it's going to slow down.
You're going to have this sort of S-shaped curve, right? Like flat, then a hockey stick, then starting to flatten off again.
And at both of the relatively flat areas of that,
not much change, which means, therefore,
relatively low amount of risk.
I'm going to guess this links in with Nick Bostrom's ball
from the urn thing, right?
That there's just fewer balls that can be picked out
of the urn when the change isn't occurring so quickly.
Exactly.
So if we're thinking, if we're asking, is now an unusually important time?
Well, Nick Bostrom has this analogy of technological progress as like drawing balls from an urn, and if you pick a green ball, then it's a very good thing. If you pick a red ball, then it's a very, you know, bad thing. Maybe it's even catastrophic. And we're picking balls from the urn just very quickly.
I mean, I'm actually not sure, like, most balls are both green and red, depending on which way you
look at them. Most technologies can be used for good or ill. Fission gave us nuclear reactors. It also gave us the bomb. But, you know, we're picking balls out of this urn at a faster
rate than we did for almost all of humanity's history, and that we will do for almost all
of humanity's future, even if we don't go extinct in the next few centuries.
What are you talking about when you say trajectory change?
It's perfect. So we've talked so far about kind of safeguarding
civilization and ensuring that we just make it
through the next few centuries.
And that's been the kind of main focus of discussion
when it comes to existential risk.
But we also, we don't want to merely
make sure that we have a future at all.
We also want to make sure that that future is as good as possible.
We want to avoid, in effect, dystopian futures. We want to aim at a kind of future that is positive and flourishing. And so trajectory changes refer to efforts that you can make to make the future better in those scenarios where the future lasts a long time, where civilization lasts a long time. And how can you do that? Well, a number of ways, but I think
the most likely way is to influence the values that will guide the future, like the moral beliefs and
norms, where, you know, at the moment, we are used to living through periods of great moral change.
The gay rights movement emerges in the 70s, and it's only a few decades until gay marriage is legalized, and that's like rapid moral change compared to history. But that might change in the future. This fast moral change that we know of might end in the future.
Because too often, set moral worldviews or ideologies, in their nature, try to take power and try to lock themselves in. So we saw this with the rise of fascism, with the Nazis during World War II: Hitler gets into power, and the Night of the Long Knives quashes the ideological competition. Similarly with Stalin's purges: he gets into power, quashes the competition. Similarly with Pol Pot and the Khmer Rouge.
Because if you're an ideology and you want power,
then you want to ensure that that ideological
competition goes away.
And my worry is that this could happen with the world as a whole, where, you know, in
the 20th century, we already had two ideologies, fascism and Stalinism, that really were aiming
at global domination. And I think, you know, luckily we were not in a state where
the world came close to that. But it's not like that's a million miles away in terms of how history
could have gone. And so then when I looked to the next century, well, one major worry would be,
for example, there's an outbreak of a Third World War, which I think
is more likely than people would otherwise think. We're used to this long period of peace,
but that really could change, I think, something like one in three in our lifetimes. And the
result of that could be single world government, single world ideology, and depending on what
that ideology is like, it could be very bad indeed. You know, something like what you got in 1984 by George Orwell, or The Handmaid's Tale by Margaret Atwood.
And then finally, I think that with the development of new technology,
and in particular AI, that ideology could last, persist,
for a very long time indeed,
potentially just as long as civilization does.
So value lock-in isn't necessarily a bad thing as long as the values that are locked-in
are values that we would want over the long term.
Right, so if you, if it were the case that the values that got locked-in were the best
ones, the, you know, the ones that we would all come to given sufficient reflection
and 10,000 years to think and reason and empathize.
Machine extrapolated volition has been utilized correctly in all of that.
Yeah, exactly. Then that would be okay. However, we should appreciate that we today are very, very far away from having the right moral views. I mean, it would be this amazing coincidence if, for all of history, people had abominable moral views, supporting slave owning or patriarchy or atrocious views towards criminals or people of different sexual orientations or people of other nations, but we in the early 21st century, in Western countries, nailed it; we figured it all out. That would be very surprising indeed. And so what we want to ensure is that we don't end moral progress kind of too soon. And if anyone kind of came to power in the world and was like,
yes, I'm going to lock in my values now. I'm going to ensure that the things I want
persist forever, I think that would be very bad. I think there would be a loss of almost
all value we could achieve, because we need to keep morally progressing into the future.
That's what I was going to say. Is there an argument here to be made that optionality
or whatever the word would be, or the regular
change of a particular moral structure, should be something which is optimized for, that
even if you were to potentially get rid of a moral structure that was more optimal and
switch to one which is less optimal than that, the fact that you've baked in the ability to switch helps you to mitigate some of the dangers of having complete lock-in for the rest of time overall.
That's exactly right. We want to have a world where we can change our moral views over time, and in particular where we can change them over time in the light of good argument or empathy or just, like, moral considerations.
And there might be many points of lock-in in the future. So the design of the first world
government would be such a point of lock-in, first space settlement as well. Like you could imagine
there's a race dynamic and everyone's just trying to get out into space to claim those resources as fast as possible.
I actually think that, you know, a couple of examples of lock-in in the past were the colonial era, where you got this single kind of worldview suddenly spreading all over the world, a kind of Western Christian worldview.
Earlier times with the first world religions,
where again, you've got this kind of bubbling of ideas
then it compresses into a sanctified holy book
and then persists for thousands of years.
Or also actually just the rise of homo sapiens as well.
There used to be many homo species.
We used to have a diversity of human species.
And then there was only one, because one was a little bit more powerful, and, you know, predictably, that means the competition gets pushed to the side.
And so, what's your problem with the Happy Birthday song?
Thank you for asking me about that. That's not even in the book; What We Owe the Future does not discuss Happy Birthday.
So I use Happy Birthday as an example of bad lock-in, to illustrate the fact that
even the fact that something becomes universal does not mean that it's necessarily like the right solution
as it were, like the best state of affairs. So Happy Birthday is by far and away the most sung song in the world.
It's the song that's used to recognize someone's birthday
in at least many of the major languages.
And it's terrible.
It's really slow, so it's a little bit like a dirge. The emphasis, "happy birthday to you", sorry, the emphasis is not on the "you", which is what you would expect. It's on the "to", which, like, why? That positioning doesn't make sense.
But then it also has like an octave leap.
Like happy birthday, it's like,
and no one can sing it.
So everyone's,
because people are singing at different ranges. So this is meant to be like a communal song, you know, your family gets together. So you
want a song with like a really pretty small kind of melodic range. But instead it has this interval
and it means that like everyone just suddenly changes key at that point and then like now like half
your family are singing in one key and half your family are singing in the other key. And it's just a cacophony. And there's no reason at all that it couldn't be much better.
You can go on YouTube for people creating new versions of Happy Birthday. It sounds much better.
And why did that happen? Well, there didn't use to be a Happy Birthday song. In fact, the melody was originally for a different song, called Good Morning to All, I think. But in, I think it was the early days of radio,
gramophone, and so on, I think perhaps like that was what I call a moment of plasticity. So
little moments in history when things could go one way, they could go another way, and we can really have an influence
over what direction happens.
But then after that moment of plasticity,
there's a kind of ossification.
And so at that moment of plasticity,
perhaps any of these songs could have
become the one that gained the most popularity.
But once Happy Birthday is the most popular song, once it's known that that's what you do to sing Happy Birthday to acknowledge someone's birthday, well, then you're kind of locked in; it's very hard to switch from that, because if I start singing some different melody, then everyone's like, what is this? And in the case of Happy Birthday,
it's probably just, you know, there could be some government diktat that says,
okay, we're all gonna stop singing Happy Birthday
because that doesn't make any sense.
We're gonna sing this different melody instead.
And perhaps that would work.
It's not a sufficiently important issue that I think that would happen.
Some things like that have happened in the past.
So I think it was Sweden that used to drive on the left side of the road, but its neighbors drove on the right. And so they had one day where they just switched. They were like, okay, we're also going to drive on the right side of the road as well. I think
it was 1978. And they had this huge kind of government campaign about it. They had like songs about
it, a big song competition, the winner of which was called "Keep to the Right, Svensson." And it was successful. They actually managed to switch from being locked in to driving on the left side of the road to driving on the right side of the road. In the case of Happy Birthday, I don't expect that to happen.
But yeah, Happy Birthday illustrates how the fact that this song became so widely known, almost culturally universal, does not at all suggest that it's the very best thing, that it's the best way we could have sung Happy Birthday.
And I think Happy Birthday has an important lesson
about future dystopia, which is that
moral norms and moral memes can take over the entire world without necessarily being the best ones. And I think we're living at a moment of plasticity now with respect to the moral beliefs of our time, which may or may not persist into the future.
If we end up with this like single
world culture, whether that's through conquest or just through the kind of merging of different
ideas, and suddenly just everyone believes that X is right and Y is wrong, then what's
the pressure to change from that? I don't think there would be much pressure. And if those
views are wrong, if it's like the melody of Happy Birthday
and not the melody of some better song,
then that could be very bad.
And that would be a bad thing that persists
for a very long time.
What it shows is the power of culture
to be able to enforce norms.
A lot of the time when you think about the future
and potentially bad outcomes,
you think about the 1984 dictatorial bureaucratic, evil world
government in tall buildings telling people that they're supposed to do something, but
one of the most powerful enforcement mechanisms is social approval and disapproval and just grandfathered-in expectations about what you're supposed to do. And one of the biggest problems you can run up against is when the people that are in charge of the bureaucratic organization or the government or whatever, when they maybe even try to change something for the better
you get uprisings and revolutions, sometimes kind of like idiotic ones. But the point is
that culture is so important. We saw this, I use this example all the time, we saw this
with the word woke. So think about the fact that woke was originally used in rap songs, then it kind of got weaponized
and adopted by people on the left that wanted to identify someone that was kind of socially aware
and cared about social justice issues. And then very quickly, no one needed to mandate that
woke was going to be the sort of thing that you didn't want to be associated with. But all of the comedians and satirists and people online
managed to culturally enforce a norm
where woke became such a toxic term
that you didn't need to tell people not to use it.
No one wanted to be associated with it
because it was just such an uncool word.
But what's cool?
Where is it? Show me the cool mandate or the cool policy.
Doesn't exist.
Simply enforced by norms.
Yeah, I mean, there's just a huge amount of what humans do
is determined by these cultural attitudes,
just what's high status, what's cool, and what isn't.
And we can see this, so take conspicuous consumption,
the fact that people like to show off how rich they are. And that can be,
you know, across many different cultures that is used as a way to show like, you know, how
successful you've been. There are different ways of doing that, that are cool in different
certain, in different societies over time. So it could be buying fast cars, having expensive walks at watches, or if we go into the
past, having very nice fabrics or things like that. It could be owning slaves. So in the Roman Empire, the number of slaves you owned was a status symbol. Some Roman senators had thousands of slaves.
It could be philanthropy. So this is at least to some extent in the United States that engaging in
philanthropic activity is a way of demonstrating conspicuous consumption.
And which of these do we have? I think it's largely a cultural issue, very largely a cultural issue. And that really matters because whether the demonstration of conspicuous consumption, which is, I think, just human, again, a human universal,
whether that's done in a particular culture,
through philanthropy, through buying fast cars
or through slave owning,
makes a very big difference to the well-being of the world.
And I certainly know which I'd prefer.
And I think, yeah, social scientists are only really starting to appreciate the importance
of culture in the last few decades. It's the sort of thing that hasn't gotten enough
attention because, well, it's kind of ephemeral. It's like you can't quantify it or measure it as much as perhaps other things like laws or economic matters. Exactly.
But over the course of writing this book, just more and more, I got convinced that culture is
an enormous force. It's almost, it's generally culture that influences political change,
rather than the other way around. If you get a political change without cultural change, then that often doesn't go well. And in the book, in What We Owe the Future, I focus in particular
on the abolition of slavery, which, you know, before writing this book, I would have thought was just clearly kind of an economic matter, something that was inevitable as our technology improved. Slavery
was just no longer viable means of production. But I think I was wrong. Actually, I think
that the primary driver of the abolition of slavery throughout the world was a cultural
change, and that was actually based on people considering moral arguments and making changes on the basis of moral arguments.
And I think in the future we could have equally large changes that may or may not occur based on what moral arguments are present. I've got a fix for the Happy Birthday problem, by the way, which is... Hit me. So, we can't... I'm not strong enough in my mental capacities to change the actual tune,
but you can safeguard yourself from not being able to do the octave by starting the song
one key lower than you think that you need to. Everybody should do this. Everybody starts
belting out Happy Birthday pretty close to the upper bound of where they can go
in terms of melody.
No, no, no, no, no, no.
Bring it back, give yourself some headroom.
That's what you need.
And then when it comes to that,
when it comes to that key change,
you can nail it.
Yeah, yeah, exactly.
Exactly, you've got this sort of beautiful,
warm sound.
You heard it here first.
Life hacks, for everyone.
That's it.
If there's one thing that you take from this podcast.
Forget about the future generations.
Sing Happy Birthday in a baritone.
Happy Birthday.
That's you.
I'm telling you, that's it.
There we go.
You've improved my life.
Thank you.
I think it's really interesting to think about what cultures and stuff lock in over a longer
term, but presumably this means that we need to safeguard civilization from a bunch of
suboptimal futures.
And you've got three different ones: extinction, you've got global civilizational collapse, and you've got stagnation.
So starting with extinction, what's the, like, are we gonna go extinct?
Like, what's gonna happen?
Yeah, I think probably not.
So, in general, you know, it's hard to kill everybody, thankfully.
There are 8 billion people in a very diverse range of environments, with diverse societies. And thankfully there are very few people in the world that really want to kill everyone. So in the scenarios where that happens, something has to go really badly wrong. But the risk is not zero at all. So if we just consider pandemics, what's the risk of an engineered pandemic?
That is, a pandemic that's not from natural causes, but that is the result of design in a lab, where we already have the technology to improve the destructive power of viruses now.
And that's just getting better and better every year.
And so it's not very long until it will be quite widely accessible, this power to create viruses that could kill hundreds of millions of people, billions of people, maybe even more.
What extinction risk would I put on that? Maybe something like 0.5% this century. But the risk that there will be some engineered pandemic that would kill very large numbers of people is much higher. Maybe that's like 20% or something by the end of the century. And that's just
far too high, like far, far too high. Because there are things we can do. So I mentioned
far-UVC lighting. There's also early detection. So we could just be monitoring wastewater all around the world, scanning it for DNA, ignoring human DNA, and seeing, is there anything new
in here? Is there something we should be worried about? And if so, then we can act kind of
immediately. There's also just more advanced personal protective equipment. So masks that are just super protective,
not just like the cloth masks you get, but full head things that would ensure that if
you were a key worker, you would be guaranteed to avoid infection. That's something we could
be working on now as well. So yeah, this is just how we respond to this is contingent. It's
up to us. We can choose to get that
risk way down to zero, where it ought to be. What's your opinion on Nick Bostrom's vulnerable world hypothesis? So it is a hypothesis. So to explain,
the hypothesis is that there could be some technology in our future that gives the power to destroy the
world to basically everyone in the world. And if so, then it would seem like it would
be very likely that the world would end pretty soon. And he gives the analogy of, imagine
if it was as easy
to create, let's say, a Doomsday virus as it is
to just put sand in a microwave.
Then it just seems like we wouldn't last very long
because there's just so many actors
each making their own independent choices
that at some point, someone would do so.
I think it's very unlikely to be honest
that the future looks like that.
The main reason is that we just ban technologies
all the time.
So there are many technologies that we don't like.
So take human cloning or something.
We could clone humans now if we wanted to.
And we choose not to on ethical grounds, because it's taboo.
And that's kind of globally enforced.
In his essay on the vulnerable world hypothesis, Nick kind of asks, you know, if we were in this vulnerable world, would that mean that the only solution would be some very powerful surveillance state? I think, like, no; obviously that would be a really bad outcome too. And what we can do instead is just have strong international norms about what technologies we do allow to be developed and which we don't. So there are a couple of reasons for optimism here.
One is that humanity is at least somewhat good at recognizing risks and taking action
on the basis of them, and actually in general being quite cautious with respect to new technology. And then secondly, technology is often used for
defensive measures as well as offensive. And so, in general, I think, with the vulnerable world hypothesis, it's possible that it will occur in the future, but I think I'm a little more optimistic than perhaps Nick
might be.
Given the risk or potential future of an extreme surveillance state,
which would be one potential solution to try and constrain the degrees of freedom
that people can do fuckery with whatever it is that they've got,
can you see a potential human future where an effective long-termist civilization is basically incompatible with democracy?
I mean, you could.
So in What We Owe the Future, I talk a lot about why considering both the values side of things and the risk side of things is so important. Because, you know, if you're only focused on the extinction side of the spectrum, then you might think, okay, we need some undemocratic civilization that can monitor everyone's behavior so that no one can pose a risk to the future of civilization. And to be clear, Nick Bostrom doesn't believe this, but this is a kind of straw man view that you could come away with.
But then you've got this authoritarian state
that I think has lost most of the value that we could have had.
It's not just about making sure that the future is long,
it's also about making sure it's good.
And so, you know, is the ideal governance
of the distant future democracy?
I don't know, maybe we can do something much better.
Democracy would have been unheard of, you know, for most of, most cultures throughout history.
You know, perhaps there's something we haven't even thought of that, on reflection, we would think is an even better mode of governance.
But I think I'd be very worried about something that's more authoritarian, precisely for the reason that I think we could easily lose most of the value of the future as a result of that. The great, and actually quite fragile, thing about the liberal democracy that we have in places like the US and the UK is just that you've got a great vibrancy of debate and discussion, and therefore are able to kind of drive forward moral progress. People are able to, like, have moral campaigns. People are able to criticize the views of people in power. And when you reflect on human psychology, that's like a surprising thing. And the fact that it's actually quite rare in history should make you appreciate that it's really something I think should be treasured and we should fight to protect. And so, yeah, to anything that's like, oh, we need really strong kind of government in order to reduce this sort of extinction risk even further, I'm genuinely like, look, can we get 90% of the risk reduction by other means? And I think that often you can.
If you're concerned about extinction, presumably more people on the planet would spread
the risk more, would make complete extinction more difficult because the virus or the AI or the asteroid or the Supervolcano or whatever,
simply has got more work to do. And for every human that you add, there is a potential chance that they may survive, and there may be a few of them, and maybe they could repopulate.
What's your view on whether or not the world has too many or too few people in it? It's a great question.
And there are considerations on either side of the ledger.
So it's a very common kind of idea
that there are too many people, that there's resource depletion and climate change, and you shouldn't have kids, because they will contribute to climate change. And, you know, that is true: I contribute to climate change, and if I were to have kids, they would as well. But that's only looking at one aspect of what people do, because people do good things too. They innovate and they, you know, effect moral change. They contribute to infrastructure and pay taxes that benefit all of society.
I'd also say that, if someone has a sufficiently good life, just living is good for them as well,
and that's a sort of moral consideration
we should take into account.
But then yeah, you're right, actually, having more people.
I mean, so actually, yeah, so I'll go back a step.
For those reasons, I actually think that on balance,
we should have more people, rather than fewer.
The benefits from an additional person,
in particular via both technological and moral innovation,
as well as the benefits to them,
kind of outweigh the negative effects,
especially given that you can counteract those negative effects.
So if you're having a child, you can offset their carbon emissions and actually you can do so for
a few hundred dollars a year. It's a very small fraction of the cost to have a child.
But then, putting aside climate change, how does having more people in the world impact extinction risk? It's interesting; there are lots of considerations on either side. So you're totally right that it spreads the risk: we've got more people in more diverse environments, and that makes us safer. And it's actually a little unclear to me whether, in the world today, this year, extinction risk is higher or lower than it was a thousand years ago.
Now, a thousand years ago, we couldn't have blown ourselves up with nuclear weapons. But extinction via nuclear war, at least with current arsenal sizes (not with kind of future, potentially much larger, arsenal sizes), is pretty hard; extinction that way is really unlikely, I think. And that's partly because there are just so many people in the world today, and we have technology that can protect ourselves. Whereas a thousand years ago, well, there was a risk of asteroid strikes, and it's not clear that the world would have been able to come back from that.
However, for most of the extinction risks we face, the difference between 8 billion people and 10 billion people is going to be pretty small. We already inhabit basically all the habitable areas of Earth, and that's the much bigger consideration compared to sheer population size. The biggest consideration, I think, is this issue of stagnation, where it's relatively plausible to me that technological progress will slow down
over the coming century and centuries to come.
Basically, if AI doesn't speed it up, then I think there's a good chance it slows down.
And that's because we're just not able to keep adding more and more researchers to work on R&D.
So further technological progress is just harder and harder and harder, the more we do of
it.
But in the past, we've solved that by just adding more and more people doing R&D and
trying to do technological innovation.
That's both by just having a larger population, so we have 8 billion people alive today, whereas I think 200 years ago, we had a billion people.
But also increasing the proportion of people devoted to
R&D and we know that population is going to peak,
maybe by about 2050 and then afterwards decline,
maybe as late as 2100, we don't really know.
And there's only so far you can go by increasing the proportion of your population devoted to R&D.
And that does suggest that if we stagnate, we could stagnate at a very risky period, when, let's say, we have very advanced bioweapons but not the technology to defend against them. I think that would be a bad thing from the
perspective of civilization. It would increase the risk of us going extinct. And in that respect,
having, you know, there being more people would be helpful, it would give us a longer lead time,
you know, help further technological progress.
Okay, I've spoken for quite a long time, but that being said,
I don't think this is like a huge issue either way.
Can you dig in a little bit more to the risk
of technological stagnation?
Why is it that there's kind of an embedded growth obligation
within technological progress?
Yeah, so that was pretty quick. So many people who think about the long term
who focus on the future are often pretty bullish on economic growth. And the argument for this
is like, oh, well, it compounds over time. You know, even just increasing the growth rate by 1% over 70 years means you've made people in 70 years twice as rich as they otherwise would be. There's a huge difference, and it compounds over time. That's actually not the reason why I am
concerned about technological stagnation, because as I suggested earlier, I just
don't think you can have economic growth
that compounds for very long periods of time.
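As an aside, here's a quick check of that "1% more growth over 70 years makes people twice as rich" figure mentioned a moment ago; the arithmetic is mine, not from the conversation.

```python
# One extra percentage point of annual growth, compounded over 70 years,
# works out to roughly a doubling of income.
extra_growth = 0.01
years = 70
print(round((1 + extra_growth) ** years, 2))  # ~2.01, i.e. about twice as rich
```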
Then you get to the situation where you've got 10 to the power of 89 civilizations' worth of output to fit into the atoms available. It just doesn't seem plausible.
So instead, at some point, we're going to just plateau.
And that means that if you speed up economic growth now,
well, then you get to the plateau a little bit earlier.
And it's kind of good in the intervening years,
but not good over the long term.
Or it makes no difference over the long term.
So one way of putting this is that I think that in general,
tech progress is kind of inevitable.
It probably will keep happening.
Maybe just a faster or slower rate.
However, stagnation would be very different.
And that's where it's not just that growth slows,
but we just stop growing altogether.
Or even the economy starts to shrink.
Where we're just not inventing new things that are improving
people's quality of life. And that I think could be quite bad if we're at this period of high
extinction risk. So as an intuition pump, suppose that technology had completely stagnated in
the 1920s. So we reach the 1920s and then after that there's no more innovation ever. Would that have been good? Would that have been sustainable? And the answer is no, I think. If we'd stayed at 1920s levels of technology, the only way we could have supported civilization is by burning fossil fuels, and burning them indefinitely until we burned all of them. That would have obviously caused extreme levels of climate change and absolute catastrophe.
And also, then we would start to regress, because we just could no longer get that output from fossil fuels, and we would no longer be able to power civilization.
It was only further technological development that gave us clean sources of energy,
like nuclear power and solar panels.
And I think we could enter similar unsustainable states
in the near future, where again,
bio weapons are the main one, where imagine,
okay, now we're at 2050 levels of technology.
We have advanced bio weapons, the sorts of things that in principle could kill everyone on the planet, but not the technology to defend against them.
And now imagine we're at that level of technology for a thousand years, okay, 10,000 years, the risk will add up and over the long run we're almost certainly doomed.
So that consideration suggests that we need to, at least in a measured way, kind of navigate our way through the time of perils, the time of heightened existential risk, so that we can get out the other side
and be sufficiently technologically advanced that we aren't facing risks of catastrophe just
every year that add up over time and instead have a position of what my colleague Toby Ord calls
existential security, where actually we've gotten risk to a very low level indeed.
Yeah, you want to get to sort of x-risk mastery in one form or another. One of the things that I always thought about when I considered long-termism, especially after Toby's book, was, well, why aren't the smart people in x-risk campaigning for unbelievably slow technological development? Let's say that the urn-and-ball analogy works
and that there's dangers with every new technology
that you develop.
Why not take 10,000 years to add in another line of computer
code to the AI that we're doing?
Why not?
If what we've said is true and that there's
this basically limitless endless
duration for the potential of humanity and yeah, we need to get off earth within two billion years
or whatever, or the oceans are going to boil, but we've got time. Why not mandate or somehow enforce
an insane slowing of technology, but it sounds like one of the reasons that you can't do that
is because you need to be able to continue the conveyor belt of technological progress in order to
protect ourselves from some of the negative externalities
of previous technologies that we've already locked in
their existence of.
Is that right?
Yeah, I mean, you make it sound like there's a one-way conveyor belt.
Is that not kind of how it is? Like, well, we've already started.
I mean, we now have close to 10,000 nuclear warheads ready to launch. We're running that risk every single year.
And so hopefully, there's a state in the future
that does not have such high risks.
And then we can just stay in that state
of technological advancement.
I should say that, yeah, I'm pro-growth; not everyone who endorses long-termism is, by any means. Other people actually would want technological progress to go more slowly and more sustainably. One thing we all agree on is we want certain technologies to be slowed and others to move faster. What would be some examples? Well, again,
We haven't talked much about AI,
but things are often easiest to talk about in the biorisk case, where this far-UVC lighting, if it works, safely sterilizes a room; that's a defensive technology. Let's have that as fast as possible. Technology that allows you to redesign viruses and to make them more deadly? Let's just delay that. How about we do that in the future and not right now.
And that's an idea that Nick Bostrom calls differential technological progress. So, basically, I think almost everyone would endorse that paradigm. But then if we're
talking about tech or economic growth as a whole, should we go faster, should we go slower?
I lean on the faster side, other people would lean on the slower side. Some people think that with AI in particular, we should really just be trying to slow that down enormously if we could, perhaps even just say, look, there are certain sorts of AI technology that we
shouldn't allow, like human cloning, like in the same way that globally we don't allow
human cloning.
But the main thing is just, when we're taking action, we need to consider the tractability of what we're trying to do.
And I think it's extremely hard to slow down tech progress or economic growth as a whole, for the world as a whole. So let's say, you know, I become an activist.
I dedicate my life to doing this and I convince the UK to stop growing. Doesn't make a difference
in the long term, because all the other countries in the world are going to keep growing. Okay, let's say I'm a superhero activist and I manage to convince, I've actually forgotten how many countries there are in the world, but I convince every country but one to just stop growing. Well, the last country keeps growing.
Before long, it's just become the entire world economy. Because if you've got compound
growth, even if you're a small country growing at 2% per year, when all of the other countries are stagnant, within a couple of hundred years, you will be the world economy. And the activism of all those other countries will have been absolutely for naught. And that's how I
feel about the kind of degrowth movement
in general, which comes from a very different perspective,
kind of environmentalism, is whether or not it's a good idea,
and I tend to think that the sentiment is not great,
but whether or not it's a good idea,
it's also just ultimately futile,
because it would need to be a global effort,
and given the, you know, the just next level of difficulty we're talking about in trying to do that, there are just better things we could be doing, such as accelerating
the good tech, delaying the bad tech.
It's a combination of a lack of a God's eye perspective and ability to deploy stuff with some sort of
Malthusian trap and a tragedy of the commons for the future. It's like all of that kind of mixed up together to create this sort of
terrible terrible potential.
Yeah, exactly. And in general, you know, one thing I always want to clarify with long-termism and work on existential risk, and the stuff we do with effective altruism in general, and also what I'm talking about in What We Owe the Future,
is that I'm proposing action on the margin.
So it's like take the world as it is,
have a perspective on the world as a whole,
and how are the resources being allocated?
Should we check, like, in what way are they misallocated?
Should we change them a bit? So when I'm advocating for long-termism, I'm not saying all the
resources should go to positively impacting the long term. What I am saying is that at the
moment, 0.01% of the world's resources are focused on representing and trying to defend the interests
of future generations.
And maybe it should be higher than that, maybe 0.1%, that would be nice, maybe even as
high as 1%.
And so, similarly, if we're thinking, oh, how fast should the world go? The action-relevant question is: what should I do? Should I try and speed it up or slow it down on the margin? That's the action-relevant question, not: oh, if I could control the actions of everyone in the world, what should I do?
Because I don't, all we can ever do
is make this little difference on the margin.
What do you think about the volume of attention that's being paid to climate change?
My overall view is enormously positive about it. So, you know, when you look at different moral traditions over time, concern for the distant future is actually remarkably rare.
It's surprising. Like, for the book, I went, oh, I want to find Christian thinkers talking about the distant future and what we owe future generations, and Hindu thinkers and Buddhist thinkers and Confucians. And it's not like I did the deepest dive, but it's kind of surprisingly hard. There's actually more thought in kind of indigenous philosophy, such as the Iroquois. But certainly in secular post-Enlightenment thought, it's surprisingly rare. And then, over the course of the 20th century, and
certainly the last few decades, we've had this enormous upsurge from the environmentalist
movement that really is standing up for future generations. And they've seen kind of one part of this, which is a focus on, you know, stewarding resources, especially irreversible losses like species loss, and the particular problem of climate change. And I really feel like, oh wow, this is just this amazing and, again, kind of contingent thing, that there has been this groundswell of concern for how our actions could be impacting not just the present day, but also the world we leave for our kids and our grandkids. And then the thing I just want to say on top of that is: okay, yeah, this is this great moral insight, and that moral insight makes you concerned about climate change; here are a bunch of other things you should be concerned about too.
That really is the main takeaway from reading Toby's book.
And he's got that table of the chance of something happening within the next century: a supernova explosion is one in a hundred billion or something like that, and a supervolcano is one in ten billion or something. And you start to move your way down and you get to climate change, which I think is either one in ten thousand or one in one thousand over the next hundred years. And then you get to engineered pandemics and unknown unknowns and AI safety, and it's one in ten, and I think the overall risk is maybe one in six. So when I read that, it does make me... I understand your
point, right, that anything that encourages people to think about the future generally of
the planet and of humanity is smart. But I'm worried that there is a little bit of a value lock-in that's
going on here where anything that detracts away from a focus on climate change is seen
as almost like heresy and that all of our future existential risk conversations have been
completely captured by a conversation about climate change.
Dude, five or seven years ago, the only people talking about AI safety
were getting laughed out of a room.
That was us.
I totally think... I mean, I was there. I was there for the early drafts of Nick Bostrom's Superintelligence, so I was part of the seminars. And I was this young guy trying to figure out what I should be into, and I was curious about it, and I was helping, and we were having conversations. And it was also very much the sort of thing that got laughed out of the room. And now, yeah, it's much more mainstream.
That's my concern, and that was the takeaway from Toby's book, and also one of the reasons why... I know that you guys do testing on messaging and stuff, and I really, really think that that's one of the most important areas. Look at me, bro-sciencing my way into stuff that you guys deconstruct in a very fine-tuned way on a daily basis.
For me, I'm so compelled by the ideas that came from Nick's work and Toby's work
and your stuff and Anders and, you know, like, pick your favorite existential risk
philosopher.
Yeah.
And it blows my mind that that hasn't made even a fragment of the impact
of a Swedish girl shouting at adults on the stage.
And I find myself being less drawn to, and almost triggered sometimes by, the environmental movement because of how much attention is paid to it and how little attention is paid to other x-risks that I think should take priority.
For sure. So, a general thought: if someone is saying my cause is X and it's the only thing that matters and everything should be judged in terms of how it impacts my cause, that's just systematically not a good way of thinking, and it happens. Social media does not help with this. One thing that writing What We Owe the Future has made me appreciate is just that moral change takes time.
So if we look at climate change... I mean, it's interesting. When I hear about climate change, I feel kind of inspired by it, when I'm like, okay, cool, give us a few decades and maybe we're going to be there.
Where concern about the climate, you know, it actually goes back a really long way.
So the first quantitative estimate of the impact
of emitting CO2,
it goes back to a climatologist called Svante Arrhenius
in 1896.
And his estimate was actually pretty good.
It was a little bit on the high side.
And then the terms "greenhouse gas" and "greenhouse effect" came in the early 20th century. Frank Capra, the director of It's a Wonderful Life,
had a documentary about climate change in the 1950s.
So the level of concern was there.
And then the scientific consensus came in the 60s and 70s. But it's not really until the 90s that the activism around climate change starts to really happen and build up.
And so when I think about AI and biorisk and so on, I'm kind of like, it's like where we were in the 1950s with climate change. And so I'm just like, yeah, there are certain scientific and technological facts that are just enormously important. People don't know them yet. We've got to build this movement. This is going to take time. It's frustrating, and I completely understand being frustrated. It's like, this is so big and people aren't thinking about it. But I guess I've responded to that by being like, okay, let's go.
So they've whet the appetite. You're kind of saying that climate change and the activism around that is an appetizer, an aperitif for what's coming.
Well, dude, yeah, exactly.
I very much admire your positive,
optimistic outlook around it.
And that does fill me with a little bit more hope. People needed to understand the fragility of humanity's future, and one of the ways that got delivered was climate change, because it's very obvious, you know, smoke in the sky and fires and heat and stuff, right? It's experiential,
as opposed to AI risk, which is just code you can't see until it's too late. So yeah, maybe you're right, maybe that's a thing. Going back to what we've spoken about before.
So you've got extinction, but you've also got
unrecoverable global collapse.
What's the difference between those two things?
And how would a global collapse happen?
OK, so I said extinction seems pretty hard.
I mean, I still give it 0.5%.
That's a lot, when you're talking about... you know, that's 0.5% over the next hundred years. Earlier it was 20%.
Oh, yeah. I mean, no, the 20% was the risk of some major, hundreds-of-millions-dead kind of level pandemic. Not extinction-level, but utterly terrifying, I think.
Yeah, so I mentioned killing literally everyone. Okay, that's maybe low, but a very, very significant likelihood. But a catastrophe that just set us back to pre-industrial levels of technology, that killed 99% or 99.9% of the world's population, you might think that might be much more likely. And it could occur from pandemics, it could occur from an all-out nuclear war that leads to nuclear winter, and perhaps it could occur via other means.
Would we come back? Because if not, then an unrecoverable collapse of civilisation would plausibly be just as bad, or almost as bad, as outright extinction. We would go back to being a farming society again, or even a hunter-gatherer society, and limp along until an asteroid or something else wiped us out. It would just be a matter of time, essentially. And so, you know, there hadn't been that much work before What We Owe the Future on this question of how likely a civilisational collapse is, and if there was one, would we actually come back? So I really wanted to do a deep dive into this and really try to assess the question of, well, would we come back, and if not, why not? And I actually came out pretty optimistic over the course of doing the research for What We Owe the Future. I came out a lot more positive, thinking it's well over 90% likely that civilisation would bounce back if we moved back to pre-industrial levels of technology. And that's for a few reasons. So one is that if you look at local
catastrophes, they very rarely lead to collapse. I'm actually not sure I even know of any examples of collapse in the relevant sense. Take the fall of the Roman Empire: that was a collapse of a civilization, but firstly it was only local, and there's never been a global collapse of civilization. And secondly, it's not like people disappeared from the region. It's just that technology went backwards for a while, and other indicators of advanced civilization went backwards for a while, whereas we would be thinking about that happening on a global scale and going back to pre-industrial levels. And even if you take something like the Black Death in Europe, which killed somewhere between 25% and 60% of the population of Europe, there wasn't a civilisational collapse. It was an enormous tragedy, with such a loss of life, but European civilisation kept going, and in fact there was the industrial revolution just a few centuries later. And yeah, in the book I discuss many other ways in which local societies have
taken these enormous knocks and then kind of bounced back. So I give the example of Hiroshima as well
where, again, prior to the research, I'd had this image in my mind of Hiroshima even now as just this smoking ruin. Whereas that's not true at all. Within 13 years, the population was back to what it was before the atomic bomb was dropped on it. Now it's this flourishing city.
So that's kind of one reason.
A second is just how much technology could be imitated, and how much information in libraries people could use, in order to make technological advances again. The early innovators were doing this from scratch; there was nothing to copy. Whereas if you're like, oh, there's this thing, it seems to burn oil in order to make a motor go around, I wonder if we can copy it, it becomes much easier, especially when there's material in libraries too.
Then the final consideration is just that if you think, if you try and just go through
a list of what are the things that could stop us from getting to today's level of technology again, you kind of come up short.
I think the one that could be decisive is fossil fuels, where we've already kind of used
up easily accessible oil, and over the course of a few hundred years we would use up easily
accessible coal.
But at least for the time being, we have enough in the way of fossil fuels that even after a catastrophe that sent us back to the Stone Age, we would come back, and if we were industrialising again, we'd have enough to get to today's level of technology at least.
What's your idea about coal mines and what we should do with them? Yeah, so in the book I talk about
kind of clean tech in particular as this, like, just very robustly good thing we can do.
But the reasons for that aren't always the most intuitive.
So one, you know, there are many reasons I think for wanting to keep coal in the ground.
Climate change is one; the enormous air pollution and health costs from burning fossil fuels is another. But one is just that we might need it in the future. If there's a catastrophe that sends civilization back to agricultural or pre-agricultural levels of technology, and we need to industrialise again, well, we got to where we are by burning prodigious amounts of coal.
And we might need that again.
And yet we're burning through it.
And so, yeah, I think the best thing to do
is just invest in clean tech.
So that's not just solar panels, but also alternative fuels.
Super hot rock geothermal,
where you drill just really far into the ground
and harness the heat from closer to the mantle.
But one idea that I looked into was just,
can you just buy coal mines and shut them down?
That was the kind of no-brainer take. And could you do this at scale? Could you do this as a way of carbon offsetting, where a large group of people get together and all contribute to a fund that pays for the coal mine to be retired? There are people looking into this, and I commissioned some research to look into it. It seems to be hard, unfortunately, mainly for regulatory reasons, where, I mean, there are just very powerful fossil fuel lobbies, and they don't like you buying coal mines to shut them down. So there are these "use it or lose it" laws where, if you try and buy a coal mine with the purpose of shutting it down, the government just voids your contract, because they say if you buy a coal mine, you have to use the coal. And I think, you know, even if you can get around that, that drives up the price, and people perhaps just shift to other coal. So
unfortunately, I'm like a little more negative on that
particular strategy than I used to be. But for a while, I was just really taken with it. I just really loved this idea of, like, we own this coal mine now. We're going to turn it into this museum, a museum for obsolete technology. And you can go down like a theme park, in the little carts that you see in Indiana Jones, going along the tracks. I had the whole vision,
but maybe one day I'll still do it.
We've got maybe the most impactful thing already. We've got those seed banks, right? Is it in Iceland, I think, or in Svalbard? That's it, in Norway. And that banks every plant seed on the planet, and presumably there must be equivalent digital backups of every book, probably distributed across the world. And they'll be in underground bunkers or on the side of a mountain or something like that, so that if there was some sort of huge collapse, or any kind of existential risk that had a bit of kinetic energy to it, they would be kept safe. And what that does is it shortcuts the pain and the investment
that you need in order to be able to actually find out what to do. You can just go back
and read what you need to do and then you get to rediscover the technologies. And I've
never heard anyone else talk about the fact that one of the advantages we had was that all of the coal and the oil was relatively close to the surface. And by design you pick the low-hanging fruit first, but that doesn't future-proof you against collapse and recovery particularly well, because if you do collapse and you need to recover, presumably you're going to be able to get at the low-hanging fruit, but not the higher-hanging fruit, so easily, which means that you actually need to portion off a little bit for the future. I was thinking about this as well. Is there
a reason? Would there be a justification for having an almost like an air-gapped civilization
somewhere on Earth? I know that we're not a multi-planetary species yet, so we haven't
put somebody on Mars, and maybe that would be one of the obvious ways to do this. But to have a selection of people with a wide genetic pool and you've done the testing on them,
and they go to a particular place and they live there for their entire lives,
or perhaps you could do it like military service and people could cycle in and out and you would
maybe go for four years. But that's kind of like as close to airgapping a backup for civilization behind the normal
civilization that we've got going on in the same way as the seed bank or the same way as
the modern library of Alexandria or whatever we've got.
Is there a justification for sending some people into the side of a mountain and making them live there for ages?
Yeah, absolutely.
On your previous point, I just wanted to briefly mention the idea of harnessing the low-hanging fruit. There's this historical counterfactual: if we were to collapse, where would the industrial revolution happen? And I can bet you my last dollar it would not happen in Western Europe, because we already
used up all the surface coal.
Instead, it would be in the US or India or China or Australia,
where there's more.
But OK, civilizational backup.
This is not only, I think, a good idea.
It's something that people who buy into the idea of long-termism, I think, really might make happen. So again, with the foundations I advise, this is something we'd be pretty interested in doing. So, having a hermetically sealed refuge where a certain number of people stay for long periods of time. Perhaps it's six months at a time and people cycle out, perhaps it's staggered, so there are a couple of them and you do it asynchronously, or perhaps it's even longer. A refuge that is completely protected against the outside world. So again,
if there's a really worst-case pandemic, and, you know, we're not just talking about hundreds of millions of people dying here, but something that would literally kill everyone.
Well you have this population of let's say a thousand people, including leading scientists
who would be able to work on medical countermeasures like vaccines.
And they would then be able to stay there, and you'd equip it such that they could stay
there for years,
design countermeasures so that when they emerge,
if they need to, they're protected against the pathogens.
And then the second thing you would also need to stock it
such that it could rebuild society after that point.
This seems just like so wacky as an idea to many people,
but I think it makes sense. On the scale of the resources required, for a few hundred or a thousand people to be protected in case of the very worst outcomes, I think from that perspective the answer seems like, yeah, obviously yes, because even if it was a small population, a few hundred people, that would be enough to rebuild civilization again,
and I think it's worth preserving.
Rob Reed laughed at me when I gave him that idea last year.
Did he? Well, I like it. I think it's a cool idea.
But I think it's interesting, the response that, you know, and I have it too, when you get confronted with certain ideas, they sound like sci-fi or something and you laugh at them. But that's been true for many ideas in the past, like the idea that diseases were carried by these little monsters that live on your hands and crawl on your skin. It's like, what? This is nonsense. Yeah, it's true. The idea that we could fly, build big metal planes
and take them into the skies. That was pretty wacky too. I think we have this very natural
human impulse to just laugh at these silly ideas. But I think when the stakes are this
great, we should actually just be taking a little bit of time to reflect and think,
okay, does this actually make good sense? And I think in this case, it does.
Given everything that we've gone through and the potential duration that we're moving towards
for our genetic inheritance and civilizational future, the risks that we've got, the way that we
can mitigate them, how we can push back against them: what's the actual goal? What should our goal for the future
of humanity be? What are we even optimizing for? What are we building for? So I think the key thing I
want to say is that we don't know yet and that's okay. So imagine you're a teenager and you're
wondering like, oh, what's the goal of my life? And you just want to be as happy as possible, let's say.
Now, you might just not know yet.
You know, what do you want to do?
Well, okay, as a teenager, you want to not die,
because if you die, then you're gonna,
you know, you're not gonna have a full-of-the-shing life
after that point.
So, similarly, we want to make sure we don't go extinct.
But then you just want to have lots of possible futures open to you and be able to have time
to figure stuff out.
To figure out what's actually going to give you the most flourishing life.
And that's okay.
So you can have a plan that is about getting yourself to a position, such that you can figure
out what to do for the rest of your life.
And I think the same is true for humanity as a whole.
We should try to get ourselves to a state where we are not facing existential catastrophe, where we do have many possible futures open to us, where we're able to reflect and deliberate and then make further moral progress, so that we can then collectively figure out, from a much more enlightened perspective, what we should do with
the potentially vast future that's ahead of us.
And I call this idea of kind of exploring and figuring out different things a morally exploratory society in the book, with the limit case of that being what I call the long reflection, which is where, okay, we've got to a state where we've solved the most pressing needs. Do we want to immediately rush to settling the stars with whatever our favoured moral views are? I'm like, no, we've got time. As we talked about at the very start of this podcast, we've got an awful lot of time. And that means that, before we engage in any activities that lock in a particular world view, such as space settlement or the formation of a world government, we can really take the time to ensure that we've gotten to the end of moral progress, that we've really figured out all that we can.
Are you of the mind that we should move more slowly with moral progress, and our considerations of what we should do once we've got ourselves to technological maturity, than with technological progress en route to getting there?
So I think with technological progress, how fast we should go is kind of tough. You know, there are these reasons, I think, why at the moment going faster technologically is a little better, but it's not that big a deal. I do think that if you can make moral progress faster, you just want to go as fast as you can, because then everything in the future goes better. It's just that I think real moral progress might take time. This might be true for technological progress too. You know, I've said that we can't keep going as fast as we're currently going, but assuming we slow, perhaps the whole project takes millions of years to complete. I don't know.
But the same might be true for morality as well. With technology, there's always an incentive to build better technology: it gives you more power, it means you can do more things, whatever your values are. So there are always strong incentives to do that. With moral progress, that's not true. If I have all the power and I don't care about having a better moral point of view, there's no law of nature, and no real competitive dynamic, that forces me to get to a better moral perspective. And so what I'm really doing in sketching out this idea of the long reflection is to say: we need to keep working on getting better morals, getting better values. We don't want to just jump into the first thing that we think is good, early 21st-century, Western, liberal-ish kind of morality. It's like, no, we can take our time before engaging in any of these big projects.
There's less of a degree of urgency with that. Although you can go as quickly as you want, you can luxuriate and take your time, perhaps in a way that you couldn't do with technological change.
Yeah, I mean, I think for the world as a whole we could perhaps luxuriate with technological change too, at least if we get to a point of safety. So with bioweapons, you know, we're perhaps in this unsustainable state at the moment, but perhaps by the early 22nd century maybe there aren't any bioweapons or nuclear weapons any more, or we've got great protections against them. And perhaps we've not yet finished technological progress, but the particular level we're at is actually really very safe and stable. Maybe we just want to hang out there for a little while,
while we figure some moral and political stuff out.
So this is kind of like Bostrom's thing, I can't remember the term, about the asymmetry in terms of technological development. But you can also do this in terms of moral versus technological development: you can kind of keep one idling while you continue to move the other forward, and then let your technology catch up to that. I think it's Eric Weinstein that said we're gods, but shitty gods. It's Bostrom that was talking about gods but for the wisdom.
Yeah, exactly.
We want to accelerate wisdom as much as we can.
It just seems good in general.
And then, like I say, I think the particular circumstances at the moment do mean that ensuring we keep tech progress going is good as well. But if you could guarantee me that, look, even at a slower rate, say 0.5% growth per year, or 0.1% growth per year, we're going to get moral progress at the same time, and assuming tech progress is not going to stagnate but just continue at that lower rate, then I feel pretty good about that.
How do you speed up moral progress?
Have you thought about that?
I mean, these are very hard questions. One is by having a kind of diversity of moral views, and even of societies as well. So there was this kind of great ideal behind the United States, that never really happened, of all the different states being these vibrant, different cultures, and they'd be like laboratories of democracy. And instead, the United States just became a relatively uniform country.
But you could aim for a global society that was more like that.
I think we can also just invest more. I don't talk about this that much because it sounds so incredibly self-serving as a moral philosopher: oh, you want more philosophers to be invested in? Yeah, I think we should. Like, how much of the world's or society's resources get spent on, you know, not just moral philosophy, but the other humanities, on building capacity for empathy and so on? Really, the amount we spend as a society is vanishingly small, like 0.01%, even on all of the humanities as a whole. And it's like in your own life: oh, how much time should I spend as a teenager figuring out how I spend the entire rest of my life? Oh yeah, maybe I'll spend like a day on it and that'll do.
I don't know, that just doesn't make sense.
You're talking about your entire life.
Similarly with humanity as a whole: if it were the case that reflection and reasoning and progress in the moral domain had even a hundredth of the status and investment that technology does (even though it's the moral progress that's contingent, not the technological progress, so actually it's the moral progress that we should be trying to safeguard even more), well, then we would have to increase the number of moral philosophers, like a thousandfold.
And honestly, I think it would be a very good thing.
But yeah, perhaps there's some bias there.
It would make my job a lot easier on the podcast as well.
That would be a good extra.
Exactly, that would happen. And also, it would improve the quality as well: you wouldn't be getting idiots like me. You'd be getting the very best people, who would otherwise be going into maths and physics, working on improving our moral understanding.
Will MacAskill, ladies and gentlemen, if people have been compelled by the stuff that you've
spoken about today and they want to find out more or work out how they could contribute
to making our future a little bit less terrible, where should they go?
So obviously, I'd love you to read the book that I just wrote, What We Owe the Future, which I think will be out when this podcast is launched.
But then if you want to take action,
two big ways to take action.
One is through donations and I co-founded
an organization called Giving What We Can.
They encourage people to give at least 10% of
their income to the organizations that are most effective.
And you can contribute to organizations that are trying to prevent the next pandemic,
or that are trying to safely guide the development of artificial intelligence, or prevent a third
world war.
And thereby have a really significant impact on the trajectory of human civilization. The second big thing you can do is try and think about how you can use your career to do good in the world. Maybe you're already established, perhaps you're a podcast host, and you should think about how you can get the most important messages out there. Or you might be early on in your career and trying to figure out, yeah, what should the focus of my life be?
I co-founded another organization, 80,000 Hours, and it has enormous amounts of advice
all for free online to try to help people make the best career decisions so that
they can have the biggest impact. In particular, biggest impact on issues that
impact the long-term future and help
build a better world for future generations.
They have online advice, they have a podcast, 80,000 hours podcast, and they also offer
one-on-one advice as well. So yeah, check out both Giving What We Can and 80,000 Hours, and my book, What We Owe the Future, too. And take all of that into account.
You can make a truly enormous difference to the world
for thousands, millions or even billions of years to come.
Well, I appreciate you, man.
I love the work that you guys do.
If this is the history of the future
or the beginning of the history of the future or whatever,
I'm glad that we've got people like you
that are guiding us in whatever way you can. Thank you.