Modern Wisdom - #306 - Thomas Moynihan - A History Of Existential Risk
Episode Date: April 10, 2021
Thomas Moynihan is a historian and an author. Humans may have only had the ability to destroy ourselves for the last hundred years or so, but thinkers have been hypothesising about the potential end of existence for thousands of years. Today Thomas explains the history of how humanity came to realise its potential for extinction. Sponsors: Get 20% discount on the highest quality CBD Products from Pure Sport at https://puresportcbd.com/modernwisdom (use code: MW20) Get 10% discount on your first month from BetterHelp at https://betterhelp.com/modernwisdom (discount automatically applied) Extra Stuff: Follow Thomas on Twitter - https://twitter.com/nemocentric Buy X-Risk - https://amzn.to/2PzTYKx Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/ To support me on Patreon (thank you): https://www.patreon.com/modernwisdom - Get in touch. Join the discussion with me and other like minded listeners in the episode comments on the MW YouTube Channel or message me... Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/ModernWisdomPodcast Email: https://www.chriswillx.com/contact Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Hello friends, welcome back. My guest today is Thomas Moynihan. He's a historian and an author and we're talking about the history of existential risk.
Humans may only have had the ability to destroy ourselves for the last hundred years or so, but thinkers have been hypothesizing about the potential end of existence for thousands of years.
Today Thomas explains the history of how humanity came to realize its potential for extinction.
As far as I'm concerned, existential risk is the topic that no one is talking about,
which the entire planet should be focused on, and getting a backward-facing view that
actually gives us the foundations of how we've arrived at our understanding now is super
interesting.
I thought, well, what's the point of looking back given that the existential risks that we're encountering will be in the future, but actually we can learn an awful
lot. We learn about the principle of plenitude and about conceptual inertia, tons of really
interesting insights from intellectual history, which is basically the history of ideas.
And that's what Thomas looks at. He's awesome. I really hope that you enjoy this. And if you
are interested in getting into more of this topic, we give you some great book
recommendations throughout the episode, so feel free to delve into them.
But now it's time to learn about the history of our own extinction with Thomas Moynihan. Thomas Moynihan, welcome to the show.
Thanks for having me.
If you're worried about existential risks and
annihilating our future, why spend any time studying the past?
That's a good question. I hope as we talk through this the true significance of
what I'm about to say will be elaborated further. But I think that it's so easy to focus on the risks coming
towards us, coming down track. And it's slightly harder to take stock and look backwards and
see just how far we've come. One of the things I mean by that is that the very ability for us to even be able to see
those risks ahead, the risks on the horizon, that's a massive achievement for humanity,
for our knowledge of what is best to do within the world. That's a massive achievement. And again, I hope that as we speak through this, the truth of this might hopefully unfurl. But some of our
biggest achievements are almost invisible to us. Some of the most profound breakthroughs of human knowledge are often invisible to us.
So I often point towards the fact of, take slavery, for example: for the majority of human history, people presumed that it was just part of the natural order of things.
It wasn't questioned. All of us these days kind of take it for granted that it's inherently wrong.
Another example I like to use is perspective, right? So, you know, think back to being a kid in school.
You'd learn to draw your first cube or your first, yeah, prism or, you know, triangle,
pyramid, sorry. It's so easy. It comes to you so naturally. Rewind, you know, six
centuries, seven centuries. It wouldn't have come naturally at all. I was drawing cubes, I don't know what age, but pretty young.
And that's not because I'm a genius or a prodigy or some da Vinci-tier, you know, mega genius.
It's because of cultural osmosis, because the ideas that we just take for granted, we
inherit, but someone had to come up with them, and well, often lots of people have to come up with them, and it takes, you know, centuries, decades of effort, of hard work, of error correction, of finding out the ways in which we are so severely wrong about the world. And yeah, so to tie up my point: thinking, being able to even notice these risks, the risks facing humanity, or just the fact of how bad human extinction would be.
Those are really huge achievements, and they're quite modern ones as well.
So, yeah, I would say I would put it like this.
It's a cure for despondency, because like I said, it's easy to see the risks ahead, harder
to see how far we've come.
So it's easy to be despondent, it's easy to despair, but it's deceptively easy because
we have that kind of bias where it's easier to look straight ahead rather than look to
the past.
Are you trying to say that a book about the annihilation of humanity is somehow hopeful?
Yes. So I often get this when people read my book, they're surprised that it has this,
well, it attempts to have this hopeful message. But yeah, nonetheless, it does.
Yeah, nonetheless, it does. And funnily enough, I came into all this way less hopeful.
And I'm not sure if it would be the perfect, the right word,
but more fatalistic.
And it was through actually tinkering
through, tracing through, the contours
of discovery and intellectual progress.
And just how far we've come in terms of, you know, even if we know that we're not certain, or we don't have a hundred percent certainty of anything right now, just the ability to realize that we're wrong, right, to realize that we're wrong and therefore realize that we can know better, we can correct ourselves. I find that fascinating. So yeah, I think it's a hopeful book. I hope it's a hopeful book.
I think so.
You mentioned there, I think quite an important point.
You say the ability to grasp the prospect of our own extinction is a significant intellectual
achievement.
It separates us from other animals.
You also are trying to say that that's something that we should be thankful for.
This species-wide denial of death that Ernest Becker, like turned up to a million, would be proud of.
Yeah, I mean, so there's Jonathan Schell, he was a guy that wrote a book, The Fate of the Earth, in the 1980s, and it was one of the first books, it's relevant to this discussion,
because it was one of the first books to really like crisply state how bad human extinction would be. So this was in the context of the Cold War,
you know, thermonuclear proliferation. And he amongst other people, we can talk about this later
if we want, but amongst other people pointed out this kind of asymmetry and how bad extinction is
compared to lots of other, you know, kinds of other disasters. It's the foreclosure of the whole future. One of the ways in which Schell expressed this was that there are
these two deaths. The first death is the death that we're all familiar with. Our own death,
our individual death. That's the Ernest Becker denial of death. You know, a lot of culture, in a sense, kind of seems to be about this, humans having this unique awareness of our own mortality. I'm not too familiar with the Becker thesis, but, you know, say that was clearly one of the kind of foundations of culture, right: humans, since they started using language and became behaviorally modern, have probably been kind of aware of mortality in that sense.
But yeah, then Schell makes this point,
there's the second death,
which is the death of the whole species
and the loss of its entire future.
And that's the more modern achievement.
So yeah, we've been denying death since day one, I guess,
but being able to think about this second death,
this ultimate fate, the
loss of the entire future, yeah, that's a lot more recent. And so I'm hoping that we can
level up our denial of death to that kind of civilisation scale.
That's so funny. Can you give, for the uninitiated, for the people that haven't taken the
existential risk red pill, what is the most compelling hammer
blow that you can give them about why there is an importance to existential risk? I'm already
one of the initiates, right? I pray at the altar, I wear the weird mask with the long Cronos on it, like, you don't need to convince me, but what to you is the hit-them-in-the-existential-soul example that you can give people?
Yeah, yeah. So, for me, the kind of penny-drop moment, the best place I've seen it argued, is a philosopher, Derek Parfit, who was actually one of Nick Bostrom's teachers, his supervisor. And this is around the same time as that Jonathan Schell book that I just mentioned, so in the 80s, he wrote this book called Reasons and Persons, and it's this kind of voluminous, meticulous tome of, you know, kind of very detailed ethical philosophy. And it's a masterwork, but it's, you know, kind of deeply philosophical, deeply complex. Then in the last couple of pages, he makes this argument about the asymmetry that I just pointed to. And I think this is the best place this argument has been made, so I'll try and rehearse it.
So he says, think of the three scenarios.
The first one is peace.
The second one is a nuclear exchange
wherein 95 to 99% of humans are killed.
The third one is some kind of exchange where 100% of people are killed.
And then he says, where is the biggest difference?
Is it between one and two, so between peace and the 95 to 99%, or is it between the 95 to 99% and the 100%?
Now, intuitively, and our moral intuitions are often wrong, you might think instantly, obviously, it's the difference between the first and the second, it's the difference between peace and the 95% to 99% fatality.
Parfit makes the argument that that's absolutely not the case.
The larger distinction, the larger difference in severity
is between two and three.
And that's because, again, it's the loss of the whole future.
So he makes these points that, you know, the Earth is likely to remain habitable for another
billion years or so. Within that time, there will be vastly more generations of humans than there already have been. So, you know, civilization itself has only existed for something like 10,000 years. So, if we don't screw things up,
there's a lot of future ahead of us, a whole lot of future. And this is just constraining
it to the earth, right? There are other places we can go and other places where we can have even more future.
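[A rough, back-of-envelope sketch of the scale being gestured at here. Every figure below is an illustrative assumption for the arithmetic, not a number from Parfit, Bostrom, or the episode.]

    # Illustrative arithmetic only; all figures are assumptions for scale.
    habitable_years_remaining = 1_000_000_000   # "another billion years or so"
    years_of_civilisation_so_far = 10_000       # "something like 10,000 years"
    years_per_generation = 30                   # assumed generation length
    people_per_generation = 10_000_000_000      # assumed ~10 billion people alive at a time

    future_generations = habitable_years_remaining // years_per_generation
    future_people = future_generations * people_per_generation

    # Civilisation so far as a fraction of the habitable time left on Earth alone.
    print(f"{years_of_civilisation_so_far / habitable_years_remaining:.4%} of the way through")
    print(f"~{future_generations:,} future generations, ~{future_people:.1e} future lives, Earth only")

Even with these deliberately Earth-bound, made-up round numbers, the possible future dwarfs the present, which is the asymmetry being described.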
So basically, we're kind of in the daybreak of civilization of the human story.
And so since then, you know, other people have extended Parfit's argument.
And it's so funny to me that Parfit's is kind of like a throwaway in the last couple of pages of this huge book.
But, you know, namely it's Nick Bostrom, for example, who adds a kind of transhumanist lens, where it's not just kind of the duration of the future, it's also how much more quality there could
be in it because, you know, should humans use their technology in the right way, there's
this whole kind of possibility space of other experiences above the human condition.
So, I think a way that people often put it is that mice probably aren't very good at
experiencing symphonies, but we are.
So there's presumably headroom above us, right?
There's kind of orders of magnitude potentially. So you add this extra transhumanist kind of, you know, spin on it. And yeah, I think all the pieces are together there. You know, Toby Ord has now kind of given an even simpler and possibly even more effective definition of the whole thing. But yeah, for me, there are loads of arguments that really, really drive that home, particularly the Parfit one.
It's easy to miss the fact that it's not just
7 billion lives at stake.
It's not just the current population of the world.
It's our whole future and our whole potential as well.
Didn't Bostrom come up with a number of 10 with 100 zeros after it?
Isn't that in your book?
If we do an okay amount of space colonization, this is a potential number of lives that we
could have ahead of us.
Yeah, yeah. So there are huge numbers out there. So, Bostrom has this paper, it's called Astronomical Waste, where he makes this argument that there's like a kind of opportunity cost for, you know, delaying space colonization, given that there is actually finite resource within the kind of accessible universe.
It's a depreciating finite amount as well, right? For every second that we delay, that's another bit of expansion that we can never access anymore.
Yeah, yeah, so he makes this argument that, you know, we could expand out and, you know,
create so many value structures. And yeah, that's his word for kind of people or, you know,
functional equivalents of people living happy quality-filled lives. And there, yeah, there are
these computations of the kind of upper bound of the amount of, you know, souls that we could spread throughout the universe.
One of Bostrom's colleagues, Milan Ćirković, came up with a figure before then. It's a kind of crazy number with a name I can't even pronounce. It's humongous, but yeah, you know, go to those papers for the precise figures.
But yeah, I mean, so there's a depreciating amount of this potential as well.
But more recently, Ord calculates that the kind of opportunity cost of delaying is potentially not too awful, and so we should be patient and shouldn't rush ahead if being too hasty could foreclose our potential.
I'm going to jump ahead to something that I've been thinking about for ages.
And you sadly, Thomas, have the job of being the recipient of all of my pent-up x-risk ideas, because I don't get to talk about it to everyone. For some reason, not everyone wants to talk about the permanent and ultimate extinction of the human race. So it's you and now thousands of people
that are listening. As far as I can see, it seems to be three main factors at play, right,
when we're talking about existential risk and how we should potentially move forward. The first one being the danger of technological progress: as Bostrom calls it, putting the hand into the urn and pulling out a technology. This technology could be good and improve human life, but every so often you pull out one which is either grey or black. And if you pull out a misaligned superintelligence, then you're dead, game over. And if you pull out a nanotechnology that turns us all into grey goo, then you're dead. And if you pull out an engineered pandemic, then you're dead.
So that's the first bit: there is a danger that's associated with technological progress.
Secondly, there is a requirement of technological protection because there is a non-zero
amount of natural risk. There are volcanoes and there are asteroids and there is the inevitable
heat death of the sun. And, you know, we need to continue technological progress or else we ensure that we're going to have a limited future, because we know that there is a non-zero amount of existential risk that occurs naturally. So there's a balance between those two, and then the final part is the opportunity cost of delaying space colonization. Is that an okay framework to kind of view what we should be doing moving forward, that there is an opportunity cost, there is a requirement for us to not move too quickly so as to pull out a black ball, and that by moving a little bit more slowly we reduce the risk of pulling out a black ball, but also we can't not move forward at all because there is the natural risk?
Yeah, I think that's a good way of taxonomizing the major parts of the argument.
Yeah, precisely as you say, it's this mature acknowledgement of the risks of technology, conjoined with a mature acknowledgement of just how good technology could make the future.
But then also, yes, the other acknowledgement that without technology, the probability of the kind of background natural risks will, you know, accumulate over time. And, you know, it's a death sentence. It's just a delayed one, right?
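[A minimal sketch of the "delayed death sentence" point: if there is some small, constant chance of a civilisation-ending natural catastrophe per century, the probability of getting through many centuries unscathed shrinks toward zero. The 0.1%-per-century figure below is an arbitrary illustrative assumption, not an estimate from the episode or from Ord's or Bostrom's work.]

    # Illustrative only: per-century risk is an assumed placeholder value.
    per_century_risk = 0.001  # assumed 0.1% chance of natural catastrophe per century

    for centuries in (10, 100, 1_000, 10_000):
        survival = (1 - per_century_risk) ** centuries
        print(f"{centuries:>6} centuries: {survival:.1%} chance of no catastrophe yet")

Without protective technology the risk never resets, so over a long enough horizon it compounds toward near-certainty, which is the sense in which natural risk alone is a death sentence, just a delayed one.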
You know, so yeah, yeah, I mean, and that's what's novel about it, historically speaking. And so, you know, we can go into the long-run history of this stuff, because, you know, that's what I love.
But, you know, people have been talking about human extinction as a, you know, as a natural possibility for, I would say, you know, kind of three centuries, two centuries, somewhere in that ballpark. You know, then we're talking scientists during the Enlightenment, they're kind of playing around with it as this interesting philosophical, natural-philosophical possibility, but it remains distant, very far off.
And then, you know, it's really around World War II
and the production of nuclear weapons
and then, you know, these humongous nuclear weapons like the Tsar Bomba, you know, these kind of really significantly powerful ones in the 1950s, that the idea of human extinction, which would previously have seemed just impossible, becomes slightly more probable, plausible, and therefore a policy issue. So people have been talking about this worst-case scenario for decades.
But often it was constrained to one technology
that being nuclear weapons.
And often there were kind of these quite distinct poles of thinking, where it's often this kind of idea that we need to just rush towards technology, have it all, that's what will save us, or, oh no, technology is bad, we should, you know, kind of be careful with what we do with it. This is what's innovative around, you know, Bostrom's work and people in that kind of area: it's this, you know, mature acknowledgement that it's, you know, the poison and the cure, and therefore requires lots of care and lots of careful thinking. And, you know, what's really great is it's kind of given a shot
in the arm to philosophy because it's, you know, there are these philosophical questions that we need to figure out before
we have the technologies to wield the power in the world.
So yeah, that's how I see it, and that's why I say it's important.
As far as I can see, man, it is the most important conversation to have.
It blows my mind that we have Greta Thunberg rolling around, going on a pedalo from fucking South America back to Europe or whatever, to try and reduce her carbon footprint, to talk about a problem which is going to affect us on an existential-risk scale in millennia. And we don't have any equivalent. I mean, in the nicest possible way, like, Nick spends a lot of time working. He's not fantastic with media obligations. I think he probably had to save up about seven months' worth of his allowance to go on the Joe Rogan Experience.
And also the last hour of that podcast was the most painful hour of any podcast in history.
For anyone that doesn't know what I'm talking about, just
listen to the first hour and a half and then please do not delve into the end of it.
So I want to get into kind of my thoughts around culturally the problems with existential
risk. But as you say, we've got this sort of wonderful research that you've done to do
with the history of ex-risk. You actually start with this kind of cool timeline thing
that's all drawn out of the landmarks
of the life of existential risk. You mentioned that existential risk in prehistory, before 1600, was framed differently to how it is now. Why?
Yeah, yeah, so I would say that the concept simply didn't exist, right? People couldn't think about it.
And again, yeah, this goes back to what I was saying earlier,
it's one of the things that really drives me.
And so yeah, I'm just a historian.
I'm interested in this amazing work,
this super important work in existential risk.
But the experts in that are not me.
I go through and I try and tell the story of how we got here, which I, again,
like I said, I think that's important moving forwards because people so often fail to
notice the potential of humanity and part of that potential is looking backwards as we
were talking about.
Anyway, so, you know, there are times when there are new ideas,
you know, ideas that haven't occurred to anyone before.
So the obvious ones,
I think most people are familiar with things like,
Darwin's Theory of Natural Selection.
You can go back and you can pick through the ancient Greeks
and you can find someone here and there saying,
oh, maybe humans used to be fish. That's not a theory of natural selection.
So often the way that history is done when it comes to the history of ideas,
it's a case of people picking back through the past and going, oh, here's something that looks like this new idea.
And so that's absolutely the opposite of what I wanted to do.
I wanted to go, no, this thinking is new and that's why it's important.
So you can go back and you can find these kind of,
the greats of ancient Greek philosophy,
talking about things that deceptively look a bit like extinction events, right?
So humans have always loved massive catastrophes. We've always loved to narrativize, to talk about huge disasters, calamities, you know, pyrotechnic volcanic explosions. You know, you name
it. You can find Plato.
So the myth of Atlantis, right? That's one of Plato's, you know, it's a thing that he talks about, and he talks about these cataclysms that have wiped civilization from the earth in the past.
But then he says that humanity will be wiped from the earth again in the future. So you can start to notice that there's a cycle occurring here.
So Plato, Aristotle, a lot of these kind of ancient authors, they would talk of these massive
catastrophes.
But the important thing is, they were presuming that after the catastrophe, humanity would
return or recover, a civilization
would kind of just happen again.
So you know, it's, that's not an existential risk because the important thing in existential
risk is the irreversibility.
It's the fact that our potential is lost forever, the human species has gone forever, therefore
it won't ever realize that potential.
The very idea that a species could disappear and never return is a really modern idea as
well.
Like, all of these ancient philosophers spent most of their time thinking that
if a species disappears,
it doesn't matter because it will continue
to exist elsewhere or it will just return at another time.
So you can find them saying stuff like this.
There's one in particular, Lucretius, a Roman philosopher.
And he talks about how the earth is aging. It's kind of falling apart. It's losing its life force is kind of the way he talks about it. And so again, you might think this looks like, oh, he's talking about something like entropy, you know, but he's not, because he says nothing in creation is the only one of its kind. Nothing in nature is ever destroyed, because if it's destroyed here, it will re-emerge at some other point in the vastness of the cosmic infinity. And so this is a really important idea that took a long time to dismantle, this confidence
that nothing can ever really be lost from nature.
So be that species, be that the dodo, or be that basically value.
So the recognition, the acknowledgement that value and the potential to create value can
be irreversibly lost.
Yeah, it's a really modern one.
And like I said, that's super important because we often talk about these huge revelations
in the way we think. Darwin's theory of natural selection,
it completely changes how we relate to ourselves in this universe, you know, it completely changes
how we think about what we are and what we can do. This is another one that hasn't really been
written about or noticed yet, is, you know, this recognition that, yeah, if humanity is lost, if we lose our potential, if we destroy it through our own folly or through insufficient precaution, that's it forever.
That's a really important idea, and it's a really new one as well.
What was the first existential risk that humans faced as a species?
Was there something in Paleolithic ancestry
where we got down to a population of 12 or something like that?
Yeah, so there's this theory, and I stress theory. There's been some work more recently that's put this into question, but there is this idea that at some point, I think it's 75,000 years ago, there was this super-eruption, the Toba supervolcano.
Where's that?
Toba, I think it's in Indonesia. So, we're talking ginormous. There's a cool graph if you Google Toba, you'll be able to see it. It's a graph of the sizes of volcanic eruptions ranked. And some of the ones that are kind of more memorable, like Krakatoa, they're just like pimples compared to Toba. It was huge.
So yeah, this absolutely ginormous volcanic eruption, the theory is that it created a
population bottleneck because of the climatic fallout. This is, we're talking kind of early, you know, early behaviorally modern humans, so that's like, you know, when we were talking, doing culture. There's evidence, through genetics, that the population really narrows down at this point, and the dates kind of line up nicely with Toba.
So yeah, again, like I said, it's been put into question more recently, but there is some
evidence to show that humanity has come close to the knife edge before.
And that's not surprising, because 99.9% of all species that have ever existed are now extinct.
So extinction is the rule, survival the exception.
So yeah, I think that's an important thing to know is that potentially we have come
close before, this isn't something that is completely unprecedented.
Indonesia would have been a real hotspot as well because that's where an awful lot of
humans developed from, right?
And I think it was only 12,000 years ago that you still had different Homo species existing. You had this sort of pygmy-sized miniature human species that was still around.
So 75,000 years ago, you'd have probably still had Neanderthals, you would have had Homo sapiens, you would have had a few others. So it potentially would have caused this bottleneck for all of them, and then maybe they didn't make it out the other side.
And I think that probably highlights one of the real important things with regards to space exploration and just setting up a colony somewhere else. You know, we can look at Elon going to Mars and say, well, he's spending a lot of money and it's taking up resources we could be spending on inequality or on world hunger and stuff like that. But from a civilizational, God's-eye view, this is super important. We need to get ourselves off Indonesia, because the volcano could decide to erupt, and that can occur in any one of a number of different ways. We also need to make sure that there is no internet connection between us and Mars, so that a misaligned artificial general intelligence isn't able to get up there as well.
But my point is that this really sort of quite nicely, I think, demonstrates how precarious it was, because we look at that and we think, oh yeah, they didn't have technology, and there was only a small number, you may be talking, I don't know, 50,000 humans at most; it's not going to be more than a few million humans, definitely not. Such a small number, how could that occur? And you realize that is just an inability to judge scale correctly, that for every size that you go up, there is an equivalent catastrophe that could then completely annihilate it.
So what about the first record of a human thinking
about our own extinction?
Was there some philosopher in ancient Greece
that you found, a Nostradamus, a Bostradamus?
Yeah, so no.
So all of the people from kind of ancient, classical times, so talking like ancient Greeks, Romans, yeah, when they're talking about these big cataclysms, it's always the case, as far as, you know, in my opinion, that these are kind of false friends.
So this is an idea that I take from linguistics. There are words that sound the same in two languages but mean very different things, right? So I think a good example is Das Gift in German means poison. Not a gift. Not a gift, yeah.
So I think you get false friends in concepts as well.
So, you know, like I was saying earlier,
Plato talking about these huge,
the word he uses is conflagrations.
So this is like fire burning up the world.
He says the whole surface of the world has suffered these conflagrations that have wiped out humanity. But then he goes on to say it will all happen to humanity again in the future, so it's just part of this cycle that I was mentioning. That's a false friend, because on the surface, if you just read that sentence, it's like, oh, Plato had this kind of, you know, nascent theory, this inchoate theory of conflagration as a risk, you know, a civilizational risk,
but it's not because, again, it's not irreversible.
Exactly, exactly, exactly.
So, yeah, all of these pre-modern thinkers, yeah, there's always this strange sense where they talk about these, you know, huge disasters, these global, you know, burnings or freezings, those are the things that seem to attract our intuitions. We love fire and ice. But it's always, yeah, within this confidence that, you know, everything lost will later return.
Who gets the hat then? Who's the first person that does it properly?
So it's very gradual. So we're talking the scientific revolution, that swings around 1500, 1600, people start to think scientifically. You might think that that would
instantly knock all of this kind of naive thinking
out of the way, but it really doesn't.
There's this thing, when you work with the history of ideas, there's this really persistent
thing called conceptual inertia, where old ways of thinking persist into new frameworks
and do so very stubbornly often.
So I'm sure we're suffering from a lot of it now, and hopefully our descendants, if we make it past the precipice, will look back on us and see how naive we were.
But yeah, so this conceptual inertia and the scientific revolution
is, you know, you have this big shift in worldview
to do with people realising that we're not the centre of the universe.
So this is the Copernican revolution.
The medievals thought that, you know, the sun goes around the earth. Kepler, Copernicus, these scientists, they completely changed that. So you might think people are thinking differently now. They might start to think that, you know, if humans were to disappear, that would actually be it, that would be really bad, because we're this one planet in this, you know, cosmic void, this massive expanse. They didn't. So, you know,
you get people, some of the original, the first scientists, so Edmund Halley, Robert Hooke, these are the kind of pioneers of science. They started to think geologically about the
history of the planet. They started to say, oh, there are these huge catastrophes in the
past, like massive earthquakes that probably completely reorganize the surface of the Earth.
But well, actually, Halley is very interesting, because he says,
every time this will have wiped out the civilization, it will have re-emerged. It's just such a shame
that we've lost all of the achievements of that previous civilization. They might have had this
learned age where they reached peaks of knowledge way higher than us; it's just a shame that we'll have to catch up to
them.
So even though they're thinking geologically, scientifically, naturalistically, there's
still this obstruction here.
And the other way that this is expressed, and this is a really important one, and this
is to do with this Copernican revolution, this revelation of how huge the universe is.
So people would look at these other planets, well, not actually look at them, that happened a lot later, but they would theorize about how all these other stars, these pricks of light in the sky, are other suns and must have planets like our own revolving around them. And they thought, oh, it would be an awful waste of space if they weren't populated. So yeah, they probably have, all of them probably
have aliens on them. And people also presumed that these aliens were actually basically humanoid
or were interested in values that were like ours.
So you have this really high confidence
that humans pretty much exist everywhere.
Or if not humans, the values that matter to us.
So there wasn't really a sense of there being any kind of possibility for wasted opportunity for values, or vacuums where there is no value. Value was thought to exist throughout the whole universe; it basically fills it up. This is this idea called the principle of plenitude. You had to dismantle that before you get this person who gets the hat of being the first person to go, oh, maybe this would really matter. And yeah, it's not as simple as that, unfortunately.
You know, there's no one person who goes, oh, if humanity's gone, that's it forever.
You know, you find people kind of at the beginning of the 1700s, particularly towards the end
of the 1700s, starting to play around with the idea of human extinction and this idea that we might disappear and the exact same thing might not re-emerge.
So, this is when people started to dig up bones of prehistoric beasts
and started to realise that actually there are animals
that have disappeared and have gone forever. So again, this irreversibility starts to trickle
into the picture. But still, I mean, a really good example is from Denis Diderot, this massive mind of the French Enlightenment. And he was at this dinner party with his other friends, they were probably talking about regicide and the guillotine and the King. But during one of these conversations, one of them asks Diderot, who had these kind of quite iconoclastic materialist theories. So materialist there means that he was just being quite mature and saying maybe spirits and supernatural things don't exist. So they asked him at this dinner party, they said, can humanity go extinct? And Diderot said, yes, but, you know, it would just re-evolve again in, you know, however many millions of years. So yeah, you know, it's this kind of
gradual process. I mean, funnily enough, and let's actually, I'm going to give you a definitive answer here. The person who gets the hat is one of Diderot's friends. He was another French philosopher called Baron d'Holbach.
And he actually said, we cannot be sure that all these other planets contain humans, or that humans are therefore the natural end of all kind of evolution, all natural history. Obviously evolution, not in the Darwinian sense back then. But he said that and then said, you know, thus, therefore, if our planet was, you know, knocked off its course, that could be it for humanity. So yeah, I think, you know, let's give him the hat. He's the first person to, you know, say: A, ahead of evidence, we can't just assume that humans are everywhere and values, human values, are everywhere; B, we can't assume that they're the end of everything, the purpose of this whole cosmos that we live in; and C, therefore, we can't be sure, again ahead of good evidence, that if we screw it up, you know, something else will just re-evolve like us.
I think that really explains nicely the answer to the first question that I gave you about why it is important for us to look back. And given that we're in the scientific, post-scientific world, utilitarian rationalists, I can Scott Alexander my way to an Eliezer Yudkowsky blog, and I understand my cognitive biases, and we believe that we have reality in our grasp.
But the intellectual inertia, cognitive inertia, that you just mentioned there, the principle of plenitude as well, this presumption that everything will be okay. When you combine that with, is it scope insensitivity?
Scope neglect.
Scope neglect, that's it, sorry. That big things are really, really hard for us to work out.
And as you scale up small things to big things, the death of two people doesn't feel twice as bad as the death of one person. And the death of a million people doesn't feel a million times worse than the death of one person. So when you combine all of that together,
I think it starts to get us to a place
where those of us who want to force
the existential red pill down everybody's throats
actually start to understand why it might be a little bit
of a big medication to swallow.
Wasn't it Thomas Jefferson who was famous for believing that some animals, or that no animals, ever went extinct? And you think, Thomas Jefferson's like, he's modern history, you know? There are like drawings of him and paintings and stuff like that. And he was part of a country that's still around now.
Yeah, yeah, definitely.
So, I mean, if there's anything that my daily work is, it's just cataloging the vast library
of how often we're wrong.
Not just, you know, normal people, but these huge, great minds.
So yeah, like, you know, Thomas Jefferson, yeah, he's a great example.
So, you know, this goes back to what I was just talking about with, you know, in the 1700s,
people started unearthing these huge bones of, you know, fossilized bones of unknown beasts.
And, you know, prior to this, I think this is an interesting context, prior to
this, scientists and natural philosophers presumed that fossils were, they didn't think that
they were the kind of impressions of prehistoric animals. That was, again, I think this is a great example of what we were talking about earlier.
It seems so obvious now that fossils are the evidence of prehistoric animals.
When you're a kid and you watch Jurassic Park, you understand that, but this took centuries
for people to figure out.
The medieval theory, during the Middle Ages, what people thought was that these strange
animalistic imprints in rocks were actually nature playing jokes on us.
So they had this idea of the scale of being, the great chain of being, so it's that everything
is in this kind of ordered hierarchy, from rocks to plants to oysters to monkeys to man. And they thought that fossils were evidence of where rocks had basically become upstarts and wanted to jump above their station, basically. So this is how people thought, and, you know, I find it really charming, and I love it.
But, you know, yeah, again, people can be so wrong for so long; you know, yeah, we're often very wrong about a lot of things.
So this is how people thought for a long time about these fossils. Da Vinci was one of the first people to go, hang on, maybe these are impressions of animals from, you know, the deep past. What people used to study was mainly shells, like small fossils, and with those it was very easy to say, this animal, we might not see it alive in our kind of vicinity, we might only have fossil evidence of it, but it probably exists somewhere else in the world. And that was how people, for a long time, got around the fact that we had evidence of fossils,
but we don't want to accept species extinctions
or the possibility that nature could irreversibly,
again, irreversibly, lose any part of it.
And so this was this kind of, you know, tricky maneuver that all these very clever scientists
took to deny the possibility of extinction of anything.
You know, yeah, then people started finding mammoth bones, mastodon bones, and people started to accept, scientists started to accept, that perhaps species extinction was a thing that has happened and will happen again. And this was in the late 1700s. Thomas Jefferson, yeah, he still thought this, even up until the point when, you know, scientific consensus was reached. So it's like in the 1780s, he was still writing letters to people, very confidently claiming that mammoths still exist in the kind of unexplored regions of the Americas. So we just need to go and find a mammoth and then, you know, we don't need to worry about extinction anymore. So yeah, no, you're absolutely right to link this up into, you know, the kind of prevalence of bias and wrongthink.
Yeah, I want to give this hopeful message of how far we've come and how important all that is,
but at the same time, you only reach that by seeing how vastly wrong we often are.
Yeah. Every age has probably had its Cassandras, right? The people that were certain that the end was on its way. And you highlight some differences between extinction and apocalypse, and between prophecy and prediction. Can you lay out how all of this works for us?
Mm, so I guess listeners might have already thought,
you know, what's this guy talking about?
People have been thinking about the end of the world forever.
I often get...
Have you not read the Bible?
Exactly. I often get that comment.
Yeah, it's one of the more common comments.
What is...
Judgment Day, yeah, exactly.
Yeah, has this guy not read the book of revelations?
So, yeah, the claim I make is the apocalypse
is distinct from extinction.
And what I mean by that is that, you know,
you look at the end of the world as it's presented
in religious traditions and mythological traditions and often it's seen as the consummation or the fulfillment of the
moral order. So what I mean by that is think of think of judgment day, the
Christian version, it's just in the words judgment day. It's the revelation of how God thinks everything should be.
So it's the consummation of morality.
It's not like, you know, it's not anything bad.
Even though it might be very inscrutable to us mere mortals, we might not be able to understand it fully, you know, God's decree, that tribunal, is ultimately the right decree.
So, you know, another image that pops up is this idea of sorting
the good from the bad.
That's what the end of the world judgment day
is kind of this point where the good from the bad
is all fully sorted, everything's in its right place.
And that's the end.
And then the curtains can close.
That's not actually a bad thing.
It might be the end of time, but it's actually
really great, like I said, morality is fulfilled in this instance. So in the modern naturalistic,
scientific idea of extinction, it's completely different. It's not just a new version of
that old idea of apocalypse, it is actually a contradictory concept because instead
of the ultimate fulfillment of morality, it's the irreversible frustration of it, at least
human morality and going back to this point about aliens and other planets, as far as
we know, we're the only animal that follows ethical argumentation
that's able to, you know, kind of think about moral reasons and what should be and what
shouldn't be.
So, you know, if we're gone, all of that is frustrated, potentially irreversibly.
And that's really important.
You know, they're completely contradictory ways of looking at this. So there's a pithy way that I like to put it, which is that, you know, apocalypse supplies a sense of an ending, whereas extinction anticipates the ending of sense. It's this idea that, you know, meaning and purpose are irreversibly frustrated within this vast physical cosmos, this vast silent physical cosmos that continues, quite purposelessly, without us.
And yeah, you know, as a final point, that's also just a really simple, another point to make, a really simple one
is that often in religious apocalypse, the physical cosmos doesn't go on without us,
it ends with us. Or it's again nested in these cycles. So a lot of Eastern kind of ideas.
So the Buddhist apocalypse, it's cyclical, the world ends, it gets reborn, it burns, it gets recreated.
So yeah, nothing's at stake in apocalypse, whereas in extinction, everything is at stake.
It's like an egotistical, metaphysical, pre-Copernican view of ourselves, right? That the Earth is the center of the universe, but not only that, we are the center of the universe, and not only are we the center of the universe, we are the bookend to the universe, because without us, what is the point of the universe?
But oddly, that line of reasoning, without us, what is the point of the universe, is still what we're continuing forward now. There are two potential answers that we have to the Fermi paradox of where are all the aliens. One is there aren't any, it's just us. The other is they're out there. Both of them are fascinating, both of them are terrifying, but they do differ slightly, presuming that the aliens that we're talking about also have morality and the ability to step into the realm of ethics and stuff like that. The differences are quite profound. Every other semi-sentient being, bottlenose dolphins and bonobos and stuff like that, isn't crew aboard spaceship Earth; they're cargo.
We are crew.
We can affect the direction of what is going to happen. We can
save the other animals. We can allow them to exist at greater and greater levels of comfort
and of bliss and of happiness, right? And we can also do that to ourselves and then we
can scale that across the universe. It's so interesting to see this apocalyptic approach. It is very much a very ego-driven, sort of pre-Copernican, human-centric view of everything, and why wouldn't it be that way? When you're told that God built the world and the universe in six days, and the most important thing that he built was on the sixth day and it was the human, etc., etc. All of your culture, all of your stories, all the narratives that you've been given are telling you this is why everything is here, you are why everything is here, this is how special you are. And I think this might also highlight
why I love existential risks so much,
that it's the same as looking at the night sky.
It provides an equal amount of awe and dread.
And it reminds me that the universe very, very much
is indifferent, whether or not we continue going on.
And it is our stone to roll up the hill
if we want to do it. And yeah, upon realizing that, I think that also probably highlights
why the denial of death thing from Ernest Becker kind of gets macro-aggregated across people
with this. It's a very uncomfortable topic to think about, because it reminds you that no one's coming to save you. No one gives a shit. Nothing cares about whether or not we continue to exist except for us, which means it's all on us. There is risk, there is responsibility, and the buck stops with us.
Yeah, yeah, I couldn't agree more. If there's one major theme of what I've written on this
history, it's breaking the spell on that kind of wishful thinking, where we allow what
we want the world to be to contaminate our theories of what the world actually is. So, really broad
brush generalization and being unfair to lots of very clever people that came before
us and whose shoulders we stand upon. But the pre-modern worldview often doesn't really even think that there's a distinction between ethics and physics. What I mean by that is, you know, take, for example, the medieval cosmos, you know, this is the pre-Copernican one, the idea of Earth in the center, there are these concentric nested spheres, and out at the edge there's this primum mobile, it's the outer sphere, that's where God lives, that's where all the value, the best stuff, is. All of those spheres are populated with hierarchies of angels. The whole thing, there's value suffusing the whole thing, and the very structure of the cosmos, the whole structure, is the moral order.
So that's what I mean by there's no distinction between ethics and physics there. I perceive, you know, part of what kick-started science, and, you know, not just science as this,
you know, disinterested objective endeavor of finding out, you know, how things hang together,
how the facts hang together in the broadest possible sense.
But this newer idea of how we can then, knowing those facts,
get our values to fit together with this picture
in the broadest possible sense.
So I guess I mean, what is to be done?
We've learned a lot about the cosmos, the objective, physical, uninterested, unresponsive cosmos, in independence of us. We've learned a lot about that, but now we're asking this question of how do our values, how could our values, fit into this? And yeah,
I see this as all part of this picture. So yeah, you have to kind of basically shake off that wishful thinking, of thinking that, you know, in independence of what we do, the universe is just a great place and, you know, kind of all aligns with our values no matter what, which I think is the default way of human thinking. So, you know, there's this idea in the philosophy of science of folk psychology, where another word for it is the manifest image, as opposed to the scientific image. The manifest image, you know, is a picture of the world as filled with colors, intentions, emotions, all the things that we're used to in our daily lives.
The scientific image is this really barren alien place that's made up of atoms,
electrons, you know, subatomic forces. You know, you have to realize that distinction
and the fact that the way the world actually is is not the way we want it to be or it should be or it ought to be, to then reintegrate and think of, well, okay, we've realized that, we've woken up, we've broken the spell. What do we do next? How do we make this world that, you know, just is the way it is, independently of us, how do we make it into the one that we want, or not just that we want, but that would be worthy in some, you know, meaty moral sense? And yeah, this wide-scale history that I try and tell of us waking up to the possibility of extinction, it's this, you know, it's this kind of landmark event in that. It's realising that, you know, everything rests on us, not because we're the centre of the universe, but because the universe simply doesn't care about us. But strangely enough, that also kind of re-centralises everything upon us, you know, until we find evidence to the contrary, and I hope we do, I'm very hopeful. You know, I want SETI to be, you know, a successful endeavour.
I hope there are wiser beings out there than us. But we can't just act as if there are
ahead of that evidence. So, yeah, it's this, you know, it's this strange historical dance between disillusion and that mature recognition of, yeah, I feel alienated by the possibility of extinction in a way that the version of me that lived 500 years ago just simply couldn't be. Yeah, I'm alienated by that, it terrifies me, but this is the thing I often say: if I'm hurtling towards a cliff edge, if I'm driving, you know, 60 miles per hour towards a cliff edge, I want to know where that cliff edge is, rather than just wishfully thinking, oh yeah, the car will be fine, I'll be fine, there's no cliff edge or whatever. So I see, yeah, another way you can put this is, you know, waking up to extinction is kind of a sobering, surreal moment for the human race. And, yeah, you know, it might be disillusioning, it might be upsetting, it might be alienating, but it was something that we needed to do if we were to have a future.
So yeah, I think it's, again, like I say, it's a massive achievement, and we need to keep that in view with the
future because I feel that it's so easy to be disillusioned with human potential to focus
on the kind of the atrocities of the recent past and think that there's an inevitability that that will color the whole future.
So, you know, we need to, I'm not saying that we don't,
I'm not saying that we should forget, you know,
these things, but yeah, I think that there's often,
there's often, particularly when we talk about the space colonization discussion, there's often a kind of, I refer to it as a geocentrism about history, this idea that we will just repeat everything, you know, all these kind of mistakes of the past, that's just what spreading out will do. C.S. Lewis has this great quote where he, you know, says, I feel sorry for the aliens, is basically his sentiment, because, you know, us horrible sinners are going to go out and, you know, ruin the galaxy. You know, that's a form of geocentrism. Just like
thinking that the earth is a center of the universe, thinking that our history colors
our future. And of course it does in an important sense, but the work is,
you know, keeping an eye on the places where real progress has happened and seeing the
capacity, because this goes back to the point of dolphins and, you know, marmosets and, you know, all these other incredibly intelligent animals that we share this planet with. One way of talking about it is, you know, humans are the only animal that responds to ethical argumentation. That can sound really abstract and kind of up in the clouds. I think a good way of making it concrete is we're the only animal that's ever corrected itself, that's ever thought, oh, what I previously thought was wrong, about what I should do or what I think about the world.
So yeah, we've been really wrong, but we have that capacity to correct ourselves and that's
the capacity to make the world the best place.
So yeah, I really like the cargo and crew framing. The way that Toby Ord puts it is that animals are like moral patients, whereas we're moral patients but also moral agents. I see the crew and cargo distinction as another way of putting that.
History can be hopeful.
Yeah.
I don't know what it is about this time at the moment. Everybody that's listening, and yourself, knows, and you have very cleverly managed to evade the social justice precipice of saying that there aren't any problems right now and that we don't have anything to fix, in your little last monologue there.
And it's because at the moment,
there are, to me, it would appear
that there is an obsession with injustices and many of
them are indeed injustices that need to be fixed.
But I think it discounts so much of how far we've come.
It discounts this view that you've given us of just how backward our views were back when Thomas Jefferson was on the planet, which was not long ago. And yet, when we have these times of crisis, when we have times of real, serious bloodshed and apocalypse and concern and catastrophe, people have to center their values. In the time of a real crisis, we focus our values; in the time of no crisis, we create our own.
And I think that the problem right now is that people think there is no crisis.
And that's because existential risk, the potential for human catastrophe and the permanent and irreversible stop of the human pursuit, isn't there at the forefront of our minds. If we found out that there was a meteor that was heading toward Earth and it was going to hit us within the next 1,000 years, and we had it with certainty and the news story said the same, I think everybody would live life in a very different sort of way, some ways better, some ways worse, but I think it would bind us together as a civilization. I think it would stop us from focusing on things that in the grand scheme of human civilization don't matter. As Toby Ord says in The Precipice, we are currently shuffling along this cliff edge.
It's real precarious. And we only need to do this for a little bit of time. I'm not saying that
we need to completely dispense with trying to fix the wrongs that are going on in the world. What I'm saying is that right now there is a very,
very important job that needs to be done by all of human civilization, and it is a game
of whether or not we get past it or we don't, and if we don't, it doesn't matter how much
social injustice you've fixed. What I think that's led to, and everybody will know this sentiment at the moment, is this oddly human-deprecating view of the world, where you listen to people talk
about ecology and the environment as if dropping a can on the floor means you hate the world,
as if the fact that you don't drive an electric car, or don't get the bus to work, or don't cycle to work,
means you actively hate Mother Nature
and she's rebelling against us.
And we just start personifying and adding narratives like it's fucking 2000 BC.
You know what I mean?
Like we're just layering on all of this really primitive thinking
that we've got about the sacred and the profane, meanwhile Cardi B's on stage singing about
WAP and no one cares. So all of this bizarre human-hating that's going on, both within culture
and within certain subgroups, within people, I think that a lot of that would be fixed
if we understood just how close we are to complete annihilation
of everything that you care about, of everything that your genetics down the line could ever care about,
and of the ability to seize the opportunity to step into our own programming.
I think that's what you said there.
The opportunity and the ability for humans to step into their own programming
to redirect the direction in which they are going is what matters.
And, man, I really, really hope that the work of the guys at the Future of Humanity Institute continues to blow up.
We need a Greta Thunberg of existential risk. And I mean that in a very, very real way: we need a
front-of-Time-magazine,
social-media-savvy,
great speaker that gets the world behind them, because it is the biggest question that we all have.
One thing that I haven't heard you talk about, and I'd love to know your opinion on this:
given the fact that you've taken such a broad view over this subject, over the sweep
of history, just how important do you think this period of the last 30 years has been, with Parfit
and Bostrom and Ord? If you were drawing a graph in terms of progress and understanding,
how much of a hockey stick has that been? Is it linear, or has that really been a marked
jump in our understanding around existential risk? Yeah, I think if you wanted to,
yeah, if you had to, you know, depict it pictorially, I think there's been a step change.
And again, this is one of my senses from studying the history of ideas and the history of intellectual progress: that progress is often lumpy and kind of uneven, it's kind of punctuated,
right?
And that's often because these ideas require lots of smaller sub-ideas to come together.
So, you know, you have these sub-ideas bubbling away, but they're not converging,
and then, bam.
And I think we've just gone through,
we've just gone through a point like that.
So, I mean, I'm hopeful about,
I'm not despondent about the fact that, you know,
not enough people are talking about existential risk.
I mean, one thing to be sensitive to is, you know,
thinking about the long-term future
and how all of our potential vastly outweighs the present and all that stuff.
There's this issue of fanaticism that the community around this is very aware of.
It's the idea that, basically, when you're weighing this huge potential ahead of us, it can outweigh any sensible
decision in the moment, because we'd want to vouchsafe that future regardless of anything else.
So that's something the community is very self-conscious of, an issue within this technical
ethical discussion.
So, that's one thing to be aware of. Yeah, another thing is I'm hopeful because
it's new. That's why there aren't many people that are talking about it. It's because it's new.
These ideas haven't been around for long and it often takes a while for ideas to really catch on
and to start to trickle through the relevant streams, the relevant institutions.
So yeah, I think it's new. It's new and that's why.
I am hopeful that the discussion will continue. And I don't think that people talking about other things, like Greta Thunberg, are
taking resources from people talking about existential risk.
Climate change might not be an existential risk in the sense that it will destroy
Homo sapiens, all of them, but it could actually irreversibly make our future
poorer and less well off. So, you know, I think you have to
balance all these things, yeah.
I agree. Thinking about something that you brought up, which I'd read in a book ages
ago, and I'd totally forgotten about, and it's absolutely fascinating. Can you explain what the Doomsday argument is, please?
Yeah, I'll give it a try. So there are different versions of it, and interestingly they appear around
the same time independently. This is another thing in the history of science, to nerd out about it for a second: often, with really great ideas, people
independently arrive upon them around the same time. Why do you think that is? I think it's again
similar to what I was talking about earlier: there are all these subcomponent ideas that are required to then reach the big theory. So natural selection
required an awareness of species extinction in the past, it required an awareness of population
dynamics, blah, blah, blah. So Darwin and Alfred Russel Wallace came up with it at almost
the same time, completely independently. Anyway, the Doomsday argument.
Basically, applying the Copernican principle, the principle of mediocrity,
we should assume that we're in an unexceptional place.
And that principle can be applied very broadly.
So when you apply it to
our position in human history, imagine a reference class of all humans ever born,
ordered temporally. Obviously the first human ever is arbitrary in
a sense, because, you know, evolution. But imagine the first human ever
through to the last humans ever: we should assume that we're in an unexceptional placement within
that broad reference class, right? One of the versions makes reference to population, and how population has expanded
and, if humanity survives into the long term, will expand a lot more. It's far more likely
that we are later on, where there are more people,
imagine this as a curve, than very early on, where there are barely any.
So the analogy that's often used is: imagine that you have a bag of numbered balls, and someone tells
you that this bag either has five balls in it or a hundred balls in it, and you pull out a ball numbered four.
You would rationally presume that you've got the bag with the five balls, right?
Applying that reasoning to the long-run human race,
the conclusion is that we're probably living later
on, towards the end. Now this gets into all kinds of thorny issues, super complicated,
kind of reasoning, and it's very interesting, and it's very controversial as well. So yeah, I think the best place to go
to find out more about it is actually the Wikipedia page.
I think Vox did quite a good explainer on it as well,
but it's a really fascinating argument,
really technical, really complicated.
But it also made it better.
And also a Reddit thread.
There'll be a Reddit thread.
So many Reddit threads explain everything.
Reddit's becoming like the new Wikipedia, I think.
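[For readers who want to see how the bag-of-balls analogy cashes out numerically, here is a minimal Python sketch of the Bayesian update it relies on. The 50/50 prior, the ball numbered four, and the ballpark birth-rank figures at the end are illustrative assumptions chosen to match the conversation, not figures from Moynihan's book.]

```python
# Minimal sketch (not from the book): the Bayesian update behind the
# bag-of-balls analogy for the Doomsday argument. All numbers here are
# illustrative assumptions -- a 50/50 prior between the two bags, and
# rough birth-rank figures at the end.

from fractions import Fraction

def posterior_small(drawn, n_small, n_large, prior_small=Fraction(1, 2)):
    """P(we hold the small bag | we drew ball number `drawn`),
    assuming each ball in whichever bag we hold is equally likely."""
    like_small = Fraction(1, n_small) if drawn <= n_small else Fraction(0)
    like_large = Fraction(1, n_large) if drawn <= n_large else Fraction(0)
    numerator = prior_small * like_small
    evidence = numerator + (1 - prior_small) * like_large
    return numerator / evidence

if __name__ == "__main__":
    # The analogy from the conversation: 5 balls vs 100 balls, we draw #4.
    p = posterior_small(4, 5, 100)
    print(f"P(five-ball bag | drew #4) = {p} ~ {float(p):.2f}")  # 20/21 ~ 0.95

    # The Doomsday argument treats your birth rank the same way; the
    # "short history" and "long history" totals below are purely hypothetical.
    born_so_far = 100_000_000_000        # rough count of humans born to date
    doom_soon = 200_000_000_000          # hypothetical total if humanity ends soon
    doom_late = 100_000_000_000_000      # hypothetical total if humanity lasts long
    p_soon = posterior_small(born_so_far, doom_soon, doom_late)
    print(f"P(doom soon | our birth rank) ~ {float(p_soon):.3f}")  # ~0.998
```

[Running this gives roughly a 95% posterior for the five-ball bag, which is the intuition the argument then transfers, far more controversially, to humanity's placement in history.]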
Thinking about sort of looking forward,
what lessons can we learn from looking at our extinction
that inform the enlightenment mission moving forward?
How do we carry this into the future?
Mm, so I think,
and this goes back to what I was saying about disillusionment, we had this
Enlightenment where some really brilliant ideas were invented, and some pretty bad ones were implemented as well.
We had that, you know,
it began 300 years ago.
And then we entered into this period afterwards
where certain other cultural forces started,
you know, to come into the picture, started to compete with that Enlightenment
idea of progress, of human potential, of the capacity for reason to correct itself, to
basically supply reasons for everything, so, you know, to remove the arbitrary, the unjust, the irrational from not just our picture of the
world, but our conduct within it.
We've had this kind of countering. The Counter-Enlightenment was actually a historical moment,
but I think there's still a prominent cultural strain of this counter-Enlightenment thinking. It can be pessimistic and it can be romantic.
There are these cultural strains around that.
The big philosophers like Nietzsche, some of the 20th-century philosophers as well,
from Sartre down to Derrida, they're all kind of playing with these anti-Enlightenment
ideas. Now, I don't want to make predictions about culture because there are so many degrees
of freedom that you'd be an idiot to do so, but I do feel that that disillusionment with
the Enlightenment and its capacity, so not just what it was,
but what it could be, taking the good bits out of it
and critiquing the bad bits,
is kind of a growing up. And this is what
the big dogs of the Enlightenment said themselves.
Kant said that it is just humanity using its own reason
to exit its infancy.
This is where the metaphor comes from, actually. You see, Carl Sagan said that we're
in this period of technological adolescence, defined by this misfit, this disjunct between
our might and our wisdom. That's where the metaphor comes from; it actually comes from Kant and his Enlightenment forebears. So anyway, what I'm saying is, you know, we had this period of
Enlightenment, then we had the kind of critique of it, which is itself part of the Enlightenment:
criticizing, you know, critiquing and unveiling the biases, unveiling the injustices.
And we're still kind of living through this point. I think it is, in a sense, to use Sagan's term,
a kind of adolescent phase where it's like,
we've realized our capacities,
we've realized the shared damage we can do to ourselves
and this world that we live in.
And lots of people react to that with this
continuing disillusionment: this is awful, humans are awful.
We can be, right?
And it's, you know, again, these metaphors can become cheesy
and they can become very broad brush and very over generalized.
But I think we are going through that, you know,
not just this technological adolescence,
but also this kind of adolescence in our image of ourselves
and what we can do within the world, and what,
you know, we should do, what we ought to do.
Yeah, there's a lot of disillusionment there.
And taking this really broad, long-term view of human history and culture and ideas
that I like to do, I see that as just a necessary step in waking up and growing up. You know,
when you're a teenager there are times when you do something really awful and, you know,
screw up, and then you feel really bad about yourself. But then you learn that that's just part
of the process. And so yeah, you know, real broad-brush, long-term talking.
I think that, you know, as a civilization, and obviously, you know, again, that's a big abstraction.
There are lots of parts, and there are lots of parts below that.
But I think we're going through that phase right now.
And yeah, so I'm hopeful as to what's to come next.
You know, we've realized how bad we can be.
But now we can focus on stopping that badness
from coming to the fore,
and we can focus on what we can do and our potential.
We are gods, but for the wisdom.
Who's that?
That is Eric Weinstein, actually.
Right.
And Daniel Schmachtenberger has said,
we're gods, we're just shitty gods,
which is the same thing said in a different way.
One thing that's just popped into my head there, mate.
Do you think that we're further ahead in terms of ethics or technology?
I mean, that's kind of
incomparable, I know, an apples and oranges comparison. But to me
personally, it seems like we're further ahead in technology. It seems like
our technological power outstrips our wisdom. We've made a lot of progress.
We made a lot of progress with ethics,
with understanding what good is and how to act in the world.
But it seems like we're able to scale technological progress
much more quickly than that.
Yeah, yeah, yeah.
So, caveats about incomparability aside,
broadly, yeah, we are definitely further ahead in technological progress.
This is actually a point that Parfit makes, to loop back to the beginning,
in his argument about how bad extinction would be.
He points out that applied ethics,
so not just thinking about what value is
and arguing about meta-ethics,
which is what value is and how we can define it,
but working out how we can actually implement
our ideas of value in the world,
so making ethics effective, is actually really young.
And so as part of his wider argument that human extinction would be so awful
because we have this vast future ahead of us
if we get things right, he says,
you can be kind of aggregative about it
and say, you know, there will be so many zeros on the number of
future human lives. But he says there are also these intrinsic goods, these intrinsic
values, so like art, science, right, knowing about the world. But then he points to this
other one is ethics. And he says this is the most immature field of them all: it's only since the Enlightenment that people have been doing secular ethics, so
thinking about morality without God breathing down the back of your neck. And it's only
even more recently than that that people like Parfit himself have been
dedicating their entire lives to the pursuit of this, trying to, you know,
get that low-hanging fruit when it comes to moral progress.
Those were the words I had in my head, the low-hanging fruit. So I think, yes,
we are in this stage where our might kind of outstrips our wisdom.
We are in our technological adolescence, or on the precipice, however you want to describe it.
But that doesn't mean that there's, you know, again, I think there's big
potential for moral progress. And, you know, people are working on these things. Like applied
ethics is booming. There's this effective altruist movement where people are, you know,
thinking how best can we use our resources? There's a kind of offshoot of that
more recently: long-termism is the idea,
and Toby Ord's The Precipice
is kind of, I guess, the founding text for this.
But there's lots of work coming out
of the Global Priorities Institute,
which is kind of the sister institute
of the Future of Humanity Institute,
on this long-termism.
Yeah, and it's about, you know, how can we,
how can we do applied ethics?
And so yeah, I think, you know,
our technology is outstripping our wisdom right now,
but there's low-hanging fruit to be picked.
There are some good players about to run out
onto the pitch for the applied ethics team as well.
I hope so, man, I really do.
I think it's very timely for you to release a book about the history of existential risk. I really do. The Precipice by Toby Ord is a fucking magnum opus; I absolutely adore that book.
Superintelligence, fantastic. Human Compatible by Stuart Russell, which came out last year, that's also fantastic.
But this, Thomas Moynihan's X-Risk:
How Humanity Discovered Its Own Extinction.
Dude, you've done a really, really good job with this.
The number of footnotes is absolutely terrifying.
It's completely ridiculous.
And for anybody that's enjoyed this conversation today,
it will be linked in the show notes below.
I've also added this to my Amazon reading list
because that's how much I enjoyed it.
So go and check this out if you want.
Anywhere else that you want to send people?
My website is thomasmoynihan.xyz.
I keep that updated with like short essays that I do.
So yeah, I think that's it.
Perfect, man. The end of Civilization hasn't occurred during this podcast, so we have managed
to make it through. Thank you for coming on. Cheers, thanks a lot for having me.