Modern Wisdom - #229 - Mara Cortona - Why Is No One Talking About Existential Risk?
Episode Date: October 8, 2020
Mara Cortona is the Executive Director of the Astropolitics Institute and Founder of Nöonaut. The world is going to end. Eventually. For most of us that concern is far away, but the smartest minds in... the world of existential risk put the likelihood that we survive the next century at 1 in 3. With this hugely terrifying statistic hanging over our heads, why is no one focussing on the big problems? Poorly aligned Artificial General Intelligence, Biotechnology and other risks are significantly more dangerous than global warming or the 2020 election but get paid far less lip service. Sponsor: Get FREE access to a supercharged calendar at http://bit.ly/wovenwisdom (discount automatically applied) Extra Stuff: Check out Mara's Website - https://www.noonaut.space/  Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/ To support me on Patreon (thank you): https://www.patreon.com/modernwisdom - Get in touch. Join the discussion with me and other like minded listeners in the episode comments on the MW YouTube Channel or message me... Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/ModernWisdomPodcast Email: https://www.chriswillx.com/contact Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Howdy friends, welcome back. My guest today is Mara Cortona, who is the executive director of the
Astropolitics Institute and the founder of Noonaut. The world is going to end eventually. For most
of us, that concern is far away, but the smartest minds in the world of existential risk put the
likelihood that we survive the next century at one in three. With this hugely terrifying statistic
hanging over our heads, why is no one focusing on
the big problems?
Poorly aligned artificial general intelligence, biotechnology and other risks are all significantly
more dangerous than global warming or the 2020 election, but get paid far less lip service.
And today, Mara is going to put forth her case about why she thinks that that disparity
is happening, and also
to hopefully try and alert you to exactly where the main existential risks lie.
But for now, it's time for the wise and wonderful Mara Cortona. So what are we going to be talking about today?
I'd love to talk about existential risk and the way that we relate to it as individuals
and as societies.
Positive conversation for us today then. Everyone feeling good, leaving this
in fear of the world stopping. It's something that really
troubles me that people aren't concerned about more than they are both
existential risk and global catastrophic risks. To the point that it is something
that I tend to bring up in
casual conversation, which makes me very fun at parties. But it is a thing that I really think we
should be talking about more. So here we are. What do we need to know about to start then? What's the
glossary of words and key terms or whatever it is that we need to be aware of before we can begin?
Sure. Well, I initially, I think I'd like to draw a distinction between X risk or existential risk, which is a risk to the entire species, as we know it, and global catastrophic risks, which
perhaps wouldn't cause the entire extinction of our species, but would lead
to mass die-offs and a really low quality of life.
And those types of risks are both more likely to happen and more likely to happen sooner.
than the major types of ex-risk that are frequently modeled and talked about, so those are important
things to discuss.
Some of the main, some of the most critical and pressing
forms of ex-risk, obviously climate change
is the one on everyone's mind, though engineered pandemics,
bio-weapons, and nuclear war are right up there.
So it's really, they're really all anthropic risks. And those are distinct from sort of
this background rate of existential risk that's always there from like
asteroid collisions or perhaps natural pandemics or super volcanoes or the
like. There's always this background risk of those happening, which is fairly low,
as we have been on this planet in some degree for, you know, 2000 centuries, and we haven't come
across anything like that yet. So we're at the point where those natural background risks are far outweighed by the anthropic risks that are being precipitated
and accelerated by our own activity. So those are some of the main terms that I use.
The interesting thing that I learned upon reading Toby Ord's The Precipice, which is
going to contribute to a big chunk of my understanding for what we're talking about today. One of the things that I thought was really interesting was
technology, the cause of furthering civilization, well-being, further industrialization,
et cetera, is causing many of the anthropic risks. However, because the natural risk is non-zero,
i.e. if you ran this world, this planet for long enough, you would get hit by an asteroid
a super volcano would come and fuck you in the ass.
If that's the case, you have to have sufficient technology to be able to avoid that from
occurring.
So, if you were a militant Luddite and said, well, all that we need to do is just stop all technology,
stop all industrialization. We existed as hunter-gatherers for ages. Let's just
go back to being farming people like 10,000 years ago. We'll be fine. That's not
even an option because from a civilization standpoint, eventually the
natural ex-risk is going to catch up with you and you don't have any of the
technology to save yourself.
It's true. Though there are schools of thought on that as well, there are many
types of organisms that have been around in an almost unchanged state for millions and millions and millions of years, mostly marine animals, but cockroaches and horseshoe crabs are doing far, far better than we are in terms of longevity and stability.
So on the one hand, their rather simple way of organizing and maintaining themselves seems to be very successful.
And you're getting into, like, basically eternity.
At some point, we will go extinct on this planet, but at the same time, you
know, 580 million years, as some animals are currently, that's currently their record,
like sponges. That's a great run of it. And it's clearly far more than we have yet pulled
off, and more than it seems like we're likely to. So on the one hand, there is the possibility that
regressing to a more simple organizational structure could create the longest term payoff.
But that's not really in our nature, is it? I think even if we wanted to go back to that sort of,
you know, back-to-nature type of organization structure for that purpose, we wouldn't be successful. It's not what we do.
And I'm not convinced it would be best anyway because there is so much suffering in the natural
world and we have this constant drive to eliminate suffering. And so this kind of glorification or romanticization
of the animal world feels a bit misplaced to me. I think it's really unpleasant to be
almost every other animal besides a human. Ultimately, it's, you know, very unpleasant to
be a rabbit being eaten by a fox. And throughout the vast majority of human history,
and in the lives of most animals, it's just really, really hard to be alive and there's a lot of
suffering. And so what you're pointing to with these kind of parallel tracks or yes, our technological
advancement is really jeopardizing everything that we hold dear. At the same time, what it's making possible is something that, you know, a type of reality,
a type of world without suffering that we've never been able to conceive of.
I mean, just, it's a funny thing to talk about the massive amounts of risk that we're facing
right now, given the fact that our lives are so much more
comfortable and pleasant and the possibility is so much greater. For nearly
everyone in the world than it has ever been by far. So it's, I think the way that
I relate to that sort of dichotomy, that dilemma is that suffering is
inherently, like from an evo-psych perspective, more motivating than pleasure
or happiness. And so it's more, you know, we're more motivated to want to avoid a catastrophic
outcome than we are motivated by realizing this type of utopic future that we're hoping for.
Do you think that scales when we're thinking about civilization wide? I think people are quite
capable of being pain avoidant rather than pleasure seeking when it's themselves, but when you abstract
that to even your town or at the very least a civilization, if ever there was a year to show us
this as an example, it was 2020, like the number of people that are saying, you know, the death rates are so low, it doesn't matter, let's just get the economy started
again. That's very short-termist thinking. I appreciate that not everyone has the utilitarian
view that our goal is to reach our full potential as a space-faring galaxy colonizing civilization,
which on the biggest, biggest
sort of picture thinking, that's what we should be aiming for. We should be sacrificing
everything that we can in our lives right now in order to ensure that the trillions that
come after us are still able to be alive. But people can't think with that much abstraction,
not naturally, not without learning an awful lot.
Yeah, absolutely.
Gosh, it's so interesting how my thinking has evolved over the course of 2020 due to
watching responses to the pandemic.
It's a really interesting example because with the COVID-19 pandemic, there shouldn't
have been the level of uncertainty about it that there
seemed to be.
I mean, we know that pandemics happen, we know that pandemics at this scale happen.
It's, you know, it's, it's shocking to us because we haven't experienced something like
this in our lifetimes, but we all knew, like, it was modeled, like, that something like
this was going to happen.
And we were completely unprepared globally.
And not only were we unprepared globally, but the fact that so many people in the world
are, I mean, just caught up in these concerns about whether it's a hoax or these conspiracies.
I mean, it's really very simple measures that need to be taken to control the spread of
this pandemic that we all knew was coming.
And I think prior to this year, I had more faith in the possibility of persuasion, mass persuasion and collaboration.
I would say that the number one ex-risk is really actually, like, communication; it kind of underlies our responses to all of the others, collaboration as a global whole. And I think watching the global responses to
COVID, it's very clear that our communication and collaboration systems are very broken
and things are heavily politicized. And fixing that is the way to solve, I believe, climate issues and pandemic issues.
I mean, you can only imagine if it had been, like, an engineered pandemic with a higher mortality
rate.
I mean, it would have been catastrophic and that very well could happen in the next 100
years.
It's like a one in 30, I want to say.
I think that was Ord's modeling, the Oxford guy, for the chances of an engineered pandemic
wiping us out within
the century.
So I think ultimately it's going to come down to technological development.
It's not going to be something where we're going to be able to sway the masses and get
everybody on board with being very concerned about X-Risk because like you're talking about
the heuristics that we use to relate to these massive problems
just don't work.
Our biology and our psychology have really evolved in keeping with the Dunbar number, which
is like 150.
It's the number of people that we are expecting to be able to form a relationship with and
historically in a tribal setting would have known.
And so that translates to our sphere of influence.
Our sphere of influence is effectively like 150 people except now it's not.
We had no, we never had any concept of force multipliers like we have now.
And so our actions and our inactions can affect people all over the world. Like I can donate $5 to an organization providing bed nets for malaria, which I think has been
shown to be, like, the single most effective use of monetary donation for relieving poverty
and suffering.
And the impact that I can have is huge, but I'm still operating in terms of, like,
expecting to see people, my community,
and build a relationship with them
and have a story and influence them on this one-to-one level.
So like you were talking about offline,
we see a story about a little girl
and we're so motivated and we might waste
massive amounts of resources
in a way that's ultimately not very helpful when we could have
done much more with those resources.
And so when you take it a step further and you talk about the infinite set of lives that
don't even exist, I can relate to my children.
It's a lot harder for me to start abstracting out relating to my grandchildren and my great-grand
children, much less, like, the trillions of possible lives that don't yet exist, and
valuing those lives and giving them a spot in our policy discussions and giving them representation.
And so really, it's a really hard concept to get our heads around.
At this point, I don't think it's reasonable to expect that of the general discourse.
I think you're right.
And so, of course, then, yeah.
Yeah, I think the only other solution
is action by, really, the technological elite.
What does that look like?
I mean, it would depend on the ex-risk,
obviously, like in terms of climate change.
We talked about electric cars and the way that those have become mainstream and it hasn't
been by persuading a lot of people that this is the best thing to do in a moral sense.
It's had to just come in from a technological standpoint and become a thing in the world
that people want to do for their own
intrinsic motivations. It's going to be the same for like animal cruelty or vegan foods and
you know, things like that that are big issues for people, but you're never going to
convince the majority of people to be vegan. Whether they should or not is another
question from a health perspective, but it's just not going to happen until we get to a point,
factory farming is not going to be eliminated until we get
to a point where the technology is there.
We've provided a cheaper, easier, better, superior way
of providing that value for people.
So I think with virtually every issue, especially every
existential risk issue, it's going to come down to the actions
of a few people in power and the way that they're
able to reorganize.
And so it feels a little gaslight-y with the climate change
debate, this huge onus that's put on the individual consumer.
As though it's the fault of the average person
that this is happening, and if we drove less or we ate less meat,
we personally could alleviate some of the...
Have you had a look at the stats
on how much industrial units and large factories,
versus factory farming, versus et cetera, et cetera,
contribute, as opposed to whether I need to buy a light bulb,
which is three times the cost, but uses half of the energy?
Like have you ever looked into the differentials on that?
I have a bit, and it's really staggering.
It's like, fundamentally, it's not going to happen. We're not going to sway the masses.
And even if we did, that's not really where the power lies.
Like the average person globally is starving.
They are not worried about their contribution to climate change.
And then even in a wealthier Western nation,
the average person in America right now
is struggling through a pandemic and trying to feed their family. And so we have this conflict of
interest, which is always there. It's part of nature, even the most symbiotic relationship that
we know of, like a pregnant mother and a fetus, it's really not that harmonious.
That's the reason that pregnancy and birth are so fraught with dangers.
Like the fetus has an ultimate impetus of like completely draining the mother of all
nutrients and resources so that it can be very healthy and robust.
And then the mother organism has an ultimate end goal of giving the fetus enough to successfully
birth it, but to maintain as much as she can so she can then go on to bear more children.
So there's at every single level, there's always this conflict between being an individual
actor in a system, a cell in an organism, and part of this macro organism.
And so at every level, we see that.
What's your, if you were to do a rundown, Mara's top three most likely existential risks
to look out for over the next 100 years, what would they be starting at number three
and then working to number one, which would be the biggest risk?
Starting at number three, well some of this has been modeled so.
Of course, there's so much uncertainty. That's the difficult thing with modeling it.
It's always kind of a guess. No one's gonna hold you to it. If it happens, we're all gonna be dead. So if you get it wrong, they might have a problem, but if you get it right, it doesn't matter.
Well, the one that I might put at number three would be the unknown, unknown.
So 50 or 100 years ago, the major risks that we see today, we hadn't even conceived of.
I mean, we had no language to discuss some
of the environmental issues we're facing, as well as, you know, the idea of bioweapons
at this scale, or, you know, mass-deployed autonomous drone bots, or nanotechnology.
I mean, that type of risk was not even in our parlance.
So that risk of the unknown, unknown,
I think is quite real within the next 50 to 100 years.
And that's something that is difficult to prepare for.
And the only way I think to take it on, head on,
is to look at our mitigation methods
and to invest as much in R&D as we can.
But then beyond that, it seems like... I want to say, I actually don't know if I would
put climate change in the top three. I would say bioweapons and what's called misaligned
artificial intelligence
might be in the top three.
So I think the unknown unknowns
is a really clever answer that I didn't think of,
but that would be mine as well.
I guess,
I don't know, I'm not sufficiently familiar
with nanotechnology and how that, or a worry of that,
would be distinct from misaligned AGI,
like the grey goo concern that we have,
where does the line get drawn between that and AGI?
Does that make sense?
Does nanotechnology fall underneath?
Could it even realistically be deployed en masse
at that level of capability without
some sort of artificial general intelligence over the top?
But yeah, I think that's not a bad top three.
Should we talk about what you would do if you were in charge of the world to make the public more aware of the impending existential risk?
Yeah, it's interesting. I do think about this a lot. I'm curious. When I talk to people, it seems like there's some amount of awareness, but it's something that we don't want to
think about. And in my generation, I'm, like, older Gen Z, young millennial, there seems to be quite a
bit more. Like, as someone who was not old enough to remember 9/11, our whole
coming of age has been really dominated by this talk of existential threats.
So I do see in like the younger generations, it seems like there's more awareness, but at the same time,
it's almost like, I mean, it's such an abstract problem that's so hard to wrap our heads around and it feels so
helpless, it feels so fruitless, we feel so small.
And so one of my main focuses over the last year has shifted to how do we relate to these
risks on an individual level?
Because I have all sorts of prescriptions about ways we can mitigate each of these individually
and at a collective level.
But there's some amount of hubris in assuming
that my policy recommendations would ultimately
be the right thing to do.
And there's also the fact that I have very limited influence.
I can't actually enact any of these things,
but as a member of the human species,
I'm like deeply concerned for our future.
So how do I relate to that?
And how do I balance this need to live a fulfilling
and fully developed actualized life
with the need to safeguard our future?
So for some people, that's not even a given.
It's not even a given for many, many people
that I've come across that the future of humanity
is worth safeguarding. I place an inherent value on consciousness.
I believe it is a gift back to the universe; in a sense, knowing the universe as it is, is a really beautiful thing.
And as far as we know existence is better than non-existence or at least it's more interesting.
So it's inherently, it's like an a priori good. And at the same time, there are many people who do feel that the suffering, given that,
you know, it's more of an impetus,
can outweigh the happiness or the pleasure that we receive throughout the course of
existence.
And so is it even worth safeguarding?
And obviously, these people don't exist yet.
So it's kind of like in religious communities where they like have to keep popping out
more and more babies because inherently more life is better than less life.
That's like the ultimate end of that line of thinking.
So there's some balance where is it true that inherently more life is better than less
life?
How do we put a value on the unknown
amounts of suffering that might be
persisting in the future? So if we can create this sort of society that
seems to be on the horizon in terms of greatly diminished suffering and
greatly diminished poverty the world over and
technological advances. I mean, like, the poorest people in the world today are still in many ways considerably better off
than... well, I don't know, I don't want to say it quite like that, but
the poorest people in the world today still usually have access to things that
the richest people in the world didn't have a hundred years ago.
So there's this, there's still an asymmetry,
but the quality of life overall is so fantastic.
And it's only gonna keep increasing.
How do we get people to think about this?
Like how can we drive this home?
I'll give you my prescription after yours.
Oh, okay.
I'm so curious to hear yours.
I think it comes back to examining heuristics and examining the way that we relate to all
the information that's coming in.
So I have all these motivations for the things that I do.
And some of them are conscious, some of them are unconscious, some of them are conditioned.
Most of them haven't actually been given as much thought
as I think that they have.
Even when I believe I'm acting altruistically,
I'm usually acting altruistically in a way
that creates a positive sense of feedback for me
and that often is not the most effective way to be.
So our ideas of saintliness or goodness or morality are still really caught up in that Dunbar number
type of society where I see a person, I make them smile, I get a feedback loop, my mirror
neurons respond and I feel like a good person and now I'm contributing to the world. And that's... I think my prescription would be to completely
re-examine our idea of morality. So for instance, Bill Gates very well might
have done more good than anyone in history up to this point, just by this year. And that's
not to say that he's a particularly saintly person, but just that he has had more reach
and more influence and he's relatively strategic
about the way that he allocates funds.
So actually looking at outputs and efficacy
and responding and assigning moral value based on
real outputs and the way that they affect problems in the world
is considerably more fruitful than responding from any other place.
It's hard, though.
We talk a lot on this show about living
a consciously designed life, trying
to get rid of the genetic predispositions
and the ways that you've dealt with past trauma and the
path of least resistance and everything, like trying to deprogram all of the programming and be as
conscious as possible with your actions and your thoughts and your words. But the problem is, I think we
often don't do that; the hubristic tendency of humans is to believe just how fucking smart we are. And we're not. We're very, very, very
primitive. And if ever there was a time to see that it's in the response to the pandemic,
like if everyone had the capacity to think on a civilization-wide level,
everyone would have happily locked themselves in the house until every last drop of COVID left.
But we don't. We still got a lot of personal motivations
that are these like archaic hangovers
from a time where we needed to be tribal.
We needed to fight over mates and resources
and whatever, you know, pick whatever it might be.
I don't think that the vast majority of people, myself and you included
are anywhere near actualized enough to properly, properly know exactly what it is that we're
supposed to be doing in order to be able to do that. Your idea was much more abstract
and fun than mine. Mine is just to continue this conversation with guys like Toby Ord, with guys like Nick
Bostrom and Sam Harris, who are sufficiently charismatic, the correct intersection of
charisma and understanding, that they can, like, give them a TED talk, give them 10 TED
talks, give them like every TED talk from now until people believe that existential risk
is a big deal. And that, for me, is in 2020, it's how an idea pathogen really transmits.
It's by finding someone who has some social equity with sufficient visibility and/or reach
or clout, and then just distribute it. But at the same time, there was that clip,
I'm sure that you saw, of Bill Gates at the beginning of this year, where in like 2014,
he was like, yeah, the next big sort of risk that everyone's going to come up against
is global pandemics. And this video had been on like a documentary, really big documentary
with a production budget, and people were sharing it around going, like, why didn't
anyone know? And it's like, everyone knew; the only reason that you didn't see that
is because there wasn't sufficient reach on it.
So just getting more charismatic people talking about it
is like my solution.
But I know that, really, with individual actors,
it's the same as us talking
about how to combat climate change.
Like, you could have probably 90% of the population be concerned
about existential risk, but the top 10% are the ones that actually influence policy and
the direction of civilization. If they're not on board or can't be bothered or it doesn't
make sense to them, the entire population below them trying to enact change is not going
to make any difference.
Right. Yeah. I think what you're pointing at, there's, like, these two separate things that
need to be in place for real social upheaval or cultural upheaval. And one of them is sort
of the public substrate, which can be influenced by getting more visibility in the ways that
you're describing with these prominent figureheads.
But a great example would be like what we're seeing in the US right now with the BLM protests.
And that was precipitated by George Floyd's death. And that particular death was not the most
gruesome or the most offensive or the most anything. So why did it spark off this summer of
protest, which is still going on like in Denver right now? I mean every single night, it's still
happening months later. And there was like fire outside of the police station down here the other night.
But there's this... Micah White, the Occupy Wall Street guy, was talking about this recently.
No matter how carefully you plan, like, a rebellion or a revolution or an
insurrection or a demonstration, there has to be some sort of natural
cataclysmic event that happens right around the same time that precipitates it.
So it can kind of feel like talking about these issues, whatever your pet causes,
like you're, you know, you're just grinding the wheels and you're not getting anywhere.
And there's some amount of chance, there has to be something that really shakes everything
up. And that's just luck, is that going to happen in time?
Well, like, we're in the middle of a pandemic. Why is everyone not thinking about X-risk now? Right. Well, to some degree, they are. I've seen quite a bit of
a rise in concern. The word apocalypse I've heard, like, 3,000 times more than I ever heard before
in my life up to this point. I think people are getting more and more interested and concerned.
Of course, this is one particular issue that I worry is not representative of how some
of the bigger risks that we're facing might take hold.
But it does seem like it's a good time to be talking about this.
There's more receptivity, for sure. It's palpable in the air. People feel
like the world is on the edge of collapse. There are major natural events and
there's, you know, threats of war and major political issues going on and the
pandemic. It's kind of a confluence of a lot of risk factors.
But I do want to circle back around to that concept that there's existential risk.
And then there's global catastrophic risk, which is much more likely, which is not a situation
like with climate change, the likelihood of all the humans being killed in the near term
is not really that high, but the likelihood that the majority of the world is going to become uninhabitable, lots of people will die, and the remainder
will have
really low quality of life is much higher. And that seems to be more motivating for people. Like, we can't abstract out and conceive of the
lives of people that don't exist and might never exist. But if we can think about our children's suffering in a really unpleasant world and think about the likelihood of that, it's quite likely,
and it's quite unpleasant. And I think some people would probably prefer non-existence to
some of the types of dystopian futures that we're facing anyway.
Well, if it's just everyone Mad Maxing around wearing leather, with a lot of stuff with spikes on, like riding around in an old Ford Focus or something,
driving across the desert. Yeah, I have an interesting sort of... I sent you a video
earlier on, David Attenborough, who was trending on Twitter today, talking about how important
climate change is and how don't waste anything, don't throw away food, don't throw away packaging,
don't... because the planet is on the edge of collapse.
How correct is David's science there?
I didn't watch the whole video.
I didn't see the science that he cited.
I did see him talking about the urgency and his critical impetus to
get this out to the masses.
And it was interesting how I related to it.
I really appreciate Sir David Attenborough and his approach and he is definitely trending
with that.
And at the same time, it again brought me back to that, you know,
that question of how much impact is this really going to have? And in what way? And in a
really direct, straightforward way, I doubt it's going to have much impact. Like, the majority
of people who were already aware of their environmental impact are going to continue to be, and the
ones who aren't, whether because they are unconcerned, and unfortunately, a lot of those people have a very big impact,
or because they're just in survival mode and it's not a priority for them,
it's probably not going to shift that much.
So my question is about who he's influencing and what the intent is there.
Because the people with the real capacity
to make a change are in tech.
How would they make a change?
So on the climate side, actually,
I would say more research.
Like there are still so many unknowns
about what the biggest risks are.
And then in terms of switching to clean energy,
that's really gonna come from tech.
It's not gonna come from the individual,
the average individual consumer.
So I have curiosity around what the best way
to affect and influence that is.
It does seem at every level though
to start with the individual,
which is interesting.
It's like fundamentally, and this is why so much of my focus is on
personal kind of self-development. I mean, it's really the only thing we ultimately have
that much control over, but there's this way of relating with self-development as like an
internal growth thing, and then there's a way of just completely moving past
that idea of self and seeing ourselves
as a part of a giant macro organism,
like ants in a colony, we're like one being.
And the more we can relate to that,
the more that we can align all the choices
in our lives around that.
It's like a marriage of two critical pieces, and this is what I try to convey when I talk about it: it's alignment with the macro organism,
and then it's also like a rigorous use of reason,
which of course is not to get into hubris
and to assume that we can predict all the outcomes
of all of our actions,
but it's a commitment to really charting what are my impacts and my career
and the way that I choose to live and how do they relate to the biggest risks that we're actually
facing. So I think there are people who are in much better positions to make major change
in the world than others and the more that those people can be reached, I think, the better.
But I don't know that that sort of messaging, "now is the time to recycle more, now is the time to consume less," is really going to make the biggest impact.
It seems like getting that point of self-development in real alignment and real rigorous examination
of impact to the people who actually have a big impact in the world,
CEOs and tech leaders and researchers and people working in policy, those seem like the
really critical actions to take. How does that land?
I think so. I came up with an idea a few months ago called the Improvement Imperative,
which was that it is your duty to be everything that
you can, the reason being that you can impact the lives of the people around you, and if you
raise them up, then they raise the people that are around them up, kind of like a positive
sum effect, a positive pathogen, I suppose, that spreads, and for every one person that's
able to do that. For instance, you know, Joe Rogan: how many people's lives, how many eyes has he opened? Say what you want about some of the stuff that he comes up with. Don't like what he says about transgender athletes? Okay, mate, but he's done 4,000 hours of online programming, which has reached billions and billions of sets of ears. How many
people's eyes have been opened to a different way of viewing the world, to be more reasonable,
to be more nuanced, to be more complex, to have their self-development improved.
That is him contributing at about as high a capacity as I can think he could.
And the opportunity for everyone to do that, whether it be a single mom who's raising two children: okay, I'm going to raise these children to be as actualized and happy and independent and positive as I can, despite the inherent tribalism and laziness and desire, all of the stuff that we've carried over from a time where we really needed it but that's no longer fitness enhancing, no longer adaptive. And as the environment that we're in continues to change as quickly as it does, any adaptation that evolution did manage to lock in would be completely past its sell-by date within 15 years in any case. Pointless, adapting yourself to the,
let's say that you adapted a way to not become addicted
to a device that was in your hand
and that you wouldn't be able to have your dopaminergic system
manipulated by that.
In 20 years' time, you're not gonna have a device in your hand, it's gonna be attached to your brain. And then in 150 years' time, the transhumanism movement will be with us, and you won't even be walking around in any case. You'll just have electrodes plugged in, and you'll just be sat there, like that Bruce Willis movie where everyone was just a weird at-home fat geek playing a computer game with their model version of themselves floating around.
But one of the things that keeps on coming up here is climate change. What has made climate change such a high-priority, highly visible, existential-risk-category issue for people to be bothered about? Why is it not that everyone is thinking about the concerns with biotechnology or nanotechnology or artificial general intelligence, which, by the standards of pretty much everyone that I know, is the risk to be concerned about over the next 100 years? And Greta Thunberg isn't driving to go and see the DeepMind people and talk to them about: have you considered your alignment and control problem? Do we have a coherent extrapolated volition here so that we can actually try and wrangle this thing back? It's Greta Thunberg shouting at adults about the fact that they're wasting money on bottled water.
Right. Yeah, it is interesting. I've been so curious about that as well. We have this whole spectrum of risk assessment and response that has had varying degrees of success.
So to put them on a spectrum with climate change towards the center: on one hand, there's the response to asteroids. That has been a very successful response to an existential risk, granted the risk of being wiped out by an asteroid is fairly low. But still, we had a great response to it. We didn't know until the last 30 or 40 years that an asteroid was likely to have wiped out the dinosaurs.
That's really new knowledge. And I didn't realize it was so new, because my entire life it's been something that, you know, the government has responded to. And now it's pretty negligible; it was already fairly negligible, and now it's a microscopic risk, because we have charted everything around us and we have really great situational awareness in that regard. But that didn't become politicized. It hit at the right time.
There were a few things that happened all at the same time. We gained more knowledge about the impact of an asteroid hitting the earth. There were a bunch of movies that came out around that time because it was a hot topic. And then there was the Shoemaker-Levy comet hitting Jupiter, which created a really big impact for people. So it wasn't politicized, everybody was able to get behind it, it was funded, and, you know, this one country basically took it on and kind of solved that problem for the rest of us.
And then on the other extreme end of the spectrum, there's these concerns around exponentially advancing tech that are fairly neglected. There's not very much work being done. And then in the middle, I think there's a perfect storm that's created all the fervor over the climate change debate. On both ends of the spectrum, it hasn't been heavily politicized, and so we were either able to effectively respond or it's kind of left alone. But when something gets politicized, for whatever reason, there's this fervor over it. It becomes a tool and a weapon for people with completely unrelated aims and goals. And that seems to be what happened here.
Like, don't forget that there's people talking about climate change who don't have the first idea about what the actual stats are behind it. Now, there's not many people discussing the control problem for AGI who don't actually know what's going on. But sadly, as a problem picks up more social signaling around being associated with it, people jump on board without having done their research.
Right, right. Virtually everyone has an opinion on climate change, so it's quite natural, I think, that it's exponentially compounded into this massive debate. But I am much more concerned about the next pandemic, personally, than I am about climate change, and after seeing our response to COVID-19, even more so. And also, you know, with tensions escalating as we're coming up against our election in the US, I'm very curious what's going to happen. But there are a lot of things that are not really being addressed at a global level.
And I think what would be really useful would be to have more of a holistic global agency focused on mitigating x-risk and observing it, an independent agency really focused on that. So I work in the space sector, in astropolitics. A lot of what I observe is the way that different space agencies in the world communicate around astropolitics. Astropolitics, right, which is what it sounds like: the politics of space. It's kind of like the Wild West. There's a really small degree of well-thought-out collaboration actually going on in space law. It's a commons. How do all these different players, governments and organizations and private companies that are trying to utilize this commons, do so in a way that protects both our shared and our disparate goals and makes sure that it's not weaponized? It's kind of a shit show right now, quite honestly. I saw an article about decolonizing
Mars. Did you see this? I didn't see it. Oh, wow, this is so up your street. So, James Lindsay shared it. Basically, the concern was that colonizing Mars is an echo of the colonization of the West by Europeans and the subsequent destruction of the native peoples that were therein. We need to have a discussion about who is going to colonize Mars. And James Lindsay quote-tweeted it saying: we're having a discussion about decolonizing a planet that we haven't even colonized yet.
Right.
Oh, that's so funny.
I have heard that debate quite a bit.
It's interesting.
I'd love for you to send that to me so I can read it.
Yeah.
Yeah.
There's so much concern around our technological advancement as well as like moving out into
the universe and colonizing other planets because of these concerns that will take human
nature with us and that will just reenact these atrocities everywhere we go.
And I do actually have more faith in human nature than that.
It seems like the more that we are able to collaborate towards a shared goal... I always say the ends are the means. As we collaborate towards these sorts of massive goals that surpass us, that we can't really achieve as these individual warring factions, the more that things tend to smooth out all the way down.
I agree completely. I think as well,
the people who are being selected for that as well are going to have had
some pretty rigorous psychometric evaluations.
You're not just going to get someone who happens to become a neo-Nazi halfway to Mars.
They've been through it and they've dedicated decades of their life to this one purpose.
And for all that Hollywood might decide to dramatize the guy who goes crazy in space, I don't think there's any real examples of that. Granted, we haven't done a trip to Saturn's moons yet, but we've had people do 100-plus-day stints up there and hold it together pretty well.
The signs are encouraging. But the thing which struck me the most, chatting to you over the last few weeks in preparation for this and then also reading Toby's book, is that we are at the perfect junction between having enough power to be able to do something that could severely neuter our ability in future, and having nowhere near enough wisdom to corral that power. It's like Eric Weinstein says: we are gods, but for the wisdom.
Yes, yeah.
It's so interesting.
It brings me back to the prisoner's dilemma, really. We are these individual actors, and we'll be much more successful the more we are able to collaborate towards a whole, but we don't currently live in a world where that's entirely possible.
I struggle a lot to keep, you know, like everything going in my life.
And I have it significantly better
than the vast majority of the world.
And so it's a luxury to be able to focus on the overall well-being of the greater organism. But, oh, what was the name of the guy who came up with tit-for-tat, that way of resolving the prisoner's dilemma?
Rapoport. I know the guy that you mean.
Yeah, Rapoport, can't remember his first name. But essentially, the one algorithm that's so simple and that is the most successful at resolving the prisoner's dilemma is to start from a place of assuming the best intentions, and then to mimic whatever you received from the person that you're working with, whatever their last move was. And that's the best chance we have of being successful.
Obviously, we're in a very complex world, so it's not quite so simple.
But we are all responding.
You know, am I in a world where I need to really fight to get ahead?
Then that's what I'm going to do, and that's what I feel like I need to do to stay alive.
And so I can't consider the long-term health of the rest of society.
There's some amount of anti-social behavior that can be tolerated in a society before
it starts to collapse.
And once it gets over that line, there's massive collapse.
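Rapoport's tit-for-tat rule mentioned above (cooperate on the first move, then mirror the opponent's last move) is simple enough to simulate in a few lines. The sketch below is illustrative only; the payoff numbers are the conventional values from Axelrod's iterated prisoner's dilemma tournaments, not anything quoted in the conversation.

```python
# Minimal iterated prisoner's dilemma with tit-for-tat (Rapoport's strategy).
# Moves: "C" = cooperate, "D" = defect.
# Payoffs (my points, their points) use the conventional tournament values,
# an assumption on our part; the conversation doesn't specify numbers.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(my_history, their_history):
    # Start by assuming the best intentions, then mirror their last move.
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    # An unconditionally antisocial opponent, for contrast.
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        points_a, points_b = PAYOFFS[(move_a, move_b)]
        score_a += points_a
        score_b += points_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two tit-for-tat players cooperate throughout: 300 points each over 100 rounds.
print(play(tit_for_tat, tit_for_tat))    # (300, 300)
# Against a pure defector, tit-for-tat loses only the opening round.
print(play(tit_for_tat, always_defect))  # (99, 104)
```

Played against an unconditional defector, tit-for-tat concedes only the first round and then refuses to be exploited, which is roughly the "assume the best, then respond in kind" point being made here.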
And so that's why I think I keep coming back to,
yes, massive personal responsibility
in terms of aligning ourselves with our impact,
exercising full agency over everywhere in our lives
that we have impact,
but also really holding accountable
the people who have the power
because it's highly concentrated.
And those are the people who set the tone for everyone else and can help create this world. And I think we are moving towards it, we're moving towards it swiftly: this world where people can relax into greater possibility and begin acting in more prosocial ways. It seems to be what works.
And so we will tend to default to what works.
Toby Ord's numerical values for society's chances in the future make for pretty stark reading. He says that the chance that we go extinct, or that we permanently curtail our ability to reach our full potential as a civilization, within the next century is one in three, and within forever is one in two. So the vast majority of our x-risk is front-loaded over the next century. One in three, so only a two-in-three chance that we make it. And then a one in two, essentially a coin toss, that we do after that.
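As an aside, those two quoted figures can be chained with basic conditional probability to show just how front-loaded the risk is. This is illustrative arithmetic on the numbers as quoted in the conversation, not a calculation from Ord's book:

```python
# Back-of-the-envelope check on the figures as quoted (illustrative only).
p_survive_century = 2 / 3   # one-in-three existential risk this century
p_survive_ever = 1 / 2      # one-in-two existential risk over all time

# Surviving forever requires surviving this century first, so the chance
# of making it long-term, conditional on clearing the century, is:
p_rest_given_century = p_survive_ever / p_survive_century

print(round(p_rest_given_century, 4))      # 0.75
print(round(1 - p_rest_given_century, 4))  # 0.25: a one-in-four risk thereafter
```

On these numbers, the risk spread across all remaining centuries combined (one in four, conditional on surviving this one) is smaller than the risk in this century alone (one in three), which is what "front-loaded" means here.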
What's your opinion?
Where would you put your numbers if you were doing the same equation?
Yeah, I don't know that I would differ significantly.
It does seem to be heavily front loaded.
It does seem to be an absolutely critical juncture.
And that's one of the questions that becomes difficult when modeling x-risk, too. Just as it's impossible to account for the unknown unknowns, it's also impossible to account for the ways that we are adapting. So for instance, we're very vulnerable because of our dependence on the sun. If we are able to devise new ways of nourishing ourselves without needing agriculture, a lot of our major concerns will go down quite a bit. Life still might be very unpleasant in the case of volcanic dust and ash filling the atmosphere, or massive global warming, but we could survive it if we were more able to develop these different sorts of adaptations. So that should be more of a focus, I think, than it is, but it also skews our ability to model a bit. So it does seem like Ord is really on point as far as any modeling that I've seen.
And there's always some element of uncertainty, which is more of a stressor for me than it is soothing.
So yeah, it's interesting.
I think that's where I come back to also like
chop wood, carry water.
Ultimately, we're at a very critical juncture.
And, you know, things are in play.
We can keep having these types of conversations publicly and get as much focus on these issues as we can
and hope that the right people end up receiving
the right messages.
But fundamentally, we don't ultimately have that much control as individuals.
And so there's some amount of getting in right relation with that and accepting it that
I think needs to happen in order to face these issues.
We have to be able to, eyes wide open, accept what it means for the interiority of humanity, the interiority of consciousness as we know it, to not exist. It's huge, and that becomes really a metaphysical, almost spiritual practice. And that's kind of a whole other conversation, coming back to accepting that I am not really a self, and so when there's no self left, maybe it'll be okay. Because only when you really understand that can you start making really conscious decisions about how to relate to it.
Yeah, it's a blessing and a curse to be able to step into our own programming. I've been thinking about this a lot recently because I've been spending too much time with dogs. Although there is no such thing as spending too much time with dogs.
No such thing.
Yeah, it's impossible. But everyone's looked at a dog and thought, it would be so brilliant if I was that dog. You know, I'd just lie on the floor looking so happy all the time, and life would be simple and this and the other.
But the fact that we're the only animals that we know of in the entire universe that can step into our own morality puts such a huge burden on us. In the words of mutual friend Daniel Schmachtenberger, we're not just cargo on spaceship Earth, we're crew. It is our job to try and move the direction of the only conscious beings in the entire universe that we know of that can step into their own morality and decide to guide their future in a direction. The lions aren't making it to space. And as much as I'm terrified of cephalopods, and as clever as they are, they're not building rocket ships either, for all that Adrian Tchaikovsky in Children of Ruin might want that to be the case. He genetically modified some octopuses, and they're flying around in big orbs of water. What a wonderful book, if anyone wants to read something that'll make your brain explode.
But that's not a concern.
It's us.
It's on us.
It's just on us.
We have the Fermi paradox. Got no answer to that yet; don't really know where they are. If it's in the hands of seven billion apes who only found electricity within the last couple of hundred years, I don't know. It feels like precipice isn't enough of a violent term for Toby to use. It should be something like Planck-length knife edge.
Right, right. And it's so funny to relate to it that way too.
When you look at the way that we wield this responsibility, we think in terms of quarterly
profits and election cycles.
These utter blips in time, and then we plot our goals around the next four years or the
next three months.
And maximizing returns in those time periods really reminds me most of a toddler and the way that a toddler relates to the world, this very short-term thinking, unable to make decisions. Or a teenager. I think Ord actually makes that analogy as well, that we're kind of in our adolescence, and the way that we're wielding that power really shows it. We're like a teenager smoking a cigarette: we have no concept of what the long-term impact is going to be, because we're so focused on the here and now. Suddenly we have all this power, and we have these lofty ideals, and we're using it in these different ways, but we're just really lacking this wisdom and this systems-thinking ability.
So the question is how to quickly grow up, how to quickly up level that as a cell, not as the organism itself, because each of us
is only an individual cell.
And it's interesting and it's amusing and kind of sad.
It is melancholy, isn't it? Like, I find myself feeling very melancholic after I read stuff like Superintelligence by Nick Bostrom or The Precipice by Toby Ord.
It's weird.
It makes me feel very grounded and very down to earth in the same way that looking at
the night sky does.
Like it's kind of reassuring to know how small you are and how limited your impact can
be. But then it also gets me very agitated at seeing both my own and everyone around me's wasteful use of their consciousness. Like, you have this second and this second and this second, and how are you spending it? You're spending it thinking about how amazing it would be if that hot girl in work would come and ask you out. And I just think, oh my, is this really the best that I've got?
It's an interesting blend.
If you were to...
And...
Go on.
I was gonna say, I'm not sure if you're familiar with integral theory and Ken Wilber's work much, but the way that he relates to that is to transcend and include. So we're not attempting to move beyond our base instincts. They're a part of us, and everything that's a part of us serves a function. So we want to transcend it and include it and bring it up to that next level of consciousness, but not reject it.
So yeah, these really base sort of things that we deal with like jealousy or insecurity,
hubris, lust, they seem so pointless.
And yet they're a part of what drives us. I would bet that for a lot of people working at the forefront of some of these fields and having a really big impact, at some level, one of their motivations is they want to get laid. And that's a thing, and it motivates a lot of people to achieve really highly. Or they want money. There's usually not just one driver for why we do what we do. It's kind of about transcending and including and channeling those drives toward something that's aligned on every level.
So if we can just bring it out of the shadow, like, why am I doing this?
Why do I want this?
Is it really aligned with what's ultimately going to serve?
Yeah, I think it's better not to repress it and try to force our way into being this super enlightened being who only cares about the whole of the human race. That's likely to be less successful.
Have you seen the Futurama episode where they create perfectly realistic human sex robots?
And everyone, everything on the planet grinds to a halt.
All the scientists stop working, all the bankers stop working, all of the road cleaners,
because the subtext is, everyone is just doing everything in an effort to get laid. So Futurama, Matt Groening, had a crack at that. It's the same as every Rick and Morty episode: the painful, funny thing is the fact that it's true.
Mm-hmm, great, exactly.
Oh, that's hilarious.
Yeah, yeah, I definitely, I think if any of us are honest
about what's driving us, we'll find that there's a lot of things
we won't find because they're too unconscious.
And then there's a lot that we'll find that maybe feels silly or feels not super aligned
with our higher order selves, but they can be brought into alignment.
Well, genes are bastards. If you were able to bottle whatever a gene has and turn it into a spy, it would just be the best information agent on the planet. James Bond wouldn't have shit on your genes if you were able to manifest them into a human, because what you think you're doing and why you think you're doing it, you don't have the first idea.
The more and more I read about self-deception, the more that I realize that that's true. So I agree, I think, there's an upper bound
on the level of self-insight that we can have. And being aware of the fact that there
is an upper bound on it as well and not becoming hubristic about your ability to deal with
complexity and self-development is also important,
but that's a further stage of self-development, I suppose, the ability to understand how little you can develop.
It's a challenge.
To let everyone go away with something that they can think of or continue to rely on as a maxim or just a concept: what are the things that you wish people would be more aware of? Or what is something useful that you think people can take away from this conversation so that they are able to think with this more systems-wide mentality?
I think the most useful thing that I've done, and it might seem a bit counterintuitive
to this conversation since we're kind of focusing on these major systemic external issues.
But the main way to really understand them and have an influence that's positive is to be constantly assessing everything that goes through your head: all the information that comes in, the responses that we have to it, the way that we want to respond in the world, the way that we want to act on it, and making sure that we're really rigorously analyzing them. I think we have a huge responsibility to the rest of humanity and all those lives yet to come to be doing so,
because our impacts are so huge right now
and the time is so critical.
We don't have a choice.
If we had been born in a different time period where our impact was much more limited, we might have more of an excuse to kind of putter along and remain more in shadow and in some amount of unconsciousness.
But as difficult as it is, that work of rigorous assessment is really critical right now.
I hope that that's gonna inspire many people
to go and do that.
Consciously designed life is a term which continues to come up in what I talk about, whether it's James Clear talking about habits, or James Altucher talking about jumping the queue, or Daniel Schmachtenberger talking about sense-making, or yourself talking about existential risk: the ability to step into our own programming and consciously choose the things that we do and understand our motivations for doing them.
It seems like there's very few downsides to that
and the more that we can focus on doing that on an individual level, the more that
the improvement imperative that I discussed earlier should expand out and influence more and more people.
As a last question for you, actually: given your particular area of expertise and skill set, and the fact that you're still young at this time in our civilization, do you feel a particular level of pressure, or do you notice other people within your industry feeling a particular level of pressure, to think, shit, it is on me? You know, if something bad's going to happen within the next century, and you're going to probably be around for another sort of 70 or 80 years of that, do you feel, holy shit, there's quite a bit riding on me?
Right, well I heard two separate questions,
which have two separate answers.
One is, do I feel that way?
And one is, do some of the people that I work with feel that way?
I work in aerospace and defense.
And primarily, I don't see too many of my colleagues feeling that way, whether in academia, in the policy world, or in the startup and VC worlds in aerospace. There's a lot less focus than I would like on what our ultimate impacts are. There's still this short-term thinking; there's pressure, but it's the pressure of quarterly returns, not of the ultimate end of humanity. But then, do I feel that pressure? I do. I very much do. And it does seem like more of the younger people inhabiting this space feel more of that; that's what I'm observing, anecdotally. So I do feel that pressure, and at the same time, as I learn how to relate to it, I've become more and more at peace with it. It's the chop-wood-carry-water way of relating to x-risk. There's this pressure that's so massive, and all I can do is really transcend my self to be able to influence it. And in doing that, I self-actualize. It's so interesting: the one thing that is actually the
best predictor of physical health, mental well-being, all around just thriving as a person is your ability to live in community with
other people and be a part of something bigger than yourself. So, the ends are the means. As I take on this massive thing that I feel so much pressure about, the more I become at peace with myself and my life. And I hear that experience from others as well.
It's a Jordan Peterson-ism: the purpose of life is to find the biggest weight that you can bear and bear it. We don't get our fulfillment from rights, we get it from responsibility. You take on responsibility, you can pile on that weight. Some people have big shoulders, like you, and some people might have different shoulders, but it's the duty of all of us to try and find where that weight lies.
Well, I hope that we haven't given anyone an existential crisis talking about existential
risk for so long today.
But if there is anything, where should people go?
They want to check out some more of your stuff.
Hopefully.
They want to read a little bit more about this.
Where should they go?
Yeah.
Noonaut.org is a think tank that I founded two years ago, and I do some amount of writing there. We also do some consulting. And then at the Astropolitics Institute, I'm the executive director there, and we publish the scholarly journal Astropolitics, which focuses quite a bit on a lot of these issues from a space-based perspective.
You know what you need to do? You need to speak to the people at Apple and get them to auto-add astropolitics to the dictionary, because it's not in the dictionary. I've just written it out.
I know.
That's when you know you've made it.
I know.
When your industry has made it into the dictionary, into spell check, that's cool.
I know. That's how you know it's done.
Look, Mara, thank you so much for coming on.
I really, really enjoyed this.
It's been a conversation I've been looking forward to for a long time.
And hopefully we'll have a reason to loop back and say,
look at all of the progress that we've made.
I hope so.
Yeah, thank you so much.