Making Sense with Sam Harris - #208 — Existential Risk
Episode Date: June 23, 2020. Sam Harris speaks with Toby Ord about preserving the long-term future of humanity. They discuss moral biases with respect to distance in space and time, the psychology of effective altruism, feeling good vs doing good, possible blindspots in consequentialism, natural vs human-caused risk, asteroid impacts, nuclear war, pandemics, the potentially cosmic significance of human survival, the difference between bad things and the absence of good things, population ethics, Derek Parfit, the asymmetry between happiness and suffering, climate change, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Transcript
Welcome to the Making Sense Podcast.
This is Sam Harris.
Just a note to say that if you're hearing this, you are not currently on our subscriber
feed and will only be hearing partial episodes of the podcast.
If you'd like access to full episodes, you'll need to subscribe at samharris.org.
There you'll find our private RSS feed
to add to your favorite podcatcher
along with other subscriber-only content.
And as always,
I never want money to be the reason
why someone can't listen to the podcast.
So if you can't afford a subscription,
there's an option at SamHarris.org
to request a free account.
And we grant 100% of those requests.
No questions asked.
Okay, well, the last episode was controversial.
Episode 207 on racism and police violence.
We have since released an annotated transcript to that episode, with links to relevant videos
and articles and data.
I've seen some response, some of it quite effusive in praise, and some of it outraged,
which of course I expected.
Many people also contacted me privately to convey their gratitude and full support, all the while making
it clear that they can't take such a position publicly. And this is definitely a sign of the
times that concerns me. I'm talking about people who, in any sane society, should be able to have
the courage of their convictions. And some people thought it ironic, even hypocritical, for me to trumpet
the value of conversation in a solo podcast. But the truth is, that podcast was just my way of
starting my side of a public conversation. I'm sure I will have proper conversations on this
topic in future episodes, and I welcome recommendations about who
I should speak with. But given what I perceive to be the desperate state of public irrationality
at the moment, I wanted to say something at full length that was relatively well-formulated and
comprehensive, rather than just lurch into a conversation with someone
and just see what came of it. Anyway, I made it clear in the podcast that that wasn't the final word
on anything, apart from my sense that intellectual honesty has to be the basis for any progress we
make here. And to that end, I will keep listening and reading and having conversations.
Another thing to clarify here, there are now two formats to the podcast.
And actually, there are three types of podcasts that fall into two categories.
The first is the regular podcast, which is generally an exploration of a single topic, and that is usually with a guest,
very often based on a book he or she has written, but sometimes it's a solo effort,
like my last podcast was. And the aim in this standard format is to say something
of more than topical interest. These are podcasts that I hope if you listen to them two years from
now or even further in the future, they would still be worth listening to. And if you're seeing
these episodes online, you'll see that they have a unique photo or piece of artwork associated with
them, and they're titled in some way to reflect their theme. And the second format, which I've
piloted with Paul Bloom and Caitlin
Flanagan, but which I've also used for other guests recently, David Frum, Jonathan Haidt,
Andrew Yang, Yuval Noah Harari, this format aims to be more topical. It's not that we won't say
anything of lasting interest, but the goal is certainly to cover some events that
are in the news and to not linger too long on any one topic. And these episodes are titled
just with the date of the broadcast. So I hope that clarifies any confusion out there.
Once again, if you want to get full episodes of the podcast, you need an account at samharris.org.
And as there are no sponsors
for the show, the fact that people subscribe is what allows me to do this. So thank you all for
your support. Okay, and now for today's podcast. Today I'm speaking with Toby Ord. Toby is a
philosopher at Oxford University, working on the big picture questions
that face humanity. He has focused on the ethics of global poverty, and he is one of the young founders of the effective altruism movement. I previously had his colleague, Will MacAskill, on the podcast.
And he created the online society, Giving What We Can, which has gotten its members to pledge over $1.5 billion
to the most effective charities.
And his current research is on the risks that threaten human extinction
or the permanent collapse of civilization,
otherwise known as existential risk.
And Toby has advised the World Health Organization,
the World Bank, the World Economic Forum,
the U.S. National Intelligence Council, and the U.K. Prime Minister's Office.
And most important, Toby is the author of the new book, The Precipice: Existential Risk and the Future of Humanity.
And it is an excellent book, which we cover only in part in this conversation,
but we cover a lot. We talk about the long-term future of humanity, the moral biases that we all suffer with respect to distance in space and time, the psychology of effective altruism,
feeling good versus doing good, possible blind spots in consequentialism.
Natural versus human-caused risk.
The risk of asteroid impacts, nuclear war, pandemics.
The potentially cosmic significance of human survival.
The difference between bad things and the absence of good things.
Population ethics.
Derek Parfit.
Derek Parfit was Toby's thesis advisor.
The asymmetry between happiness and suffering.
Climate change.
And other topics.
Needless to say, this is a conversation that
stands a very good chance of being relevant
for many years to come
because our capacity to destroy ourselves is only increasing.
So, without further delay, I bring you Toby Ord.
I am here with Toby Ord. Toby, thanks for joining me.
Great to be here.
So, I'm very happy we finally got together. This has been a long time coming, and I knew I wanted to speak with you even before your book came out, but your book has provided the perfect occasion. The book is The Precipice: Existential Risk and the Future of Humanity, and it couldn't be better timed in some way,
except one of my concerns in this conversation
is that people have,
without even thinking about it in these terms, something like existential risk fatigue, given
that we're dealing with this global pandemic, which is not in and of itself an existential
risk, as we'll talk about. But I've had a bunch of podcasts on topics related to this, like nuclear war and other big picture concerns that I felt have been
sort of mistimed in the current moment. And so I delayed this conversation. I feel like people have
acclimated to, if not the new normal, a long emergency of some kind. And this now strikes
me as the perfect time to be having this conversation because,
as I'm sure we'll talk about, this really seems like a stress test and a dress rehearsal
for much bigger problems that may yet come. And so it's really an opportunity for us to learn
the right lessons from a bad but ultimately manageable situation.
And perhaps to start here, you can just introduce yourself, and I will have introduced you
properly before, but how do you describe your work as a philosopher and what you have focused
on up until this moment? And perhaps, how do you see the current context in which to think about these ideas?
Yeah, I'm a philosopher at Oxford University, where I specialize in ethics.
Although I didn't always do philosophy.
I used to be in science, specializing in computer science and artificial intelligence.
But I was really interested in questions, big picture questions, which is not that fashionable in ethics, but questions about
really what are the biggest issues facing humanity and what should we do about them?
Thinking about humanity over the really long run and really global issues. So I found that
within philosophy is a place where one can ask these kinds of questions. And I did quite a bit of work
on global poverty in the past as one of the really big pressing issues facing humanity.
And then I've moved in more recently to really be specializing in existential risk, which is the
study of risks of human extinction or other irrevocable losses of the future. For example, if there was
some kind of collapse of civilization that was so great and so deep that we could never recover,
that would be an existential catastrophe. Anything in which the entire potential of humanity
would be lost. And I'm interested in that because I'm very hopeful about the potential of humanity. I think we have potentially millions of generations ahead of us and a very bright future, but
we need to make sure we make it to that point.
Yeah, and I assume you do view the current circumstance, despite the obvious pain it's causing us and the death and suffering and economic problems that we'll endure for some time, as, on some level, almost as benign a serious pandemic as we might have experienced.
And in that sense, it really does seem like an opportunity to at least get our heads around
one form of existential risk.
Yeah, I see this as a warning shot, the type of thing that has the potential to wake us up
to some even greater risks. If we look at it in the historical perspective, it was about 100 years
ago, the 1918 flu. It looks like it was substantially worse than this.
That was an extremely bad global pandemic, which killed, we don't really know how many,
but probably a few percent, something like 3% of all the people in the world, which is
significantly in excess of where we are at the moment.
And if we go further back in the Middle Ages, the Black Death killed somewhere between about
a quarter and a half of all people in Europe and significant numbers of people in Asia
and the Middle East, which may have been about a tenth of all the people in the world.
So sometimes we hear that the current situation is unprecedented, but I think it's actually
the reverse.
Since it had been 100 years since a really major global pandemic, we'd thought that that was all in the past and we were entering an unprecedented era of health security. But actually, it's not. We were
actually still vulnerable to these things. So I think it's really the other way around.
So before we jump into existential risk, I just want to talk about
your background a little bit, because I know from your book that Derek Parfit was your thesis
advisor. And he was a philosopher whom I greatly admired. I was actually in the middle of an email exchange with him when he died. I was trying to record an interview with him, and I really consider it a major missed opportunity for me, because he was such a beautiful mind.
And then I know some of your other influences, Peter Singer, who's been on the podcast, and
Nick Bostrom, who's been on as well. You single them out as people who have influenced you in your focus, both on effective altruism and existential risk.
I guess before we jump into each specifically, they strike me as related in ways that may not be
entirely obvious. I mean, obviously they're related in the sense that in both cases,
we're talking about the well-being and survival of humanity. But with
effective altruism, we're talking about how best to help people who currently exist and to mitigate
suffering that isn't in any sense hypothetical. It's just that these are people, specifically the
poorest people on earth, who we know exist and we know are suffering the
consequences of intolerable inequality or what should be intolerable inequality in our world.
And we can do something about it. And the effective piece in effective altruism is just how to
target our resources in a way that truly helps and helps as much as possible.
But then with existential risk, we're talking rather often about people who do not yet exist
and may never exist if we don't get our act together. And we're also talking about various
risks of bad things happening, which is to say we're talking about hypothetical suffering and death
for the most part. It's interesting because these are, I mean, in some sense, very different by
those measures, but they play upon the deficiencies in our moral intuitions in similar ways. I'm not
the first person to notice that our ethics tends to degrade as a function of
physical distance and over any significant time horizon, which is to say we feel less of an
obligation to help people who are far away from us in space and in time. The truth is we even feel
less of an obligation to prepare for our own well-being when we think about our future self: we discount our concern about our own happiness and suffering fairly steeply over the time horizon. Let's talk about the basic ethics here, and feel free to bring in anything
you want to say about Parfit or any of these other influences, but how do you think about proximity in space and time
influencing our moral intuition and whether or not these things should have any moral significance?
Mm-hmm. So in terms of physical distance, Peter Singer was a big influence on me when it comes to that. He has this brilliant paper,
Famine, Affluence, and Morality, where he asked this question about, you know, if you're walking
on the way to work and you pass a child drowning in a pond, and in order to go in and help them, to save them, you would have to ruin your shoes or your suit or something like that, which has significant value. Say you're going to give a fancy lecture. And most of us,
without really much hesitation, would go in and do this. And in fact, we might think it's wrong
for someone if they just looked at their suit and their shoes and then kind of thought, oh,
actually, no, I'm not going to do that, and walked on by. And he made this analogy to what about people in distant countries? There's some question about exactly how much it costs to
save a life in poor countries. And it may actually cost more than a fairly nice suit, maybe about
$1,000 US. But he kind of asked this question about what's really different in those cases.
And could the physical distance really matter? Could the fact that they're a stranger matter? He came up with a whole lot
of ways of thinking about these differences and showing that none of them really could matter.
So yeah, he's really helped challenge a lot of people, including me, about that.
Now, effective altruism is more general than just thinking about global poverty. It could apply to existential risk
as well. And in fact, many effective altruists do think in those terms. But it's about this idea of
really trying in our lives to be aware of how much good we could do with our activities, such as
donations or through our careers, and really trying to think seriously about the scale of it.
So I got really interested in this when I looked at a study called Disease Control Priorities
in Developing Countries 2, catchy name, DCP2.
And it had this table in it where they'd looked at over 100 different ways of helping people
in poor countries with their health.
And if you looked at the amount that you could help in terms of health, like in terms of healthy life years for a given amount of money, say $1,000, there was this really striking difference
where the best interventions were about 10,000 times more effective than the least good ones.
And in fact, they're about 100 times better than the middle intervention. It was a log-normal distribution. This was something where I did a bit
of technical work on this and found a whole lot of interesting stats like that. It obeyed almost
exactly the 80-20 rule, where if you funded all of these ways of helping people in poor countries,
80% of the impact would happen from the 20% most effective interventions. And also,
if you had a choice between two interventions at random, on average, the more effective one
would be 100 times as effective as the less effective one. So this is something where it
really woke me up to this fact that where you give can
be actually even more important than whether you give.
So if you're giving to something that, say, for a certain amount of money is enough to
save a life, there may well be somewhere you could give that would save 100 lives.
And that choice, how you make it, 99 people's lives depend upon you making that right.
Whereas the difference between you giving to the middle charity or nothing is only one person's life. So maybe it could be even more important where you give than whether you give, in some sense, although obviously they're both important. And it was really thinking about that that made me realize this.
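A toy way to see those numbers (this is an illustrative sketch, not the actual DCP2 data, and the spread parameter is an assumption): if cost-effectiveness across interventions is roughly log-normal, as Toby describes, both the 80-20 concentration and the roughly 100x average gap between two randomly chosen interventions fall out of the same distribution.

```python
import numpy as np

# Toy model: cost-effectiveness drawn from a log-normal distribution.
# sigma = 2.0 is an assumed spread chosen so the outputs land near the
# figures quoted in the conversation; it is not fitted to DCP2.
rng = np.random.default_rng(0)
effectiveness = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)

# Share of total impact from the most effective 20% of interventions,
# assuming each intervention receives equal funding (comes out near 90%).
top_fifth = np.sort(effectiveness)[-len(effectiveness) // 5:]
print(f"Top 20% deliver {top_fifth.sum() / effectiveness.sum():.0%} of the impact")

# Mean ratio between the better and worse of two random interventions.
# The distribution is heavy-tailed, so the mean far exceeds the median;
# with this sigma it comes out on the order of 100x.
a, b = rng.choice(effectiveness, size=(2, 100_000))
ratios = np.maximum(a, b) / np.minimum(a, b)
print(f"Average better/worse ratio for a random pair: about {ratios.mean():.0f}x")
```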
Within moral philosophy, utilitarianism or consequentialism is a kind of
family of views that take doing good really seriously. They're not just focused on not doing
things that are wrong, but also on how much can you help. But it made me realize that the people
who support other ethical views, they should still be interested in doing much more good with the
resources that they're devoting to
helping others. And so I set up an organization called Giving What We Can, trying to encourage
people to give more effectively and to give more as well. So it was based around a pledge to give
at least 10% of your income to the most effective places that we know of, initially around global
poverty and global health, although
we've broadened that out to include anything.
For example, it could be animal charities or any way of helping others as much as you
can.
And in fact, we've now got more than 4,000 people who have made that pledge.
They've given more than $100 million to the most effective charities they know of and
have pledged more than a billion dollars.
So it's actually a pretty big thing in terms of the number of people who've embraced this message
and are really trying to make their charitable giving count.
Yeah, well, your colleague, Will MacAskill, who put us together, was on the podcast a while back.
And that conversation was very influential on my thinking here, because one thing you
both have done in your thinking about effective altruism is you have uncoupled sentimentality
from a more hard-headed concern about just what actually works and what saves the most
lives.
So much of philanthropy in its messaging and its tacit
assumptions and in the experience of people giving or deciding whether or not to give
is predicated on the importance of feeling good about giving and finding psychological reward
there. And I'm convinced that's still important, and I think we should figure out
ways to amplify that. But at the end of the day, we need to correct for our failures to
be maximally rewarded by the most important contributions we can make. This is just a
kind of a domain-wide human failing, that the worst things that can happen are not the
things we find most appalling, and the best things we can do are not the things we find most rewarding.
And surveying this landscape of moral error, we need to find ways to correct for the reliable
failures of our intuitions. And so in talking to Will, it occurred to me that one way to do this
is just to automate it. I've now spoken about this several times on the podcast, but it was
such an instructive example for me because at the time, Will was saying that the most effective or
certainly one of the most effective ways of mitigating human death was to give money to
the Against Malaria Foundation.
At the time, that was number one, I think, on the GiveWell site.
It might still be.
And I recognized, in myself, that that was a cause which struck me as deeply unsexy, right? It's not that I don't care about it. I do care about it when you give me the details,
but, you know, buying insecticide-treated bed nets and giving them out, it's neither the problem nor the intervention that really tugs at my heartstrings. And it's just obvious that shouldn't
be the priority if, in fact, this is the way to save a life at the lowest dollar cost.
And so, yeah, so I just decided to automate my giving to that one charity,
knowing that it was vulnerable to my waking up in a month
not being able to care much about malaria.
And so that's the kind of thing that you and Will
and the movement you guys have inspired
has made really salient and actionable for people.
That alone is a huge contribution.
And so thank you for doing that work.
Oh, no problem.
That's exactly why we did it.
I should say the question of how much you give is another thing to try to automate, as you put it. When I was a grad student, I was aware of these numbers and of how much further my money could go abroad. Basically, I could do around about a thousand or ten thousand times as much good with my money by giving it to the most effective places abroad than I could by spending it on myself. I worked this out. That meant that
I became very pained when I was at the supermarket trying to work out whether to
buy the absolute cheapest cereal or the second cheapest cereal. That's not really a good pathway
to go down because
you're not that productive if you're spending all your time stressing about that. So I took an
approach instead of working out how much to give and committing to give a large amount of my money
over the rest of my life, and then just living within my reduced means. And then you just
basically just pretend that your salary is a bit lower. Maybe pretend that you took a job in the charitable sector or something with a smaller
salary in order to do more good.
Or pretend that you're being taxed a bit more because it would be good if some of our money
was taken to help people who are much less fortunate than ourselves.
And then just live within that reduced means.
Or you could pretend that you're working one day a week or one day out of every 10
for the benefit of others.
Yeah.
That's another way to think about it.
Yeah.
The pledge it's based around is to give at least 10% of your income to where it can help
the most or where you think it can help the most.
We're not too prescriptive about that.
But ultimately, I've given a bit over a quarter of everything I've earned so far.
But the way I think about it is to think about actually what Peter Singer suggested, which
is to set an amount of spending money on yourself and then to give everything above it.
And I set that to an amount which is about equal
actually to the median income in the UK at the time. And a lot of journalists would say,
well, how on earth could you live on less than 18,000 pounds per year? And yeah, it's kind of
weird. I was trying to point out that actually half of the population of the UK do that. So
people can lose touch a bit with these things. And that makes it clear that it
is doable if you think about it in those terms. But it is useful to use techniques like these
to make it easier. So you're not using all your willpower to keep giving. Instead, you make a
lasting commitment. That's the point of making a long-term commitment on this, is to tie yourself
to the mast and make it a bit less onerous to be
re-evaluating this all the time. And we found that that worked quite well. Initially, people said,
well, no one's going to do this. No one's going to make this commitment. Forgetting, of course,
that there have been traditions of giving 10% of your income for a long time. But it's something
where we found actually that there are a lot of people who would. And as I said,
more than $100 million have been given and more than a billion dollars pledged,
because it really adds up. And it's one of these things where if someone kind of shakes a can at
you on the street corner, it's not worth spending a lot of your time trying to work out whether to
give and also whether this is the best cause you could be giving to, because there's such a small
amount at stake.
But if you're making a choice to give something like a 10th of your income over the rest of your life, that's something like more than $100,000. And it's really worth quite a few evenings of
reflection about where to give it and whether you're going to do it and make such a commitment.
But if you do, there's a lot at stake. So we found that thinking in these bigger chunks, really zooming out on your charitable giving over
your whole life and setting yourself in a certain direction on that, really made it
worthwhile to do it right. Yeah. And one of the ways you cut through sentimentality here is around
the question of what people should be doing with their time if they
want to benefit the greatest number of people. And it's not that everyone should be rushing into
the charity sector and working directly for a cause they find valuable. You argue that
if you have a talent to make immense wealth some other way, well, then that is almost
certainly the better use of your time. And then you just give more of those resources to the
charities that you want to support. Yeah. So my colleague, Will MacAskill,
really, I mean, we'd talked about this right from the start, but he really took that a step further
when he set up this organization, 80,000 Hours with Ben Todd. And they were going deep on this and really thinking,
okay, we've got a theory for what to do with your charitable giving. How can you make that
more effective and really actually help more recipients or help those recipients by a larger
amount? And 80,000 Hours was about this huge amount of time over your whole career
and really trying to spend, you know, if you're going to spend 80,000 hours doing your job,
it kind of makes it obvious that it could be worth spending, you know, 100 hours or more
thinking seriously about where you're going to devote that time. And one of the things they
considered was this idea of earning to give, of taking a deliberately high-paid job so that you could donate a lot more. And in some cases you could do a lot of good with that, particularly if you're someone who's well suited to such a job and also kind of emotionally resilient. Someone who wasn't really wouldn't last if they went into finance or something, where everyone else, all of their friends, were always off at the golf course, and this person was scrimping and saving and couldn't socialize with any of their colleagues, and saw them living in excess. It could be pretty difficult. But if you're someone who can deal with that or
can take a pretty sensible approach, maybe give half of what you earn in finance and still live a very good life by any normal standards. And some people have taken that up, but that wasn't
the only message. We're also really interested in jobs in areas where you could do a lot of good.
For example, working on a charitable foundation in order to help direct their endowment to the
most effective things to help others. Also, we were very interested in
a few different areas. There were kind of a few clusters of work, which were on global health and
global poverty. That cluster was really to do with the fact that the poorest people in the world
live on about a hundredth of the median US wage. And it means, therefore, because there's diminishing returns on our income,
that our money can do roughly 100 times more good to help those people than it can here.
And if we do leveraged things, such as funding the very most important healthcare that they can't buy themselves,
then we can get even maybe a thousand times more effectiveness for people abroad than we can for ourselves. So that's one way to do good.
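As a minimal sketch of that diminishing-returns logic, here is the standard log-utility version (the utility form and the dollar figures are assumptions for illustration; Toby doesn't specify them here): the marginal benefit of a dollar scales as one over income, so a 100x income gap translates directly into a 100x gap in how much good the marginal dollar does.

```python
# A sketch assuming logarithmic utility of income, u(c) = ln(c):
# the marginal benefit of an extra dollar is then u'(c) = 1/c, so it
# scales inversely with income. Dollar figures are illustrative only.
def marginal_benefit(income: float) -> float:
    return 1.0 / income

median_us_income = 40_000.0               # illustrative figure
poorest_income = median_us_income / 100   # "about a hundredth" of it

print(marginal_benefit(poorest_income) / marginal_benefit(median_us_income))
# -> 100.0: the same dollar does about 100x more good at the lower income
```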
Another way that there's a cluster around is animal welfare, noting that there's a market failure there, where animals don't have a natural
constituency, they can't vote. It wouldn't be surprising if there
were massive amounts of pain and suffering which were being neglected by the general
capitalist system that we're in. And indeed, when we look at it, you know that there are.
So that was another approach, although you have to go out on a limb a little bit about
how on earth you would understand animal welfare compared
to human welfare in order to think about that. But you can see why it could be a really neglected
area. And then there's a kind of branch of people really interested in the long-term future of
humanity and noting that only a tiny fraction of all the people who have ever lived are alive at
the moment. And it's probably an even tinier fraction when you consider all the people who will ever live after us. That, you know, this is just
one century. We've had 2,000 centuries of humanity so far. We could have thousands of centuries more
after us. If there are ways that we can do something now to have a lasting impact over
that whole time, then perhaps that's another location where we can do really outsized amounts
of good with our lives. So we've often been thinking about those three different areas.
Are there trade-offs here with respect to the feeling good versus being effective calculus?
Because if you take a strictly consequentialist framing of this,
well, then it seems like, well, you should just cut through the feeling
or the perceived reward and salience of various ways of helping
and just help the most people.
But the situation does strike me somewhat as morally analogous
to the failure of consequentialism to parse why it makes sense for us to have a preferential love for our family and, in particular, our kids.
You're going to shower more attention and resources and love and concern on your child than you could on two strangers. And obviously, the equation gets even more unbalanced if you talk about 100
strangers. And that has traditionally struck many people as just a failure of consequentialism.
Either we're not really consequentialists, or we can't be, or we shouldn't be. But I've always seen that as just a, on some level, a failure to get as fine-grained as
we might about the consequences.
I mean, if you just think it through, there's a consequence to having a society, or being this sort of social primate, who, when faced with a choice to help their
child or two strangers, would just automatically default to what seems to be the consequentialist
arithmetic of, oh, of course, I'm going to care more about two strangers than my own child.
What do we mean by love and the norm of being a good parent if that is actually
the emotional response that we think is normative? And so it's always struck me that there could be
something optimal, and it may only be one possible optimum, but at least it's a possible one, to have
everyone more focused on the people who are near and dear to them and kind of reach some collective
equilibrium together where the human emotion of love is conserved in that preferential way.
And yet, in extreme cases, or even just at the level of which we decide on the uses of public
funds and rules of fairness and justice that govern society, we recognize
that those need to be impartial, which is to say, when I go into a hospital with my
injured daughter, I don't expect the hospital to give us preferential treatment just because
she's my daughter.
And in fact, I would not want a hospital that could be fully corrupted by just answering
to the person who shouted the loudest or
gave the biggest tip at the door or whatever it was. I can argue for the norm of fairness in a
society even where I love my daughter more than I love someone else's daughter. It's a long way of
saying that. That seems to me to be somewhat analogous, or at least potentially so, to this
condition of looking to do good in the world and noticing
that there are causes, the helping of which gives a much stronger feeling of compassion and
solidarity and keeps people more engaged. And I think we do want to leverage that, obviously not
at the expense of being ineffective, but I'm just wondering if there's anything to navigate here,
or if you just think it really is straightforward: we just have to strip off any notion of romanticism and reward around helping and just run the numbers and figure out exactly
how to prioritize our resources. I guess I would say there are three levels at which to think about this. So one approach would be to
say, yeah, just look at the raw numbers, let's say from some study on how much different ways of
spending our money could help people, and then just go with what that says. A second approach
would be trying to be a bit more sophisticated, to note that there might be a whole lot of people
who don't have enough feedback in their lives about the giving and the effect it's having on people, such that if they were to try to do the first one, they couldn't really sustain it. Which could be a really big deal, because I'm
hoping that people can make a commitment and keep it to give, you know, for the next 30 years.
And if they get burnt out after a couple of years and stop, you've lost almost all the value that they could have produced,
especially as they're probably going to earn more money later in their life and be able to give
even more. It could be that you lose 99% of the benefit if they give up after the first couple
of years. So you at least want to go this one step further and have some idea or some sensitivity to the idea that if it's more appealing or it can,
you know, it could be more sustained than that matters. And I'm thinking in that sense,
quite instrumentally in that, that it's just trying to take account of the, the, the fallibility of,
of the humans who are the givers. It's not about flattering them or kind of like stroking their
ego or something like that, but that's the way I think of it. A lot of people, when they think about giving in particular, have a focus that's very much on the giver. I think of it as a giver-centric or donor-centric
kind of understanding of it. For example, norms against being public about your giving,
I think are very donor-centric. They're about,
well, that would be gauche to be public about it. But from my perspective, I'm very focused on the recipients. And it seems to me that all of this focus on the donor is misplaced.
If the recipients would benefit more if the donors were public about it, such that they
help to encourage their friends to be giving, for example, by talking about some of these causes,
ideally in a non-annoying way, then that could be good for the recipients.
And similarly, if there are aspects where, you know, maybe if the donor somehow could follow
through on a very difficult, dry program of giving, they would be able to give more. If in fact many
donors fail to achieve that, or they get burnt out, then that's bad for the recipients. So this
approach is still kind of recipient focused. Or you could go a step further than that and build
it into the structure of what it means to be good at giving, and to say that, fundamentally, it matters more to give to people in your community, or to people who are close to you, or something like that. I wouldn't want to go that extra step, although I understand that that is where the kind of intuitive position
perhaps is. And you do run into troubles if you try to stop at step two. You run into some of
these challenges you're mentioning about how do you justify treating your children better than
other people. So I don't think that this is all resolved. But I also want to say that
the idea of effective altruism, yeah, really is to be broader than just a consequentialist or
utilitarian approach. The people who are non-consequentialists often believe that there
are side constraints on action. So there are things that we shouldn't do, even if they promote
the good, because it would be wrong or be treating people wrongly in order
to do them. For example, that you shouldn't kill someone in order to save 10 people.
But since none of the ways we're talking about of giving or of the careers that we're
recommending people take, none of them involve really breaking such side constraints,
it seems like we should all still be interested in doing more good in that case. As philosophers,
we often focus on the interesting conflicts between the different moral theories, but this is a case where I think the
moral theories tend to run together. And so that's our focus, going beyond the kind of
just what would utilitarianism say or something like that.
Okay. Well, let's talk about the greatest downside protection we might find for ourselves and talk
about existential risk, which again,
is the topic of your new book, The Precipice, which is really a wonderful read. And it's great
to have the complete picture pulled together between two covers. So I highly recommend that.
We won't exhaust all of what you say there, but I'll flag some of what we're skipping past here. So you break the risks we face into the natural and the anthropogenic, which is to say human
caused. And it might be surprising for people to learn just how you weight these respective
sources of risk. To give some perspective, let's talk about just how you think
about the ways in which the natural world might destroy us, you know, all on its own, and the ways
in which we might destroy ourselves, and how you estimate the probability of one or the other
sources of risk being decisive for us in the next century. Sure. I think often when we think about
existential risks, we think about things like asteroid impacts. I think this is often the first
thing that comes to mind because it's what we think destroyed the dinosaurs 65 million years ago.
But note that that was 65 million years ago. So an event of that size seems to be something like a one in every 65 million years kind of event. It doesn't sound
like a once a century event, or you'd have trouble explaining why it hasn't happened many, many more
times. And I think people will be surprised to find out how recent it was that we really understood
asteroids, especially people of my generation. It was in 1960 that we conclusively
discovered that meteor craters are caused by asteroids.
People thought that maybe they were caused by some kind of geological phenomenon, like
volcanism.
That's amazing. And then it was 20 years after that, 1980, where evidence was
discovered that the dinosaurs had been destroyed in this KT extinction event by an asteroid,
about 10 kilometers across. So that's 1980, that's 40 years ago. And then things moved very
quickly from that. In particular, it was around
about the same time as Carl Sagan and others were investigating models for nuclear winter. And they
realized that asteroids could have a similar effect, where dust from the asteroid collision
would darken the sky and could, in that way, cause a mass extinction due to stopping the plants growing.
So this is very recent, and people really leapt into action, and astronomers started scanning the skies, and they've now tracked what they think is 95% of all asteroids one kilometer or more
across. And a one kilometer asteroid is a tenth the size of the one that killed the dinosaurs,
but it only has one thousandth of the energy and a thousandth of the mass.
So we could very likely survive that.
And they've found 95% of those greater than one kilometer across, including almost all of the ones which are really quite big, such as five kilometers across or 10 kilometers.
And so now the chance of a one
kilometer or more asteroid hitting us in the next century is about one in 120,000. That's a kind of
scientific probability from the astronomers, but it also wouldn't necessarily wipe us out,
even if it did hit us. And that's a probability that is really very uncertain. But overall,
I would guess that it's about a one in a million chance that an asteroid destroys us in the next hundred years. And other things that have been talked
about as extinction possibilities, when you look at the probabilities, they're extremely low.
So an example is a supernova from a nearby star. It would have to be quite a close star within about
30 light years. And it's extremely unlikely that this
will happen during the lifespan of the Earth. And it's exceptionally unlikely it would happen in the
next 100 years. I put the chance of existential catastrophe due to that at about one in a billion
over the next 100 years. And these are quite rough numbers, but trying to give an order of magnitude
idea to the reader. And ultimately, when it comes to all of these natural
risks, you might be worried that, with supernovas and gamma ray bursts and supervolcanoes and asteroids and comets, it's actually very recent that we've discovered how these things work and
that we've really realized, with a proper scientific basis, that they could be threats to us. So there are
probably more natural risks that we don't even know about that we're yet to discover. So how do you think about that? But there's this very comforting
argument from the fossil record when you reflect upon this fact that Homo sapiens has been around
for 200,000 years, which is 2,000 centuries. And so if the chance of us being destroyed by
natural risks, in fact,
all natural risks put together, was as high as, say, 1 in 100, we almost certainly wouldn't have
made it this far. So using that kind of idea, you can actually bound the risk and show very
confidently that it's lower than about 1 in 200 per century, and most probably below about 1 in 2,000 per century.
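To see why 2,000 centuries of survival bounds the risk like that, here's a back-of-envelope version of the reasoning (my sketch, not Toby's exact calculation): if natural extinction risk were a constant r per century, the chance of surviving 2,000 of them is (1 - r) raised to the 2,000th power, which collapses for any r as large as 1 in 100.

```python
# Fossil-record bound, sketched: the probability of Homo sapiens
# surviving 2,000 centuries if natural extinction risk were a constant
# r per century. A risk near 1-in-100 makes our track record absurdly
# improbable, while 1-in-2,000 or lower is consistent with it.
for denominator in (100, 200, 2_000, 10_000):
    r = 1 / denominator
    survival = (1 - r) ** 2_000
    print(f"risk 1 in {denominator:>6}: P(surviving 200,000 years) = {survival:.2g}")
```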
You also take it a little further than that by reasoning by analogy to other hominids
and other mammals that would have died in similar extinction events as well.
Yeah, that's right.
I give quite a number of different ways of looking at that in order to avoid any potential
statistical biases that could
come up. In general, it's very difficult to estimate the chance of something happening that would have prevented the very observation that you're making now. There are certain kinds of
statistical biases that come up to do with anthropic effects. But you can avoid all of that,
or most of it, by looking at related species. And you get a very similar result. They tend to last around about a million years before going extinct. And so since Homo sapiens is a species that is
much more widely spread across the surface of the earth and much less dependent upon a particular
species for food, we're very robust in a lot of ways. So that's before you even get to the fact that we can use
our intelligence to adapt to the threat and so forth. It's very hard to see that the chance of
extinction from natural events could be more than something like one in 10,000 per century,
is where I put it. But unfortunately, the same can't be said for the anthropogenic risks.
Yeah, and so let's jump to those.
You put the likelihood that we might destroy ourselves in the next century by making some colossal error
or just being victims of our own malevolence
at 1 in 6 rather than 1 in 10,000, which is a pretty big disparity.
One thing that's interesting, especially in the present context of pandemic, you put pandemic risk
mostly on the anthropogenic side. Maybe we should talk about that for a second. What are the
anthropogenic risks you're most concerned about and why is it that
you're thinking of pandemic largely in the terms of what we do or don't do?
Yeah. Well, let's start with the one that started it all off, nuclear war, just briefly.
I think it was in 1945, the development of the atomic bomb, that humanity really entered this
new era, which I call the precipice, giving the book its name.
Explain that analogy. What's interesting here is that anthropogenic existential risk is really just the shadow side of human progress. It exists only by virtue of our progress, technological progress largely, although not entirely. I mean, just the fact
that we have crowded together in cities and that we can jump on airplanes and fly all
over the world and that we have cultures that value that. And you take the good side of globalization and culture sharing and cosmopolitanism and economic
integration, and that is perfectly designed, it would seem, to spread a novel virus around
the world in about 15 hours.
And all of the things that we've been doing right have set us up to destroy ourselves in a way that we absolutely
couldn't have done even a hundred years ago. And so it's a paradox that casts a shadow of
sorts on the work of my friend Steve Pinker, who, as you probably know, has been writing these
immense and immensely hopeful books about human progress of
late, saying that things are just getting better and better and better, and we should acknowledge
that. We should only have the decency to acknowledge that. But he's been criticized
rather often for things he hasn't said. He's not saying that there's a law of history that
ensures things are going to get better and better. He's not saying we can't
screw these things up. But because of his emphasis on progress, at the very least, he can be convicted
of occasionally sounding tone deaf on just how the risk that we will destroy everything seems also
to be increasing. I mean, just the power of our technology, the fact that we're talking about
a time where high school kids can be manipulating viruses based on technology they could have in
their bedrooms. We're democratizing a rather Faustian relationship to knowledge and power.
And it's easy to see how this could go terribly wrong
and wrong in ways that, again,
could never have been accomplished a few generations ago.
So give us the analogy of the precipice to frame this.
Yeah.
If we really zoom out and try to look at all of human history
and to see the biggest themes that
unfolded across this time, then I think two of them stand out. One is this theme of progress in our
well-being that Steven Pinker mentions. And I think particularly in that case, over the last
200 years since the Industrial Revolution. It's less clear over longer spans, you know, whether the second 100,000 years of Homo sapiens was better than the first 100,000, or something. I'm not sure.
But in the last 200 years, we've certainly seen very marked progress. And I think one of the
challenges in talking about that is that we should note that while things have got a lot better, they could
still be a lot better again. And we have much further to go. There are many more injustices
and suffering remaining in the world. So we certainly want to acknowledge that while at
the same time we acknowledge how much better it's got. And we also want to acknowledge both that
there are still very bad things and that we could go much further.
But the other major theme, I think, is this theme of increasing power. And that one, I think,
has really gone through the whole of human history. And this is something where there
have been about 10,000 generations of Homo sapiens. And it's only through a kind of massive intergenerational cooperation that we've
been able to build this world we see around us. So from where I sit at the moment, I can see
zero things, well, actually, except my own body, that were in the ancestral environment.
It's something where we tend to think of this as very recent, but we forget that clothing, for example, is a technology, a massively useful one that enabled us to inhabit huge regions of the world which would otherwise be uninhabitable by us.
You could think of it almost like spacesuits or something like that for the earth.
Massive improvements like this.
We developed so many things before we developed writing, which came only about 5,000 years ago. So for this time, like 97% of human history, we don't have any record of it.
But that doesn't mean that there weren't these great developments happening, and these sequence
of innovations that have really built up everything. When I think about that and how we stand on the shoulders
of 10,000 generations of people before us, it really is humbling. And all the innovations that
they passed on in this unbroken chain. And one of the aspects of this is this increasing power
over the world around us, which really accelerated with the scientific revolution, where we discovered
these systematic ways to create knowledge and to
use it to change the world around us, and the industrial revolution, where we worked out how
to harness the huge energy reserves of fossil fuels and to automate a lot of labor using this.
Particularly with those accelerations, there's been this massive increase in the power of humanity to
change the world, often exponential on many different measures. And it was in the 20th century, and I think particularly with the
development of the atomic bomb, that we first entered this new era where our power is so great
that we have the potential to destroy ourselves. And in contrast, the wisdom of humanity has grown only
falteringly, if at all, over this time. I think it's been growing. And by wisdom, I mean both
wisdom in individuals, but also ways of governing societies, which for all their problems are better
now than they were 500 years ago. So there has been
improvement in that. And there has been improvement in international relations compared to where we
were, say, in the 20th century. But it's a slow progress. And so it leaves us in the situation
where we have the power to destroy ourselves without the wisdom to ensure that we don't,
and where the risks that we impose upon ourselves
are many, many times higher than this background rate of natural risks. And in fact, if I'm roughly
right about the size of these risks, where I said one in six, a die roll, then we can't survive many more centuries with risk like that, especially as I think we should expect this power to continue to increase if we don't do anything about it, and the chances of failing irrevocably to continue to go up. And because our whole bankroll is at stake, if we fail once on
this level, then that's it. So that would mean that this time period where these risks are so elevated can't last all that long.
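Just to spell out the arithmetic behind "can't survive many more centuries" (a quick illustration that treats Toby's one-in-six figure as a constant per-century rate, which is a simplifying assumption): survival compounds like repeated die rolls.

```python
# Repeated die rolls: the chance of surviving n centuries if each one
# independently carried a 1-in-6 existential risk. (A simplification:
# Toby's estimate is for this century, not a fixed ongoing rate.)
for n in (1, 2, 5, 10, 20):
    print(f"{n:>2} centuries: {(5 / 6) ** n:6.1%} chance of survival")
```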
Either we get our act together, which is what I hope will happen, and we acknowledge these risks, and we bring them down, we fight the fires of today, and we put in place the systems to ensure that the risks never get so high again.
Either we succeed like that, or we fail forever.
Either way, I think this is going to be a short period of something like a couple of centuries,
or maybe five centuries. You could think of it as analogous to a period like the Renaissance or
the Enlightenment or something like that, but a time where there's a really cosmic significance ultimately, where if humanity
does survive it and we live for hundreds of thousands more years, we'll look back and this will be what this time is known for, this period of heightened risk. And it also will
be one of the most famous times in the whole of human history. And I say in the book that school children will study
it and it'll be given a name. And I think we need a name now. And that's why I have been calling it
the precipice. And the analogy there is to think of humanity being on this really long journey
over these 2000 centuries, a journey through the wilderness, occasional times of hardship and also times of
sudden progress and heady views. In the middle of the 20th century, we found ourselves coming
through a high mountain pass and realizing that we'd got ourselves into this very dangerous
predicament. The only way onwards was this narrow ledge along the edge of a cliff with a steep precipice at the side.
And we're kind of, you know, inching our way along and we've got to get through this time.
And if we can, then maybe we can reach much safer and more prosperous times ahead.
So that's how I see this. Yeah, there's a great opening
illustration in your book that looks like the style of an old woodcut of that
precipice, which, yeah, that's an intuition that many people share just based on extrapolating the
pace of technological change. When you're talking about suddenly being in a world where anyone can potentially order DNA in the mail,
along with the tools to combine novel sequences or just recapitulate the recipe for smallpox or
anything else that is available, even 500 years seems like an order of magnitude longer than the period here that we
just crucially have to navigate without a major misstep. It just seems like the capacity for
one person or very few people to screw things up for everyone is just doubling and doubling and doubling again within not just
the lifetime of people, but within even the span of a decade. So yeah, and it's given cosmic
significance, as you point out, because if you accept the possibility, even likelihood,
that we are alone in the universe, I don't know how... Honestly, I don't have strong intuitions about that.
I mean, both the prospect of us being alone
and the prospect that the universe is teeming with intelligent life
that we haven't discovered yet,
both of those things seem just unutterably strange.
I don't know which is stranger,
but it's a bizarre scenario
where either of the possibilities
on offer seem somehow uncanny.
But if it is the former case, if we're alone, then yes, what we do in a few short years
matters enormously if anything in this universe matters.
Indeed.
Ultimately, when thinking about this, I see a handful of different reasons to really
think it's extraordinarily important what we do about this moment. To some extent,
it's just obvious, but I think it can be useful to see that you could understand it in terms of
the badness of the deaths at the time. If it meant that, in a catastrophe,
7 billion people were killed, that would be absolutely terrible.
But it could be even much worse than that. And you might think, why does it need to be worse
than that? Surely that's absolutely terrible already. But the reason that it can matter
is because we're not saying that there's a 50% chance of a particular event that will
destroy us.
The chances for some things could be lower.
For example, I just mentioned the chance of an asteroid or comet impact is substantially
lower, but still important, still really important because if it did happen, it wouldn't just
be a catastrophe for our generation, but it would wipe out this entire future that humanity could
have had, where I think that there's every reason to think that barring such a catastrophe,
humanity could live surely at least a million years, which is the typical lifespan of a species.
But I don't see much reason to think that we couldn't live out the entire habitable span of the Earth's life,
which is about 500 million or a billion years, or even substantially beyond that if we leave the
Earth. The main challenges to things like space travel are in developing the technologies and in
harnessing enough energy. But ultimately, if we've already survived a million years,
that's not going to be such an issue. We will have 10,000 more centuries to develop our science and
our technologies and to harness the energies. So ultimately, I think the future could be very long
and very vast. So for me, the most motivating one is everything we could lose. And that could be understood in, say, utilitarian terms as the well-being of all the lives that
we would lose.
But it could also be understood in all these other forms.
And Derek Parfit talks about this very famously near the end of his magnum opus, Reasons and
Persons, where he says that also, if you care about the excellences of humanity, if that's what
moves you, then since most of our future is ahead of us, there's every reason to expect that our
greatest artworks and our most just societies and our most profound discoveries lie ahead of us as
well. So whatever it is that you care about, there's reason to think that most of it lies in the future. But then there's also, you could think about the past. You could think about the fact that human society is necessarily this intergenerational partnership, as Burke put it.
All the generations before us have built up this world over 10,000 generations and have entrusted it to us, so that we can make our own innovations and improvements and pass it down to our children.
And that if we fail, we would be the worst of all these generations and we would be betraying
the trust that they've placed in us. So you can think of it in terms of the present,
the deaths, the future that
would be lost, the past that would be betrayed, or perhaps also in terms of this cosmic significance.
If we're the only place where there is perhaps life in the universe, or the only place where
there is intelligent life, or the only place where there are beings that are influenced by moral
reasoning. So the only place where there's
this kind of upwards force in the universe pushing towards what is good and what is just.
If humans are taken out, for all the value that there is in the rest of the natural world,
and I think that there is a vast amount, there are no other beings which are trying to
make the world more good and more just. If we're gone, things will just meander on their
own course with the animals doing their own things. So there's a whole lot of different
ways of seeing this. And Derek Parfit also pointed out this really useful thought experiment, I think,
which is he imagined these three different scenarios. There's peace, there's a nuclear war
in which 99% of all people die,
and there's a nuclear war in which 100% of all people die. And obviously the war where 100%
of people die is the worst, followed by the war where 99% of people die. But he said which of
those differences is bigger? And he said that most people would say that the difference between
peace and 99% of people dying is the bigger difference. But he thought that, because with that last 1% some kind of discontinuous thing happens where you lose the entire future, the second difference was the bigger one.
And there's this reason to be especially concerned with what are now called existential risks.
Yeah, so obviously that final claim, that the difference between two and three is bigger than the difference between one and two, that is
going to be provocative for some people. And I think it does expose another precipice of sorts.
It's a precipice of moral intuition here, where people find it difficult to think about the moral significance
of unrealized opportunity, right? So, because on some level, the cancellation, mere cancellation,
if you'd like to continue listening to this podcast, you'll need to subscribe at samharris.org.
You'll get access to all full-length episodes
of the Making Sense podcast
and to other subscriber-only content,
including bonus episodes and AMAs
and the conversations I've been having
on the Waking Up app.
The Making Sense podcast is ad-free
and relies entirely on listener support.
And you can subscribe now at samharris.org.