Tech Won't Save Us - Don’t Fall for the Longtermism Sales Pitch w/ Émile P. Torres
Episode Date: October 20, 2022

Paris Marx is joined by Émile P. Torres to discuss the ongoing effort to sell effective altruism and longtermism to the public, and why they're philosophies that won't solve the real problems we face.

Émile P. Torres is a PhD candidate at Leibniz University Hannover and the author of the forthcoming book Human Extinction: A History of the Science and Ethics of Annihilation. Follow Émile on Twitter at @xriskology.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon. The podcast is produced by Eric Wickham and is part of the Harbinger Media Network.

Also mentioned in this episode:
Émile recently wrote about the ongoing effort to sell longtermism and effective altruism to the public.
Peter Singer wrote an article published in 1972 arguing that rich people need to give to charity, which went on to influence effective altruists.
NYT recently opined on whether it's ethical for lawyers to defend climate villains.
Nathan Robinson recently criticized effective altruism for Current Affairs.
Transcript
They've already established connections with major governing bodies or agencies like the
United Nations.
They've fostered connections with tech billionaires like Elon Musk and so on.
As a result, they're in a position to change the world in really significant, very non-trivial
ways.
And yet, again, that theoretical foundation is just
pretty weak. And I think that's a big problem.
Hello and welcome to Tech Won't Save Us. I'm your host, Paris Marx, and this week my guest is Émile Torres.
Émile is a PhD candidate at Leibniz University in Hanover, Germany, and the author of the forthcoming book,
Human Extinction, A History of the Science and Ethics of Annihilation.
Now, you might remember Émile from our previous episode back in May, episode 116,
called The Dangerous Ideology of the Tech Elite, where we talked about long-termism, this particularly concerning worldview held by a lot of people who are quite powerful in the tech
industry, that we really need to be focused on the very long-term future of humanity, and that
comes often at the expense of taking actions that would
address real material problems in the present. So instead of really going after climate change or
trying to address human poverty and suffering and hunger, we should be focused on colonizing
the cosmos so that we can extend the light of consciousness for millions and millions of years,
for example. You may have also noticed that these ideas have
been getting a lot more attention in the past few months because William MacAskill published a
recent book called What We Owe the Future, and that has been very effectively sold and marketed
around the world, or at least in Europe and North America, to the degree that many major
publications have published
articles and interviews with MacAskill effectively endorsing this idea that he is putting forward
and that he is presenting in a very approachable manner in the book, you know, without some of the
aspects of this ideology or of this worldview that would really quickly turn people off and show people what it
is at its core. And so as a result of that, I wanted to have Émile back on the program so that
we could dig into long-termism a little bit more, so that we could talk about this promotion
campaign that has been happening over the past number of months to get these ideas into people's
minds, into people's heads, to get them more open to them, but also to
talk about effective altruism more generally, this kind of idea that helps to justify the actions of
these really rich people to accumulate a ton of wealth so that they can then deploy it in
philanthropic ways to donate to various causes. And this makes it seem not so bad that the way
they earn this money is incredibly terrible,
often very harmful to society, to many people in our society.
But that's okay because as long as they can then donate that money to causes that are
supposedly doing good in the world, then we shouldn't be so concerned.
Or at least they shouldn't be so concerned because they are not on the receiving end
of the harms of how they make this money.
So I think that this is a really important conversation.
I think it's a good follow-up to the conversation that I previously had with Émile.
And I think that you're really going to enjoy it.
If you like the show, make sure to leave a five star review on Apple Podcasts or Spotify.
You can also share the show on social media or with any friends or colleagues who you
think would enjoy it.
And if you want to support the work that goes into making the show and putting together these interviews on really critical tech topics, you can join supporters
like Ian from Edinburgh and Kiara in Oakland by going to patreon.com slash tech won't save us
and becoming a supporter. Thanks so much and enjoy this week's conversation.
Émile, welcome back to Tech Won't Save Us.
Thanks so much for having me. It's great to be here.
You know, I'm like conflicted, like I'm happy to have you back on the show. But I also hate the topic that we're talking about.
And I've read William MacAskill's new book; it's really this argument for this long-termist
philosophy that we were talking about last time you were on the show. And so I wanted to have you
back on because since we had that conversation, longtermism has really experienced, I think it's
fair to say, this real kind of increase in attention, right? It's kind of everywhere all
of a sudden. There's a lot of arguments in favor of it. There's a lot of really positive pieces
about it in the New York Times, in Time Magazine, in a whole load of these big publications.
You know, I'm not sure what it's been like in the US, but I was in London recently and there were ads for the book all through the tube. So it's kind of all over the
place. There's this big push to get people to buy into this notion that the book is selling and to
make people believe that this is some kind of positive vision for the future. And so, you know,
I wanted to have you on because certainly we talked about this ideology of long-termism before, but I think that there are some more aspects of this to dig
into, especially as it has gained this prominence. And so to start that discussion, there are really
two topics I feel like that we're hearing a lot about and that we should probably be knowing more
about. And the first of those is effective altruism. And then the other one, as I said, is long-termism. So I'm wondering to start, could you talk a bit about
what these two concepts are and how they relate to one another?
Yeah, sure. First of all, maybe it's worth mentioning that the promotional push for
MacAskill's new book, I mean, it has millions of dollars behind it. There is no shortage of funds
to buy advertisements in the London Underground or whatever. And the movement from which long-termism
emerges, effective altruism, itself has just an enormous quantity of money that wealthy donors,
tech billionaires like Sam Bankman-Fried, co-founder
of FTX, for example, have been willing to just give to this community. Right now, they have
$46.1 billion in committed funding. And in addition, there are various organizations,
companies, and so on, like OpenAI, that are aligned more or less with the EA or long-termist worldview
that have been independently funded by tech billionaires. So the community is just awash
in money, so much money, they don't know what to do with it. Literally, they're giving out
$100,000 prizes, five of them, for blogs promoting or discussing long-termist ideas or effective altruist ideas.
So it's just a huge amount of money. So it's not surprising that Will MacAskill has been able to
get all of this attention for his book. It's not the result of merit. It's the result of money.
So basically, the effective altruist community was sort of born
in like around 2009. The first organization that was motivated by effective altruist ideas was
Giving What We Can. And that was founded in 2009, in fact, by Toby Ord, who's at the University of
Oxford and was sort of co-founded with Will MacAskill. And the idea behind effective
altruism is sort of inspired by the global ethics of Peter Singer. So famously, Peter Singer wrote
this article about famine and affluence. I think it was published in 1972, if I remember correctly.
But basically, his argument was, it shouldn't matter where in the
world someone is suffering. So, you know, imagine yourself walking down the road and you see
a child who is drowning in a lake. And you just bought some new shoes or, you know, a new suit
and so on. If you were to go save that child, you would ruin your shoes and suit. Should you do it?
A lot of people would say yes. And he
says, well, what's the difference between the child drowning, you know, 15 feet away from you
in a lake and somebody starving on the other side of the world, like in Bangladesh at the time,
I believe. There really shouldn't be any, you know, fundamental kind of moral difference between
these two situations. And therefore, insofar as we care about helping others, which is a sort of basic definition of altruism, then we should be willing to give a considerable amount of our money or at least a minimal amount of our money to help people around the world.
And so once you have that idea, there is a further question, which is actually if you are convinced that you should give away some of your income
to help other people, which charities should you give it to? And there have been various,
you know, rankings of charities in the past, but the effective altruist said, actually,
maybe there's a better way to discern which charities are the best ones. And so they wanted to use, you know, science and evidence
and reason to pick the best charities. And, you know, so for example, one of the conclusions that
they've stuck with for many years now is that giving to the Against Malaria Foundation, I believe
it's called, which then would manufacture and distribute bed nets to prevent, you know,
individuals in regions of the world that are susceptible to malaria from getting malaria,
from being bit by these little flying hypodermic needles called mosquitoes.
You get a much bigger bang for your buck than, for example, if you donate to disaster relief.
Oftentimes that money just kind of gets lost.
Or if there's an autocrat in power, they'll end up taking a lot of the money.
So far, I mean, this sounds pretty good.
If you look at the details, it turns out that there are some methodological problems, like the notion of quality-adjusted life years, QALYs, which we could discuss later if you'd like, as well as, if you take it seriously, there end up
being these rather repugnant conclusions, like maybe you should actually support sweatshops.
One of their main ideas is earning to give. So maybe the most good you could do is not,
for example, joining a charity, becoming a doctor who then goes to some place in the global south that is somewhat
impoverished and needs better health care. What you should do instead is go work on Wall Street,
and then you can make a whole lot of money, take that money, donate it, and that ultimately,
if you crunch the numbers, you could do more good that way, or even working for a petrochemical
company. Will MacAskill has argued that in the past.
Yeah, there was an article in the New York Times recently asking, like, is it ethical to
defend or work for, like, a major oil company or something like that, and their verdict
apparently was that it can be. But what you're saying also brings to mind Sam Bankman-Fried,
right? You know, the CEO of FTX. And his argument is that he's
engaging in all this crypto stuff. So he makes a lot of money that he can give to these causes to
like make the world a better place. Yeah, exactly. I mean, he is one of the great success stories
within EA of somebody who was convinced to earn to give and he thought, well, how, you know,
could I maximize the amount of money I get to then donate it to supposedly the best
causes out there?
And so he decided then to go into cryptocurrency.
And he himself, as you are very aware, has described it as more or less a kind of Ponzi
scheme.
And there's a huge carbon footprint.
I know FTX has tried to address that a little bit.
I think it's inadequate.
But there's a huge carbon footprint to cryptocurrencies. There are a lot of people
in the global south who get completely screwed over by it. A lot of people in the global north
who get screwed over by cryptocurrency as well. Funnily enough, he's an individual who embodied
that EA ethos, this idea of earn to give, and then ended up becoming, you know, this multi
billionaire crypto kingpin. So, yeah, it's very troubling to see, I guess. And, you know, just to
pick up on what you're saying about the approach to this, that, you know, you make this money,
and you make these donations, and this is how you make the world a better place. Like, it really is
pushed to promote philanthropy. I feel
like in this moment, especially when there's a growing kind of questioning of the role of
philanthropy and whether this is actually making like the kind of the positive changes in the world
that we've been told over long periods of time, you know, questions about the Gates Foundation
and things like this. And in MacAskill's book, you know, this notion is promoted really heavily, right? You know, he's very explicit that it's far better to put your money into effective nonprofits
than to change your personal actions or things like that, right?
You know, he says at one point, like, why are people getting rid of plastic?
This makes no sense when they could be donating to effective nonprofits that would make a
much bigger difference in the big scheme of things, right? Yeah, I think that's exactly right. A critique
that one could make and philosophers have made is that the whole EA kind of approach,
in general, certainly in the past, has taken for granted the various systems that are in place.
The idea is, you know, assuming that these systems will continue to exist
and that maybe even they're good, maybe they're even beneficial. You know, capitalism has
resulted in all sorts of material, you know, progress and so on. A lot of them draw from,
you know, Steven Pinker in his book, you know, The Better Angels of Our Nature,
where he argues essentially that... Yikes. Yikes. I know. You know, neoliberalism has like kind of been
very much a net positive.
And so ultimately you're trying to figure out ways
as individuals within this system
to maximize your impact,
your positive, hopefully positive impact in the world,
which then neglects the possibility
that many of the most significant global problems
are the result of the systems themselves. So like Nathan Robinson in Current Affairs,
you know, had this really good recent critique of effective altruism, where at the end, you know,
he made the case that perhaps the most effective altruism there is, is socialism. It's just revamping in
fundamental ways the system that is currently in place and is the result of sort of an underlying
cause of climate change, of global injustices, the wealth disparities, and so on. So yeah,
with MacAskill, for example, the idea that we should go work for petrochemical companies and then donate to charities that are trying to alleviate the
suffering caused by climate change is kind of mind-boggling and a bit maddening.
No, I completely agree. And I think it's really interesting that you say that, right? And there
are a ton of things that I could pick up there. But just to mention one piece of it,
I think that we'll return to something like this a bit later in our conversation, but also how this notion of earn to give and the way that it is argued for has
changed over time as they have wanted effective altruism to be open to a wider range of people.
In the book, there's 80,000 Hours, which is this group or this movement that MacAskill's associated with,
you know, he talks about it in the book. He talks about his previous book arguing for things like
this. And he doesn't talk at all about going to work for a petrochemical company or a crypto
company or any kind of other like terrible organization that's doing terrible things in
the world. You know, his whole argument in the book that he's putting out there for the mainstream
public is that you should be doing your work or having your experiences. And then
you can like found organizations that promote effective altruism, or the only instance that
he talks about where people are working in an industry and then puts money into some organizations
or whatnot, is a programmer at a tech company who still does his programming job and then gives a bit of money
to some effective altruist organization or something like that, right? So it's a real kind
of, they really want to downplay that. And I believe in one of the articles you wrote,
you said that they like to say how people like to draw from these older comments where we said
people should go work for petrochemical companies and stuff like that, but that doesn't represent
us anymore, right? So I think that's really interesting.
Yeah. So initially, so I believe MacAskill co-founded 80,000 Hours, named that
because that's the average number of hours that somebody will spend throughout their career
working. And yeah, so as part of their marketing strategy, they initially foregrounded
this idea of earn to give: okay, it's this counterintuitive idea, but you know,
if you crunch the numbers again, sort of assuming that the system can't be changed or shouldn't be
changed, then if you crunch the numbers, maybe this is a really good way to actually maximize the
amount of good that you do in the world. And later on,
they realized that not only was that sort of a bad strategy, because a lot of people found it
absolutely abhorrent that you'd go and work on Wall Street, like Matthew Wage, who was one of
the early effective altruists, a philosopher at Princeton who gave up his opportunity to go to
Oxford to get his PhD in order to work on
Wall Street to donate his money. So yeah, they sort of realized that actually a lot of people
find this to be a really off-putting idea. So it was a mistake. And I think also perhaps they
did a bit more research and realized that the earn to give idea is a good suggestion
for a much smaller percentage of young people
than they initially thought. This actually gets at one of the main problems I have with
effective altruism and its long-termist offshoot, which is that very often the research has trailed behind
the activism. They've been so excited to go out and change the world. To some
extent, they failed to properly interrogate the underlying philosophical ideas that motivate
their prescriptions for what people in the world right now should actually go and do.
And so, yeah, initially they said a lot of people should go and earn to give.
Then they took a step back and thought a bit more about it and realized, actually, this
is not such a good idea.
Again, not only just for marketing reasons, but maybe it's not the best way for a lot
of people to maximize the amount of good that they do in the world.
And so you find a similar thing with long-termism, where these bold claims about what we ought to do right now
in order to improve the long-term future of humanity are actually just based on
really flimsy, highly contentious, some would say very implausible, sort of deeper philosophical
views. So yeah, maybe just a very sort of high-level criticism that I would
have of these movements is that they have jumped the gun. They're out there trying to change the
world in really significant ways without having a really robust theoretical foundation for their
views. I mean, the long-termist offshoot, again, that's one of the three main cause areas of effective altruism, in addition to eliminating
factory farming, which I think is very good, and alleviating global poverty, which I also
very much would get behind. But the community in general has kind of shifted away from those,
over time, over the past five years, away from those two other cause areas
and towards long-termism. And they've already established connections with major governing
bodies or agencies like the United Nations. They've fostered connections with tech billionaires like
Elon Musk and so on. As a result, they're in a position to change the world in really
significant, very non-trivial ways. And yet, again, that theoretical foundation is just
pretty weak. And I think that's a big problem. And it's one reason I'm trying to, you know,
within that sort of public arena, to push back on some of these ideas and to let people know
that the long-termist view is much more radical and much less defensible than a lot of the most
vocal advocates and champions of this worldview would have you believe.
Yeah, I think that's put really well. And you can see it in the arguments that MacAskill makes in
the book, right, near the end, where he's saying, like, how can you get involved? What can you do? It's all about how can you get
involved in promoting effective altruist organizations, the movement of effective
altruism, you know, how you can promote long-termism. It's not like how you can get
involved in these causes that are going to make the world a better place. It's all about how do
you spread long-termism and effective altruism further to more and more people. And just on your point there about
long-termism, maybe you can give us like a brief definition of what it is. But one of the things
that stood out to me in the book as MacAskill was making this argument for long-termism that
just kind of blew my mind was he really kind of presents it as an extension of the civil rights
movement. You have this expansion of rights to indigenous people, to black people, to gay people. And now we are expanding rights to
the unborn, the people of the future. Like, it's just kind of a wild framing to me that,
you know, is presented as something that just makes like total sense. But maybe you can give
us a brief idea of what long-termism is. Yeah. Also, just with respect to the word unborn,
there was a study I was reading about just the other day. I believe it was conducted by
some of the long-termists. And they found that how people respond to questions about the value of
future generations depends on the wording. That's unsurprising. A lot of studies find that. But if
you talk about future generations,
the percentage of people who are moved by that drops consistently. But ultimately,
what they're talking about is the unborn. I mean, on the view that MacAskill defends in his book,
this is called the total view. And it was named that way by a philosopher, Derek Parfit, who is
sort of the grandfather of the whole long-termist movement.
He was the supervisor of Toby Ord.
On the total view, there is no intrinsic difference between an individual who dies and a possible person who is not born.
So maybe there are other reasons why the death of somebody might be worse.
You know, it might affect loved ones and so on. But if you bracket
those, there is no difference between the death itself and the non-birth of some person who could
possibly exist. The easiest way to understand that is that on this view, people are understood to be
the containers of value. So we're just these vessels. We can be filled with value, which you
might take to be happiness, a certain quantity of happiness then, or maybe even a negative quantity
of happiness. And the total view says that a universe that contains more net total value or
happiness is better than a universe that contains less. And if you then derive an obligation from that,
as the utilitarians would do, they would say, well, then we have a moral obligation to create
a universe with as much value, as much happiness as possible. One way to do that is to increase
the happiness that's experienced by all the people who currently exist. But another way to do that is
to create new people, i.e. value containers, that contain net positive amounts of value. So if you double your
population, and if everybody has the exact same amount of value, say a happiness level of 10,
you double the population, you get twice as much value. And so the universe then becomes twice as
good. If you triple it, you know, it becomes three times as good, and so on and so on. So behind it, there's this really controversial idea about the intrinsic badness of
death versus non-birth, and consequently this kind of moral duty, which may not be absolute,
but there's still a kind of like moral, you know, push then to encourage people or to engage in
activities that will maximize the total number
of people in the future. So all of that said, maybe it's useful then to actually define
long-termism. So it's basically just the idea there's a weak and a strong version. A lot of
the people in the community, as far as I can tell, are most sympathetic with the strong version.
Some of them, like MacAskill and Hilary Greaves, have explicitly defended the strong version. The strong version is definitely what
you find in Nick Beckstead, who wrote one of the founding documents of the long-termist ideology
in 2013. It was his PhD dissertation, as it happens. But nonetheless, MacAskill discusses
in an article that he posted on the Effective Altruism Forum,
that for marketing reasons, they should go with the weaker definition. So the weaker definition is just that ensuring that the long-term future of humanity goes well is a key priority. And the
stronger version is that this is the key priority. So it should be, you know, it tops the list.
It's more important than anything else. Global poverty, no. Animal welfare, no.
Any kind of problem that, a contemporary problem that's, you know, facing humanity
that isn't going to significantly change how much value comes to exist in the very far future
is just not one of our top priorities. Nick Bostrom, who is sort of the father of long-termism,
has made this more explicit,
said, you know, for utilitarians, our top priority should be mitigating existential risk.
And on this view, existential risk is basically anything that would prevent us
from creating, you know, astronomical amounts of value in the future. So if you dig a little
deeper, what does it mean to say that ensuring that the long-run future of humanity goes well, what exactly does that mean? And the meaning is at least one way to understand it
draws from the total view. So the future will go better if we not just survive for a really long
time. At least we have another billion years or 800 million years on Earth before Earth becomes uninhabitable as the sun
turns into a red giant and its luminosity increases, the oceans boil, and so on.
But if we colonize space, we could potentially increase the human population by many orders of
magnitude. There could be 10 to the 23 biological humans in the Virgo Supercluster, our local supercluster of galaxies.
And even more, if we create planet-sized computers in which we simulate digital people,
these would be basically digital value containers, digital vessels that would realize some kind of happiness.
Then we could even more vastly increase the future population. And so behind the
long-termist view is this vision of what could be that involves space colonization, the creation of
computer simulations, and the simulation of enormous numbers of digital people,
all for the aim of maximizing the total amount of happiness that exists in the future,
within our future light cone. That's the region of the universe that's accessible to us
in principle. And there are also other reasons too. I mean, they might say, well, you know,
there are great works of art that will be created in the future. You know, there's ever more just
societies that we could create. But for a lot of this, the foundation is just maximization. More is better. I mean, MacAskill actually has a section in his book called Bigger
is Better. We should make our civilization as big as possible. The future should be big, yeah.
The future should be big, as big as possible. And so behind the very kind of, you know,
approachable, even appealing sort of way that
they advertise it, like future people matter, you know, we can affect them, how the longer
run future of humanity unfolds is important, is this particular vision, which
is radical and bizarre.
And I think a lot of people who first like encounter it in its details find it very off-putting, especially when they consider
the fact that there are real, actual people who are suffering in the world today, and that these
individuals' pain and discomfort and misery and anguish might end up getting neglected or sort
of brushed to the side because what really matters on the long
termist view is how things go over the next million, billions, even trillions of years from now.
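As a purely illustrative aside, not from the episode and with invented welfare numbers, the "total view" arithmetic Émile describes comes down to a simple sum over "value containers": because the score of a world is just the total of everyone's welfare, adding more net-positive people always raises it, and doubling a population at the same welfare level doubles the score.

```python
# Illustrative sketch (invented numbers) of the "total view" arithmetic
# described above: a world's score is just the sum of each person's welfare,
# so doubling a net-positive population doubles the "goodness" of the world.

def total_value(welfare_levels):
    """Total-view score of a world: the sum of every individual's welfare."""
    return sum(welfare_levels)

world_a = [10] * 1_000   # 1,000 people, each at welfare level 10
world_b = [10] * 2_000   # the same welfare level, but twice as many people

print(total_value(world_a))  # 10000
print(total_value(world_b))  # 20000 -- "twice as good" on the total view
```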
Yeah. And I want to pick up on that more in just a second. And I would say if people want to know
more about long-termism, they can, of course, go back to the last episode we did back in May,
where we discussed this in much greater depth, right? But you were talking there about
the value, right? And how people are seen as value containers and that value is associated
with happiness or well-being in MacAskill's book. And the thing that I really took away from it
when I was reading the argument that MacAskill was making was very much like, look, there can be
a ton of people today and they are very happy,
or we can have like way more people in the future. And maybe they're not all as happy,
but as long as they're like slightly above the threshold for, you know, having a positive life
and not being neutral or whatever, then this is, you know, in the long run, a better outcome. And then what that communicates to me,
even though it's not like explicit in the text of the book, is that why would you significantly
increase the life expectations of people today if that would take away from being able to realize
all these other people in the future when you have limited
resources? And, you know, especially for people who are interested in philanthropy and giving
money to particular causes, it's like, okay, we should get people up to a level where they are marginally
happy or fulfilled or what have you. And that is, of course, based on subjective
interpretations of what happiness is, not a kind of objective take,
or we want to raise people to this much income or what have you. But as long as people feel
in their lives that they are slightly happy, even if they are very poor and live in kind of
abject conditions, then this is acceptable. And we shouldn't want to significantly raise them up
because we need to think about where we're putting our resources.
And if we are putting all of our money into, you know, raising the global south to the
incomes of the global north or something or the living standards of the global north,
then that takes away a lot of our resources that we could be putting into, you know, ensuring
that we have this great long term future that is going to be fantastic.
And we lock in the values that ensure that happens
and blah, blah, blah, right? It's a very kind of troubling way to approach the future, how we think
about people, how we think about society. And just on your point about the people that he's quoting,
throughout the book, he's constantly quoting people like Nick Bostrom and Toby Ord as inspiring
this thinking or talking about extinction in these
particular ways. And like, you really don't find out like the core of what these people are
thinking, which is incredibly troubling, as you described in our last episode. And just finally,
like when you think about this approach, one thing that stood out to me was that MacAskill said his
supervisor was an economist turned philosopher, right? And so this kind of base kind of economic thinking
is at the core of what he's considering when he is denoting or considering the value in an
individual human being. And in the same way that we think of these kind of abstract notions of
economic growth and how we should be promoting that and like not really thinking about the
material consequences of that growth,
like who actually benefits or whatnot, because as long as this like abstract value
is increasing, then that is a net positive, we assume, then it's similar with this, right? As
long as the net value that we are measuring in like the total lifespan of human history is going
up, then this is like a positive thing. And we don't need to drill down into what that actually
means for people's lives.
Yeah, yeah, exactly.
I mean, it is very economic.
I mean, it's almost like, you know, morality is kind of a branch of quantitative economics.
It sort of assumes that, for example, happiness can be quantified. You know, there are these units of well-being or welfare out there or happiness.
Some have called them utils, a single unit of
utility. And yeah, so a lot of these individuals, I think, because they realize the importance of
marketing, they are really careful about how they present their views and which parts of their views
they conceal. And they don't want people to think about too much because most morally normal people
will find them to be really, like I said before, abhorrent. Because it's so quantitative,
one of the criticisms of utilitarianism, which by the way, historically utilitarianism sort of
emerged around the same time that capitalism did. And I don't think that's
just a coincidence. What a surprise. And so one of the criticisms that has been made of
utilitarianism, which is very influential within this sort of long-termist community,
in fact, an overwhelming number of effective altruists are utilitarians. Their own surveys
show that. I think it's something like 80% are utilitarians.
Utilitarianism is sensitive only to the total amount of value, not to how it's distributed across individuals. And so by that, I mean, imagine a universe that contained only one individual, i.e. one value container. And that individual realized 100
units of happiness. You could imagine a second universe in which there are 100 individuals,
and each of them have one unit of happiness. Which universe is better on the total view,
on the total utilitarian perspective? They're the same. And so this gets at your point,
that it may be better to have an enormous number of future people who have very low
kind of happiness levels than a universe that has a
much smaller number of people that have really high amounts of happiness. If you crunch the
numbers, you know, if you have a billion trillion trillion people, each with, you know, five units of
happiness versus a universe that has, you know, just 10 people with a thousand units of happiness,
you know, the former is better
because what matters is the total quantity. That is the bottom line. The view that the first universe
is better than the second was labeled, in fact, by Derek Parfit himself as the repugnant conclusion.
And he took it to be a major point against the total view. It's a big problem. Otherwise, he wouldn't have
called it the repugnant conclusion. But the thing is that since then, a lot of people have tried to
make the case, including many long-termists. And MacAskill himself.
MacAskill himself, exactly. I believe in the book. The idea is that, well, okay, maybe it's not so
repugnant. Why would that be? Well, because
one thing we know about human psychology and human cognition is that we're, you know,
a professor of mine used to say we're qualitative geniuses and quantitative imbeciles. And, you know,
we're really good at like qualitative things like recognizing faces, but not good at understanding,
for example, the vast difference between 10 to the 20 and 10 to the 21.
It's like just an enormous number difference between those two figures. And so perhaps it's
because we're so bad at thinking about big numbers that we come to see the repugnant conclusion as
repugnant. But if we were just better at, you know, cogitating these large figures,
then we'd see that actually a universe with enormous numbers of people with low levels of
well-being really is better than one with just a much smaller population of people that are very,
very happy. A lot of philosophers absolutely do not accept that and think that that's total nonsense or bullshit.
Pardon my language.
But nonetheless, I mean, they have over time become more and more open to just accepting this implication of the total view.
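As a rough, invented-numbers illustration of the comparisons Émile walks through here: because only the sum matters on the total view, one very happy person ties a hundred barely-happy ones, and an astronomically large, barely-happy population beats a tiny, very happy one.

```python
# Rough illustration (figures invented) of the comparisons described above:
# on the total view, only the sum of welfare matters, not how many people
# there are or how the welfare is spread among them.

one_person_very_happy = 1 * 100          # one individual at 100 units
hundred_people_meh    = 100 * 1          # a hundred individuals at 1 unit each
print(one_person_very_happy == hundred_people_meh)  # True -- ranked as equally good

# The "repugnant conclusion": a vast, barely-happy population outscores a
# tiny, very happy one, because 10**33 * 5 dwarfs 10 * 1000.
vast_barely_happy = 10**33 * 5           # ~a billion trillion trillion people at 5 units
ten_very_happy    = 10 * 1_000           # ten people at 1,000 units each
print(vast_barely_happy > ten_very_happy)  # True -- the vast world "wins"
```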
So as a result, like you were saying before, another point you were making is that, yes, when you sort of focus on the very long-term future of humanity,
millions, billions, trillions of years in the future, a lot of our sort of contemporary problems
do end up sort of shrinking to almost just points, just almost invisible specks, you know,
on the cosmic timeline. And that is deeply problematic. And part of that arises from this idea of people as just containers of value.
So if somebody can exist in the future with a net positive amount of value, then they
should exist.
Again, on the utilitarian view, we have this moral obligation then to bring
them into existence in order to maximize the total amount of value in the universe.
You know, they like to use expected value as a way of determining which actions we should take.
In other words, like, for example, which charitable causes we should prioritize.
And as soon as you include these merely possible people that might exist millions and billions and trillions of years from now,
perhaps in these vast computer simulations that are just spread
all throughout the universe, crowded with digital people that for some reason are happy. I don't
really know why. But as soon as you include them in the expected value calculations, then the long
term future wins every time. So Nick Bostrom, for example, has calculated that there could be 10 to
the 58 digital people in the universe in the future.
That's just a really, really enormous, absolutely incomprehensible number.
When you compare that number to, for example, the mere 1.3 billion people who are in multidimensional poverty today,
the question then of, well, which action should you take? Should you
help to lift these people out of multidimensional poverty? Or should you try to focus on ensuring
that 10 to the 58 people come into existence in the far future? Well, the second option,
the far future option, absolutely wins by an enormous margin. So there's just no question. You know, Bostrom
himself has said, if you were to decrease the probability of an existential risk, which again,
is any event that would prevent us from creating all of this future value by ensuring these digital
people come into existence. If you were to reduce the probability of existential risk by just a really, really tiny percentage point, you know, 0.000000 and so on, 1%, that is morally
equivalent to saving the lives of billions and billions and billions of actual human beings.
So on this framework, if you found yourself in front of two buttons and there
was a forced choice situation, you can choose one of these two buttons. Do you push it to increase
the probability that these 10 to the 58 people come into existence in the far future by a tiny
amount? Or do you save billions of people or help to lift 1.3 billion out of multidimensional
poverty and so on.
The Bostromian is going to push the first button every time.
I mean, there's just no question about it.
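To make the expected-value reasoning Émile describes concrete, here is a hedged sketch: the 10^58 figure is Bostrom's estimate mentioned above, the 1.3 billion figure is the multidimensional-poverty number from the conversation, and the probability reduction is an arbitrarily tiny number invented purely for illustration.

```python
# Sketch of the expected-value comparison described above. 10**58 is Bostrom's
# estimate of possible future digital people; 1.3 billion is the multidimensional
# poverty figure mentioned in the conversation; the risk reduction is a made-up,
# deliberately minuscule probability used only for illustration.

future_digital_people = 10**58      # merely possible people if "the future goes well"
risk_reduction        = 10**-20     # tiny drop in existential risk (invented figure)
people_in_poverty_now = 1.3e9       # people in multidimensional poverty today

expected_future_lives = risk_reduction * future_digital_people
print(expected_future_lives)                          # 1e+38 "expected" future lives
print(expected_future_lives > people_in_poverty_now)  # True -- the far future "wins" by ~29 orders of magnitude
```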
I think it's really interesting that you say that because when you think about MacAskill's book as well, one of the things that is interestingly absent is this discussion of
the digital people in the far future, right?
He'll talk about how there can be a ton of people in the far future, but there's not so much mention of like
the digital beings, right? Even though at one point in the book, he says that if we would all die,
but we had invented artificial general intelligence, then civilization will still
continue as long as those computers continue to operate, right?
So all of us like fleshy human beings can die, but civilization will continue because we've
created these digital beings. So like the hints of it are in there, but he won't actually dig
into it in the way that they will in some of these other writings that are not presented for
the mainstream audience, right? And in one of the articles you wrote, you noted that he even said
that in a Reddit Q&A or what have you, that this is still something that he was interested in,
he just didn't have room for it, apparently, in this book. So this is another piece that I wanted
to talk to you about, and certainly feel free to pick up on the digital being thing. But there's
been a real campaign to sell long-termism to the general public. And this book is very much part of this campaign
or like a spearhead for it, right?
In trying to present these ideas
in a way that can appeal to a more mainstream audience,
to a more general audience,
so that you even have people like Bill McKibben
or like the actor,
Jesus, I can't remember his name off the top of my head,
Joseph Gordon-Levitt,
who is giving positive blurbs to this book,
Rutger Bregman as well. He called me out for being critical of long-termism because he was like,
this book is great, which is very worrying to me. But these people who many people would otherwise
think are real trustworthy on particular issues, I don't know about Gordon-Levitt, but at least
Bill McKibben and Rutger Bregman are people that, you know, I think people generally feel are trustworthy individuals who have, you know, some good ideas, and they are then kind of providing positive
blurbs for a book like this. And then as you're saying, you know, there's this big marketing
campaign being built around it in order to say, you know, this is a real thing that we should be
thinking about. This is concerning and this should be a mainstream cause that people get
concerned about, that people adopt,
that people get invested in. How does this process of selling long-termism to the public take place,
and how effective do you think it's been? Yeah, good questions. With respect to Bill
McKibben blurbing the book, the long-termist and existential risk frameworks, they really have roots in transhumanism. And McKibben has been,
to some extent, a vociferous critic of transhumanism. So it's really perplexing
that he actually blurbed the book. And I spoke to a number of people, including people in the
community, who themselves are somewhat quietly critical of long-termism. And pretty much
everybody was just utterly bewildered by the
fact that, you know, a lot of people said, oh, he must not have actually read the book.
Which happens. Yeah.
Yeah. Which happens. Of course. Yeah. I think anybody who's written a book has experienced
something like that, where somebody just says, oh, why don't you write it and I'll put my name
to it? Which is really bizarre, but that's just the way it happens sometimes. So yeah, I mean, the rollout of the book, you know, there was a lot of anticipation,
a lot of planning that went into ensuring that this reaches the maximum number of people.
I'd mentioned earlier, I mean, there are millions of dollars.
I've been told that there's, you know, something like a $10 million budget just to promote
this book.
And the PR firm that MacAskill's hired, you know, gets something
like $12,000 every month. So yeah, they really wanted to, you know, I think saw this as the
moment to go out and evangelize for the long-termist worldview. And in fact, my guess is that there probably was a vote, more or less, at some of the
institutions, you know, based around Oxford, that are the hubs, you know, the epicenters of the
long-termist view. They probably took a vote and picked MacAskill, you know, because he's,
I think, you know, Zoe Cremer recently described him as, I don't know,
just the most approachable, the straight guy, I think is the word she used. You know, sort of
just the normal guy who's, you know, affable and isn't too peculiar, like some of the other figures
like Eliezer Yudkowsky is, like, frequently mentioned as a moral weirdo and so on. And
the reason I think that is because I know that when Toby Ord
wrote his book on existential risk, so McCaskill's writing about long-termism,
Ord in 2020 published a book on existential risk, which are just sort of companion books. I mean,
they're meant to dovetail each other. The reason Toby Ord wrote it was because people at the
Future of Humanity Institute took a vote. And they decided that, well, you know, he has a wife and kids, so he looks nice and wholesome. You know, he got his degree from
Oxford. It's a prestigious institution. So that's really good. From one perspective, it looks very
kind of slimy. And, you know, I mean, that's the nature of marketing. It's a bit slimy. It's all
about manipulation. And so anyways, yeah, I suspect they took a vote on MacAskill and then made sure
that he had a lot more money than Toby Ord had.
Toby Ord had, I think, a total of something like $38,000 to promote his book.
Will MacAskill has something like $10 million.
And yeah, I think so far, I mean, they've had a lot of successes.
As you sort of gestured at early on, I mean, there were articles either by or about MacAskill in The New York Times, The New
Yorker, BBC, The Guardian. And a lot of these articles were really quite positive. New Yorker
was a bit mixed. They did actually talk about some of the people who have concerns about long-termism,
such as Zoe Cremer, who I just mentioned. So I think that, so far, it's been fairly successful, at least in getting the word out.
And my whole take on this is that the underlying ideas, like the total view, I mean, that's sort
of widely seen as deeply problematic by a lot of professional philosophers. But nonetheless,
it's a legitimate idea on the marketplace of ideas. And it's
something that I personally would be willing to engage with and to critique within the confines,
within the milieu of academia, sort of just debating ideas and so on. But as I mentioned
earlier, the activism oftentimes has come before the research and sort of outstripped the research. And, you know, my main push right
now is to try to meet them in the public square and to do what I can to at least inform people
of just how radical this worldview is and just how potentially dangerous it could be as well.
And in doing that, to perhaps undermine to some extent their efforts
to evangelize, to convert people. Convert is a word that some long-termists themselves have used
to convert as many people to the long-termist religion, I would say, as possible.
You know, there's a big focus within the book as well with MacAskill kind of saying,
like, we need value lock-in so that people have these values for the long-term. And he makes an explicit connection to religions
and the fact that as religions took over and grew, they inculcated these values in people that we
still see, you know, hundreds or a thousand years down the road, right? So that comparison is quite
explicitly made within the text of his book. Yes, there are a lot of parallels,
troubling parallels between long-termism and religion. And I mean, I could talk about that for 10 minutes. I mean, there are so many. But yes, it's very disconcerting. You know, I think
along these lines, exactly, they see the upcoming 2023 Summit for the Future, hosted by the United
Nations, as potentially another key opportunity
to really mainstream these ideas. I mean, MacAskill has been explicit about that in a
podcast interview with UN Dispatch, which in fact, the introduction to that podcast, a short little
article, mentions that long-termism is really being embraced to a significant extent by the
foreign policy community and indeed the United
Nations itself. There is a lot of success so far. In fact, I mean, in terms of lock-in, I mean,
you could sort of use that idea against the long-termist view itself and say, maybe the
Summit for the Future might mainstream these ideas. It perhaps might even take some of the underlying long-termist values
and codify them in some kind of official document. And as a result, those values,
those long-termist values might be locked in for a very long time. And therefore,
for critics like me, there's a certain urgency to getting out there right now,
sounding the alarm,
saying actually these ideas are potentially really dangerous or would have implications that would
exacerbate the plight of the poorest and most disadvantaged individuals in the world today
right now before the long-termism gets locked into some UN document. And furthermore, just going back
to another point you made, MacAskill did say in a
Reddit Ask Me Anything that the reason he didn't mention digital people and the possibility,
which is brought up by the person who asked the question, that there could be enormous numbers of
digital people in the future and that ensuring that this actually comes to pass is very important
is because he ran out of space. But my own guess is he probably understood,
there probably were conversations behind the scenes that, you know, if long-termism becomes
linked too tightly with this particular normative futurology, you know, that we should go out and
create all these digital people, that that might actually be really bad for long-termism. I mean,
in the same way that effective altruism's reputation was, you know, damaged by being too tightly coupled with the idea of earn to give, long-termism's reputation might be
damaged by being too closely associated with, you know, the notion of digital people. He sort of
goes out of his way, I suspect, to not mention them too much. But ultimately, if you just dig into the papers that they are writing within the community, oftentimes for others who have already subscribed to the long-termist view, this notion of digital people in the far future comes up, along with the possibility that there could be 10 to the 45 of them in the Milky Way.
That's one calculation that MacAskill himself has used in papers.
Or 10 to the 58, Bostrom's calculation. I mean, this is just very central
to this whole picture of what the future ought to look like. It's very worrisome that long-termism
has become so influential in the world today and seems to have a certain kind of momentum towards
becoming even more influential in the future. And there is a kind of time sensitivity
to critiquing these views because, yeah,
I mean, once tech billionaires
have really fully embodied them
and the UN has published documents
that encapsulate these ideas,
then it may become just really difficult
to alter the trajectory of the future of humanity.
Which is exactly what they want, you know? Like,
I guess to try to start to wrap up our conversation, like, you know, obviously I've been
reading your work, I've been paying attention to how these things have been developing for the past
number of months, but like really reading MacAskill's book, seeing what he writes in there,
but also knowing the sorts of things that he doesn't talk about, right, that he leaves out of it, it really does feel to me like a kind of technocratic wet dream, right? This idea that
you're not only trying to shape the present and what's going on now, you're not only trying to
plan what's going on in society at this moment, but you're literally trying to, as he says in the
book, lock in these values that will shape the future, not just for thousands
of years, but for millions of years to come. And these are the values not of the collective public,
right? These are not values of compassion and, you know, needing to look after like the least
well-off among us, you know, the poorest people, trying to help them, but rather values that say we need
to ensure that the maximum number of people with the most value exist in the future. These are the
values of people who are very disconnected from those real struggles that people face around the
world today, you know, values that are held by rather well-paid people at these particular
institutes at universities like Oxford, but also
by, you know, higher-up people in the tech industry who
very much agree with this outlook, you know, people like Elon Musk, people like Peter Thiel,
who are very much associated with these movements. And so it seems particularly troubling and really
kind of like a red flag, something that we really should be paying attention to, that these people are trying to push this particular set of values on us and are trying to lock that in as society's values and how we think about problems and how we distribute resources for many years to come. So a lot of these individuals think that we're on the cusp of creating artificial general
intelligence.
And many of them also accept this argument that was most extensively delineated in Nick
Bostrom's 2014 book, Superintelligence, which is that as soon as we get artificial general
intelligence through human-level AI, then we will very quickly get artificial superintelligence.
Because any sufficiently intelligent cognitive system,
whether it's biological in nature or artificial,
is going to realize that one way to better achieve whatever goals it has
or has been programmed to have is to get smarter; being smarter is going to be useful.
So consequently, as soon as you get AGI, that system is going to realize,
well, if I'm smarter,
then I can do whatever I'm supposed to do much better.
So it will have then an incentive to try to modify its source code in order to increase its cognitive abilities, problem-solving abilities, essentially.
And so the whole reason I mentioned that is once you get ASI, artificial superintelligence,
the future may be completely out of our control.
There may be no way to influence its decisions and its behaviors once it exists.
And exactly the now hackneyed analogy is, well, the future of the gorilla sort of depends
on human actions.
And we're just sort of superior to it in terms of our intellectual or problem
solving abilities. And there's just no way that it can control what we do. So there might be the
same kind of dynamic that ends up occurring between us and this super intelligence. So
as a result, it's really important that we load in certain values to the AGI or the ASI early on,
because those values, if we're the only intelligent creatures
in the universe, those values might not just shape the future millions and billions of years
from now, but the entire future of the cosmos within our light cone, the accessible region,
the entire future will depend on what values we build into it. So it's really important that
the values we select are ones that will ensure the realization of our vast and glorious, as Toby Ord puts it, long-term potential.
And what does that mean? What's our potential?
Well, at least one big part of it has to do with what we were talking about earlier, what you just hinted at, which is maximizing the total amount of value or happiness in the universe. So this is part of their vision, that AGI, artificial general intelligence, is right around the corner.
And it's crucial that long-termists play a part in shaping the first AGI systems because, you know,
then that will ensure that the long-termist ideology ends up determining the way the
entire future of the cosmos ends up looking like.
I mean, again, maybe this gets back at the sort of religious parallels, because it's a very
apocalyptic kind of view. The end is near. Some fundamental rupture in human history,
some fundamental transformation, in fact, they call it transformative AI, is right around the
corner. We live in this time of perils, another term they use, where, you know, existential risk is particularly high. And once we get artificial superintelligence,
then the risk will significantly decrease. We'll be safe from, you know, extinction or whatever.
It's a very worrisome situation. And I think many of these individuals are motivated to create AGI
for this reason that, you know, if AGI doesn't destroy us, then it's going to usher in a techno
utopian world. And that's reason then to not just ensure that we understand the potential risks of
artificial super intelligence, but that we create it maybe sooner rather than later.
I think that's really well put. And I think that the comparison to, you know, the kind of
apocalyptic doomsayers is really important as well,
right? And really draws out some of what the thinking is there. And I would also say,
talking about the artificial general intelligence or artificial super intelligence really shows the
connections there to the tech industry as well, and the influences of the kind of ideas that come
out of some of the particularly worrying tech circles, I think it's fair to say.
But our conversation has gone on for a while.
We certainly could have talked about far more
because there's so much to dig into on this topic.
There's so much worrying shit,
both in the book and beyond the book
that we could talk about.
But I wanna end with this question.
You said how they are really making a big push right now
in order to try to get these ideas accepted by the
mainstream, right? Try to get them better accepted by a more general public, by people beyond their
circles, to get them to believe that long-termism is something that we should be pursuing and
dedicating resources to. We've just gone through a period where the tech industry tried to sell us
Web3, right? And these kind of ideas
of crypto, and that this was going to take over. And there was a pretty significant backlash to those
ideas and to those plans, right? And I think that many people would acknowledge that that backlash
did help to restrict the ability of those companies and those ideas to really expand in the way that
they wanted to. Certainly there were other factors as well now, as we see higher interest rates and these projects collapsing and a whole load of
other things, right? But I wonder, what do you think about our chances to actually stop this,
right? To actually push back the tide of long-termism that these people are trying to sell us?
I think it's a formidable challenge. It's going to be really difficult because
they already have, as I mentioned
before, infiltrated major governing bodies like the UN. There are people in the UK government who
are listening to individuals like Toby Ord and so on. I mean, there are many other examples. So
it's going to be very difficult. They've already sort of established a kind of infrastructure
in the world that is the foundation of powerful institutions. And the tentacles of influence have
reached around much of the globe already. Now they're appealing to the average individual,
the general public. But I don't think the situation is hopeless. I do think it's possible
that if enough people understand that long-termism could be dangerous, that it's built on faulty
philosophical foundations, or at least dubious philosophical foundations, and that if it's taken
seriously by individuals in power, it will end up minimizing a lot of the harms being caused
to, for example, people in the global South, which are a result of climate change, there could be a kind of, you know, maybe sufficiently large pushback from the general
public against this idea. And that may ultimately vitiate its kind of impact on the world. So for
example, a colleague of mine who works in the community, so I won't mention his or her name, but, you know,
I was talking to them about the impact of MacAskill's book. And their view was that this is either going
to result in long-termism becoming widely accepted, as MacAskill hopes, or it could
completely backfire and result in, you know, all sorts of
backlash against the long-termist worldview that really defangs it, you know, and really kind of
robs it of a lot of the momentum that it currently has. So my hope certainly is that
people will understand that long-termism is not the same as long-term thinking and that we
absolutely need more long-term thinking in the world today. But long-termism goes so far beyond
long-term thinking in adopting all of these bizarre views about the importance of creating
10 to the 58 digital people in the future. This is not the right worldview at the moment. I mean,
our societies are shaped by short-term thinking and a myopic perspective on the future. Quarterly reports, you know, four-year election cycles and so on. So we desperately need more long-term thinking. But long-termism just swings that pendulum so far to the other side and casts our eyes on the future millions, billions, trillions of years from now. So it's really not the antidote to short-termism that's ubiquitous in our society today. Yeah, so I do think that there's
hope. And the key is not even to present arguments for why long-termism is flawed. It's simply to
reveal the underlying ideas that long-termists don't want you to see. Because again, the average,
morally normal person will look at those underlying ideas and say, that's too bizarre.
I can't accept that. That's my mission right now. And my hope is that people will,
yes, be properly informed about what long-termism really is all about.
No, I think that's really well put. And, you know, hopefully this episode will help to inform some more people about what
is wrong with these ideas, you know, the crazy ideas that are associated with long-termism.
We didn't even get to all the stuff around increasing the population, having more kids.
That's very closely aligned with what Elon Musk has been talking about as he continues
to reveal new children that he's had.
Émile, it's great to speak again.
Thanks so much for taking the time.
And of course, I'll link to a bunch of the stuff that you've noted in the show notes
as well.
Thanks so much.
Great.
Thanks so much for having me.
It was a real pleasure.
Émile Torres is a PhD candidate at Leibniz University and the author of the forthcoming
book, Human Extinction, A History of the Science and Ethics of Annihilation.
You can also follow Émile on Twitter at @xriskology. Thanks for listening.