Tech Won't Save Us - The Dangerous Ideology of the Tech Elite w/ Émile P. Torres
Episode Date: May 26, 2022
Paris Marx is joined by Émile P. Torres to discuss why longtermism isn't just about long-term thinking, but provides a framework for Silicon Valley billionaires to justify ignoring the crises facing humanity so they can accumulate wealth and go after space colonization.
Émile P. Torres is a PhD candidate at Leibniz University Hannover and the author of the forthcoming book Human Extinction: A History of Thinking About the End of Humanity. Follow Phil on Twitter at @xriskology.
Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon.
Find out more about Harbinger Media Network at harbingermedianetwork.com.
Also mentioned in this episode:
Émile wrote about the dangers presented by longtermism and its relationship to Elon Musk.
Paris wrote about Jeff Bezos' vision of life in space and why Elon Musk is planning for climate catastrophe.
Elon Musk was tweeting about birth rates again (he has some odd views on who should be having kids) and retweeted someone praising a paper by longtermist Nick Bostrom.
After Musk moved to Texas, its governor said he liked the state's "social policies" (in reference to abortion limitations) to little pushback, then Musk forced California employees to move to Texas.
Support the show
Transcript
I take it to be a quasi-religious worldview that not only says you're ethically excused from caring
about the poorest people around the world, but also you're a better person for focusing instead
on the long term. Hello and welcome to Tech Won't Save Us. I'm your host, Paris Marx, and this week I have a
fantastic conversation for you with Phil Torres. Phil is a PhD candidate at Leibniz University in
Hanover, Germany, and has a forthcoming book called Human Extinction: A History of Thinking About the End of Humanity that will be published later this year or early next.
Phil has been writing a lot about this concept or ideology of long-termism lately, and it's one that I think listeners of this show and people who are interested in the tech industry should know more
about because it is highly influential among people in the upper echelons of the tech industry, the really powerful, incredibly rich people who also have a lot of influence and power over society and the direction of our society in the future, not to mention shaping our ideas of what that future should look like. Among them are people like Peter Thiel and Elon Musk, who have not only talked about being associated
with these movements, but have actually funded the kind of academic work that goes into legitimizing
them, as well as people like Jeff Bezos. And the issue here, as Phil and I talk about in this
conversation, is that they are incredibly focused on the future of humanity, not just people in the next 100 years or 200 years,
but people who will be living thousands
and millions of years from now.
And they consider that those lives have the same value
as lives that exist today.
So realizing the kind of millions and billions of people
that could live in the future
is then considered of much
greater value than helping, you know, the few billion people that exist on our planet today.
Unless helping those people helps to avert, you know, an extinction event or an event that will
stop the realization of this grand future of space colonization, of the merging of technology and
humanity to create what they call post-humans and the
development of the technologies and the systems that will be necessary to realize the future that
they envision. And as we describe in this episode, that then creates a justification for someone like
an Elon Musk to say that he needs to be the richest man in the world and have all these billions of
dollars to not pay taxes that could
actually go to solving the world's problems because he needs to extend the light of consciousness
into space so that we can start off this process of colonizing space and extending the human
population in that way. And as Phil explains, naturally, these kinds of ideas are emanating
from people who have quite a degree of power in society,
wealth in society, who are not affected by things like global poverty and global hunger
in a direct sense. I think that we could certainly say there are indirect consequences of that,
that maybe they don't really consider or care about. But by framing their work as something
that helps humans in the future, that helps to realize the future of humanity, that helps to extend humanity into space and what have you, that creates a much more reasonable justification for many people than simply to say that global hunger or poverty or mild climate change is not something that we should really be caring about or paying much attention to because it doesn't really affect those people directly, even though it has consequences for billions of people on the
planet who are not as powerful and wealthy as they are. So needless to say, I think that this is an
important conversation. And I will just note that after we recorded this discussion, Elon Musk
started tweeting about population levels again and US birth rates. And he also
retweeted someone who tweeted out a paper by Nick Bostrom, calling it likely the most important
paper ever written. And this paper, as Phil Torres describes, is one of the foundational texts of
this movement of long-termism that creates the justifications for, you know, everything that we're talking about in this episode. So that's just to further note how Elon Musk is attached to this and how Elon Musk
is incredibly supportive of it because it helps to justify not only his mass accumulation of wealth,
but also the project that he wants to realize of space colonization, extending the light of
consciousness, all of these things. So if you like this episode, make sure to leave a five-star review on Apple Podcasts or Spotify,
and also share it on social media or with any friends or colleagues who you think would enjoy
it or learn from it. And if you do want to support the work that goes into making the show every week,
you can join supporters like Isaac in Brooklyn, Ashley in Rotterdam, and Josh from Sydney,
Australia, by going to patreon.com slash techwontsaveus and becoming a supporter. Thanks so much and enjoy this week's conversation.
Phil, welcome to Tech Won't Save Us. Thanks for having me. It's a pleasure to be here.
It's great to speak with you. You know, you've been writing for a while about this concept of
long-termism that I think is really important for the audience to understand, especially when we
think about,
you know, these tech billionaires and the impact that they're having on the world and the kind of projects and ideas that they're trying to, you know, disseminate into the public consciousness
and have us all buy into. And I feel like, you know, when you initially hear the concept,
it brings to mind the notion of long term thinking, right? And for a lot of people,
that is a positive thing, right?
We need to think long-term to address major challenges like climate change
or to build infrastructure projects that will be important for addressing these things,
if we want to build high-speed rail or something like that, right?
But you explain that long-termism is distinct from simply thinking long-term
and comprises a series of very troubling beliefs
about the future and the future of humanity and the actions that we should be taking today as a
result of that potential future, right? So what is this ideology that you describe as one of the
most influential ideologies that few people outside the elite universities and Silicon Valley
have ever heard about? Yeah. So I would perhaps start by just emphasizing that there is a
distinction between embracing long-term thinking, which I believe is very important,
and that sort of mode of thinking about our actions in the present, how they might
affect the further future, people in future generations, 100 or 1,000 years from now,
is really important and different from this sort of long termist ideology or normative worldview.
So I think that the term long termism, which was coined around 2017, is really unfortunate.
The idea is not just that future people matter as much as present people, which is a claim that I
think a lot of moral philosophers would accept, but it goes beyond that. And there's sort of this
idea that value, however we define that, and philosophers have defined it
in many ways, is something to be maximized. And there is no intrinsic difference between
the loss of some value that exists in the world and the failure to bring some value into the world.
So that means, you know, if there were a million people in the future, let's say a thousand years from now, who perished, let's say instantaneously, so there was no suffering, but they just disappeared, and they all had happy lives, that would be the equivalent of failing to bring a million people into the world who would have had happy lives, people who could have existed but then never will.
Because when you look at things from the point of view of the universe, the idea is to just
maximize the total amount of value that exists cosmically. So there's fundamentally no difference
between failing to bring value into the world and then removing value from the world. So that's sort of a fundamental idea
that motivates the long-termist worldview. And as a result, a lot of the individuals who are
driven by and animated by this particular perspective have become obsessed with
calculating how many people could come to exist in the future. And the greater the
number, the greater the possibility there is for maximizing value. So this means that you would
want to not just consider the possibility of biological creatures, but also if there could
be digital beings living in computer simulations that also bring into the universe happiness or value or something like that, then you should consider
them as well. So there are various calculations ranging from 10 to the 54 to 10 to the 58. These are supposed to be sort of lower-bound, more conservative estimates of the number of digital beings who could exist in vast computer simulations in the future if we colonize space and we convert entire planets into something called computronium,
which is matter optimized for performing computations, and then create these really
vast computer simulations with all these digital beings who are all, for some reason,
living happy lives. That would result in a universe that's as full of value as
possible. And that is the ultimate goal of the long-termist, or what they would call a strong
interpretation of this sort of moral reorientation to the far future, a strong long-termist view.
That's the ultimate aim. So you can see that that's quite different than saying,
insofar as people exist on Earth in a million years,
we should care about them. Their suffering doesn't count for less than our suffering.
And insofar as there are actions we take today that might affect them, perhaps like nuclear waste,
or maybe some forms of climate change, if there's like runaway climate change,
or perhaps not runaway climate change, but if there's some catastrophic scenario that may actually have really long-term effects,
then we should think about their well-being when we act today. That's very different than saying that the failure of these digital people to come into existence a trillion years from now would be a
great tragedy. And therefore, because so much value could exist
in the future, we should prioritize ensuring that these people come into existence rather than,
for example, alleviating the plight of poor people today.
I think it's very odd to reduce humanity and how much we care about people that exist today and the future in terms of like
a very odd construction of value and like the value that the existence of a particular being
could bring and like whether they are a happy being theoretically or not. And when you were
describing, you know, because I've read a few of your pieces in preparation for this, and when you
were describing these kind of like worlds of digital beings that these people imagine could exist in the future as if that is like the same value, quote unquote, as someone who is living today.
And so like we should care the same amount about a digital being in some kind of like different world somewhere else in space that we have like set up versus
like someone who is suffering today. It was really odd. And it made me think of, like, The Matrix. But
then I was like, but then again, maybe they're not going to have like living beings plugged into the
computers to power them. So a little bit different. But I do think it's really odd to like
position the future of humanity in this way. And to think
about like, this is what should matter to us. This is what we should care about. This is what
we should kind of structure everything that we're doing right now around. And it also kind of feels
a bit to me like when we think today and like when we think about the economy and society and how
it's geared around, you know, motivating growth and these really kind of abstract ideas around
economic value and economic activity. In that way, I could kind of see the extension of it
to a certain degree, but then it still seemed really odd to say that digital beings and these
kind of post-humans, as you describe them, should be the kind of thing that we care about achieving rather than actually
addressing real problems in the present? Yeah. So there's lots to say about the details here,
but fundamentally, it would not be inaccurate to say that it's greatly influenced by a particular
moral theory called total utilitarianism. And the standard interpretation of total utilitarianism is that
our aim is to maximize the total amount of, let's say, happiness or pleasure, just good,
pleasurable experiences that people have. You want to maximize this not just within the population
of people that actually exist, but within the universe as a whole. So one way to increase the amount of
total value within the universe as a whole is to keep the population stable and to increase
the happiness of every single individual. So then the result is just a larger amount.
But another possibility is to simply increase the population itself. So if you have, you know, 100 people who are fairly happy,
and you want to make the universe better, maybe you could bring into existence 100 extra people
who are also fairly happy, then you'd have twice as much happiness. So underlying this is this
notion that happiness can be quantified, that more is always better. And tied into this is another very strange view that I find very
implausible and off-putting, which is this notion that people, you and I, are containers.
So it's sort of this container model of persons. We exist as means to an end. The end is to
maximize value. And we are the containers that are filled with value, positive value or maybe negative
value that would be bad.
So when you look at it from this perspective, okay, you have this container, you fill it
with as much value as you can, then to maximize value in the universe, it would be good to
create another container and to create as many containers as possible.
And so that's why there's fundamentally no difference between non-birth and death.
Death just removes a container from the universe. Non-birth prevents a container
from coming into the universe, assuming both those containers contain net positive amounts of value.
If you look at it from this perspective and you take seriously cosmology, here Earth is.
It's existed for four and a half billion years. The universe
has been around for 13.8 billion years. In front of us is billions and billions of years and a vast
universe with all sorts of untapped resources that we could go out there and exploit.
We exploit them by creating these vast planet-sized computers, or we terraform other
planets, we spread life and so on. And the result is that the number of people could be absolutely
enormous, the amount of value in the future could be absolutely enormous. And so then,
when you have this cosmic perspective, and you see how much value there could exist in the future,
versus how much value there exists right now, the value of
the future absolutely dwarfs the amount of value right now. Therefore, and this is kind of the
crucial point, the practical implication, which I find so problematic, if you want to do the most
good, then what you really should do is focus on the far future and not on current projects. So like this strong long-termist
view is that the primary value of our current actions is how they affect the long-term future, because
the future could be so much bigger than the present. So should we prioritize alleviating
global poverty? I mean, that would be a very good thing. But if we could increase the probability
that this huge number of people exist in the future
by a tiny, tiny amount, in terms of expected value, that will be so much greater than
alleviating global poverty. And ultimately, they coined this term existential risk or existential
catastrophe for any event that would prevent us from realizing all of these future people, which they refer to as
our potential. So existential risk would foreclose the realization of our potential, the creation of
all these future beings. Therefore, in expectation, you know, borrowing from probability theory,
the best thing we could possibly do as individuals and as a society is to focus on the far future,
focus on reducing existential risk. Since global poverty is not an existential risk,
it really shouldn't be prioritized. It shouldn't be priority one, two, three, or four,
as Nick Bostrom argues, or priority five, which is to colonize space as quickly as possible.
It's really much lower down on the list, same with
basically every threat that is not existential. And so ultimately, then this particular framework
would then lead people to tend to deprioritize and to minimize the significance of a wide range
of current day problems from climate change, to global poverty, to eliminating
factory farming. Climate change, to me, is very much bound up with climate justice,
and this important fact that polluters should pay. And it should be on us to help individuals in the global south who will be most affected, ultimately suffering the externalities of our industrial activities up in the global north. But from this sort of broader
existential risk or long-termist perspective, it's what Nick Bostrom would call a mere ripple
on the great sea of life. Yes, in the short term, it's going to be really painful. But
in the grand scheme of things, what really matters is that we colonize space, we simulate these people, and in doing so we maximize value.
It really shows like the potential harm of people who have a lot of power, then coming to believe
that this is what needs to be most important and what needs to drive their actions, right? Not the addressing of
poverty and of climate change and of, you know, inequality, the housing crisis, like all of these
issues that we're dealing with today that are causing a lot of harm and pain and suffering
that are so-called maybe reducing the value, the happiness that people are experiencing in the
present, because the real goal is to ensure that a lot
more people or digital people or whatever it is can be realized in the far future. Once we have
colonized space, developed these technologies, found the way to ensure that we can create digital
people, like, you know, there's all these kind of ifs and things that we hope to be able to,
or that these people hope to be able to realize in order to arrive at this future. And I want to come back to this concept of existential risk, right? Because I think that this is really important, this notion that we need to protect against these risks that could ensure that this future of space colonization and digital beings and whatnot cannot be realized, because it sets humanity back in some sort of way that ensures that this can't be done, right?
And so we need to protect against those things. But these like what they consider smaller level
problems like climate change or like poverty do not need to be addressed. And there's a concept
that you discuss from one of the folks that you quote, who believes in these things, called the grand battle, where he discusses how, like, in the next
century or couple centuries, like, this is really the moment when we determine whether we can,
you know, achieve this grand future that they are outlining, or whether we are going to like,
be stuck on our small planet and not be able to realize these things. Can you talk
about that a bit? Yeah, sure. Another way to articulate these ideas, which might be useful
in answering your questions, is the idea is that we're at a pivotal point in human history,
because we stand on the verge of colonizing space. And if values get locked in once we
colonize space, then that may determine the entire future of the universe, perhaps, if we're alone in the universe. So that's one reason
we're at a pivotal point. Another reason they think we're at a pivotal point is that artificial
superintelligence might be invented within the next century. And that once that's invented,
there's no turning back. And then suddenly we're joined in the universe, at least on Earth, by a system that is more intelligent than us in every way. And that's just going to be a complete game changer. And so then the question is, from their sort of value maximizing total utilitarian view, imagine two universes. One, basically things remain as they are right now, and we survive for the next billion years,
at which point the sun is going to make complex life on Earth impossible.
So in this case, the universe, let's say it contains, just to simplify, a thousand total units of value.
Again, assuming that value is the sort of thing you could quantify into units, which itself
is very dubious, but this is their perspective.
But then there's this other universe where we colonize space and we create these huge, huge, huge numbers, unfathomable numbers of digital beings.
As a result, in the next trillions and trillions of years, there are, let's say, a million units
of value. We're at a point now where perhaps we can choose between these two universes.
And the universe with a million units of value in
total is much better than a universe with just a thousand units of value. And I think somewhat
superficially, you might say, okay, yeah, it's better to have more value than not.
But when you look at the details, it's a wild, deeply implausible perspective. And so, yeah,
so the idea of existential risk is any risk that
would prevent us from creating that, what they would consider to be a better universe with this,
all this extra value. A lot of people intuitively think that existential risk means risk of
extinction, but that's not what it is. It's literally just any scenario that would prevent
us from simulating all these digital beings. So you could imagine, here's one existential
risk scenario. There's an all-out thermonuclear exchange. And as a result, there are these huge
firestorms, and they loft all of this soot into the stratosphere that blocks out incoming solar
radiation. There's a complete collapse of the food chains. And let's say 8 billion people starve to
death as a result. So that is one existential catastrophe.
But here's another. We continue to advance technology. We cure all diseases. We figure
out how to finally live in some kind of harmony with the natural world. We create
these nice eco-technological communities that are sustainable and so on. There's world peace. I mean, I'm intentionally
making this utopian because if we were to create this world and that were to last until Earth
became uninhabitable in roughly a billion years, and then let's say we sort of just died with
dignity and we're just like, well, our story's over. It's been great. We achieved basically
utopian worlds here. That would be an existential catastrophe just as
much as the first scenario. And the reason is that both would involve our potential being realized
only to a very tiny extent. The vast majority of our potential, which again is all of these 10 to
the 58 people in computer simulations would never be realized. And so there's a reason why the community that
has developed these ideas and is the primary defenders of this perspective is overwhelmingly
white and quite privileged. People who attend elite universities in the West, like Oxford in
particular, or are in Silicon Valley, there's a reason why they're attracted to it.
I take it essentially to be a quasi-religious worldview that not only says you're ethically excused from caring about, for example, the poorest people around the world, 1.3 billion
in multidimensional poverty, but also you're a better person actually for focusing instead on the long term. It just so
happens also that a lot of the biggest existential risks, quote unquote, are threats that could
potentially affect the richest people, unlike global poverty. So not only is there this ethical
motivation supposedly to worry about artificial intelligence and nanotechnology and stuff like
that. But also, if their arguments about the dangers of AI and nanotechnology are correct,
it follows that those are some of the few risks that could actually destroy the world such that
Musk and Thiel and the others suffer as well. A friend of mine described this sort of like an
apex model of risk. It's like, okay, for the people who are at the apex, the top echelon
of the socioeconomic hierarchy, what are the threats that are most likely to affect them?
Well, insofar as any threat will, it's not going to be climate change unless there's an improbable
runaway scenario. And Elon Musk and so on are worried about climate change insofar as it's a runaway effect. But otherwise, you know, they say we'll survive, where "we" means them and Homo sapiens as a whole.
But yeah, nanotechnology and AI, those could actually pose a threat to them. And so there's
multiple interlocking reasons here why the notion
of existential risk is very appealing to a lot of these individuals. And yeah, it really does give
them an excuse to just kind of ignore the plight of poor people, which is exactly what they want,
of course, to begin with. When you're filthy rich, why do you want to care about what the
poorest people are going through? Exactly. Why spend $6.6 billion to feed the world, to ensure that the global hungry are fed, as in the Elon Musk exchange last year, when you can plow that money into going to Mars and extending the light of consciousness to another planet, right?
That's very much the dichotomy, the framing that they're setting up: why should I have to pay taxes to the US government when that is going to limit my ability to make these investments in allowing us to colonize planets and what have you, right? Like, these are very much kind of the choices, the false choices, that they are setting up, to make that very explicit. And I
think, you know, we've been talking in kind of abstract ways,
but now I want to start to turn our conversation to deal with these much more concrete aspects of
this. And I feel like one of the pieces that is really important is that in this entire ideology,
this way of thinking about the future and what needs to happen, there's a big focus on technology and the realization of technological
progress, right? We need to ensure that technology can continue to develop. And the understanding of
technology, I feel like, is positioned in this way that is very self-serving to them, right?
Technology can only be understood in this way that develops in one particular fashion, right? There's one
route that technology can go and we need to ensure that it can keep making those developments
without thinking about can technology be envisioned in other ways, serve different goals?
Can we refocus on different types of technologies that realize different aims? No, there's just one
form of technology. It's the technology that
allows them to achieve this particular future. And we need to ensure that our resources go into
realizing and developing those technologies rather than, you know, doing all these other
things that might have other benefits, but wouldn't have these kind of long-term consequences.
Yeah. Perhaps the default view among technologists is a kind of techno-deterministic view that, you know, the enterprise of technological development is fundamentally unstoppable. Some scholars have called this the autonomous technology thesis. Technologization is just this autonomous phenomenon; it may depend on our individual actions, but ultimately we don't have any control over its direction. And certainly there is bound up with that a kind of
view of linear progress over time. And we know to some extent what technologies will be
developed in the future. And it's just a matter of, you know, sort of catalyzing the developments
needed to get there. And furthermore, there's this notion that technology is a value-neutral entity, that it's just a mere tool. As opposed to, and I feel like you were gesturing at this, how exactly do we want to realize these technologies? Values end up being embedded in artifacts, and that, you know, has all sorts of ramifications, not just for how the artifacts are used, but perhaps for our broader
worldview. So yeah, I think those are all very problematic. And the link between this contemporary
sort of long-termist community and these tech billionaires, some of whom are unfathomably
powerful, and who will unilaterally make decisions that will affect the world that we
and our kids, if we have kids, will live in. Just individuals making decisions that will affect
billions of people. And it's worrisome that a lot of these individuals hold the views that I just
mentioned, this sort of technology as neutral tool view, a kind of techno-determinist view, perhaps a notion that technology is
essential to progress, which- Is debatable.
Is debatable. When you look at the- A lot of the existential risk scholars themselves
suggest that there's maybe a 20% chance of human extinction this century. So just
this century, 20%. I mean, imagine getting on a plane and the pilot saying, like, there's a 20%
chance the plane will crash. Everybody obviously would flee, you know, would race towards the exit.
So that's what a lot of them believe. Why? Because of technology. All of them also say the primary source of risk is
anthropogenic. It arises mostly from advanced technologies, precisely the sort of technologies
that they want to develop because they'll turn us into radically enhanced post-humans. They'll
enable us to go to space. They'll enable us to upload our minds and then simulate huge numbers of people
in the future. There is a lot of overlap between these tech billionaires and the existential risk
community. So for example, Peter Thiel gave a keynote address at one of the effective altruist
conferences that they held. And effective altruism is this very quantitative approach to philanthropy
that has given rise, has been the petri dish
out of which long-termism has grown.
And furthermore, Peter Thiel has donated to the Machine Intelligence Research Institute,
which is a long-termist research group based in Berkeley, California, although I believe
they're moving to Texas. I think they're following Musk.
Of course.
I would double-check that, but I think that's the case.
And yeah, and then Musk, he's mentioned Bostrom on many occasions.
He seems to be pretty convinced by Bostrom's argument that we may very well live in a computer
simulation.
And in fact, I think in terms of explaining some of Musk's behavior, there are two issues
that come to mind.
One is the long-termist view, that ultimately the good he could do in the long run will
so greatly exceed whatever harms he might do in the present, because he might be instrumental
for getting us into outer space, that he just doesn't care that much if he upsets people,
if he's mean to people, if he harasses
people. And then on the other hand, I think also, he seems quite sure that we live in a computer
simulation. And I wonder if that doesn't also affect his behavior, where he's just like, well,
maybe none of this is really real. And so certainly, if you're him, you might have extra
reason to think this isn't real.
You know, what's the probability that you're going to become like the richest person on earth, maybe the richest person ever, maybe the most powerful human being ever in human
history.
That's pretty unlikely.
So you might think, yeah, maybe I am in a computer simulation.
It also gives you the opportunity then to like dismiss the consequences of your actions. Like,
oh, it's a computer simulation. So if I choose to do this thing that creates a lot of harm for
people, whatever, that's it, right? It does at least perhaps open the door to
trivializing the consequences of some of your actions. Because I don't know, these are just
digital people. I don't know if they're real, if they're actually feeling anything. Maybe the simulator
has some way of... Who knows? There's all sorts of possibilities if you accept this premise.
And his brother has said he's really bad with people. And I think that's something he's
acknowledged himself, right? I think what you're setting up here and what you're describing, I think it really kind of gives us insight into the way that these people think, right? Like for them, for
someone like Elon Musk, the threat that we face is from not developing our technology, from not
colonizing space to allow the light of consciousness, as he says, to extend into other
planets, and then, you know, continue on from there elsewhere. And then we can grow the population. We can ensure that if something happens to Earth,
the human species continues on somewhere else. And, you know, for someone like Musk,
we're just kind of downplaying all the actual threats and challenges that come with living
on a planet like Mars. We ignore the cancer that we get from the radiation and all that because,
you know, as long as we get there, that's the most important point, right? But then, on the other hand, we look at this
through the lens of like an average person who does not think through a long termist view and
is not the richest man in the world or has connections to some of the richest men in the
world. And seeing the actions of someone like an Elon Musk,
who is distracting us from the actual problems that we face, who says that electric cars and
that Tesla are the greatest contribution to fighting climate change ever in the world, even though
that is not true at all. And his action is actually delaying us from addressing the actual issues with climate change
and actually ensuring that we have climate justice, that we create a planet that can
actually survive in the conditions that we've created, you know, addressing world hunger,
you know, just ensuring a society that is like fair and decent for everybody. It really seems
that, as you described, with technology creating these actual risks of human extinction, letting someone like an Elon Musk and these people who have these really odd beliefs about the future take the lead actually creates a lot of risks, not just for those of us who aren't the richest man in the world, but even for the human species as a whole. Yeah, I very much agree. There's just too much to say
on this point. I mean, one thing that comes to mind right away is one of the leading billionaire
donors right now to long-termist causes, a 30-year-old named Sam Bankman-Fried,
who perhaps you've come across. Yeah. Last week's episode, if people have listened, with Bennett Tomlin, we talked about Sam Bankman-Fried. And he also mentioned, if people don't remember,
that he is an effective altruist who believes in accumulating as much money as possible
to realize these sorts of visions.
Yeah. So, okay. Yeah. And so sorry for not having listened to it at the preview.
No, that's okay.
But that's great that he was mentioned. Yeah. So he's motivated by the effective altruist notion
of earn to give. Like literally some of the EAs have argued, go work for a petrochemical company,
go work on Wall Street. Yeah, these are evil, but they're really good ways for you individually to
make a whole lot of money. And then you take that money and give it to a charity. I don't know.
I guess there's a certain kind of logic there. But anyway, so I mean, he's made his money from cryptocurrencies. And this is not my area of expertise by any means. But the point
is that cryptocurrencies have a massive carbon footprint. I'm not gonna be able to remember the
exact details precisely. But there's a study from just a few years ago that found something like this: even if we were to become net zero as a civilization next week, but Bitcoin were to persist, we still would not be able to keep temperatures from rising above 1.5 degrees Celsius.
So, I mean, this is a massive problem.
My point then is like, so Sam Bankman-Fried is somebody who is trying to do good, but he's ultimately involved in a Ponzi scheme that has a massive
carbon footprint. So this would be a case then of somebody who is motivated by this sort of long
term thinking, but ultimately, like, I don't know, they might be doing a significant amount of net
harm, ultimately, by contributing to climate change, and so on. And there's, of course, lots to say about Musk, and, you know, downplaying climate change scenarios that do not involve a runaway greenhouse effect. A runaway greenhouse effect
would make Earth completely unlivable. It's probably what happened on our planetary neighbor,
Venus, as a result of water vapor rather than carbon dioxide, but it perhaps could happen here.
Seems to be very improbable. Consequently, catastrophic climate change, yes, it will be very bad, mostly for poor people, but we'll survive. So you end up kind of
minimizing them. You see that literally in interviews with a lot of these individuals.
And then, of course, as you alluded to earlier in the conversation, Musk sort of dangled the
$6 billion that would be needed to alleviate, was it extreme poverty or?
It was hunger.
Hunger. Okay, that's right. So I can hardly express how upset that makes me. But, you know,
I think from his perspective, which really does seem to be infected, if you will, by this sort of
sci-fi perspective, this sort of long-termist framework, thinking about the future of humanity
spread throughout the heavens in digital form and so on.
The problem of hunger today, it's just really a minor problem.
If you take seriously that the non-existence of digital beings in the future, trillions
of years from now, is just as bad as the death of somebody now, it really does follow that you shouldn't be
so concerned about global poverty. It's just not a big deal. It's a small fish. There are much
bigger fish out there to fry, such as ensuring that these people come into existence. The problem
with that, of course, is that if somebody doesn't come into existence, they're not harmed because
there is no person to be harmed. It's incredibly immoral to say that there are people who exist today, and because I believe that there are going to be all of these digital beings, like a million years in the future or something, that we shouldn't actually take the actions that would help people today, because we might realize these lives that would
not be recognizable to us as humans today, which maybe that's okay. Maybe that's not a problem.
Maybe that's not recognizing something's humanity because it doesn't look like those of us who
exist right now. But still to say that something like that is of equal value to someone who is starving,
like Elon Musk lives in Texas.
He used to live in California.
Like there are homeless people and people who are struggling like very close to him.
And to say that that doesn't matter because I, as someone who is incredibly rich, need
to get us to Mars and develop these neural
link technologies and whatever to try to realize this future is just really disgusting.
And to be able to have the influence and the power to make a lot more people feel that
this is an acceptable trade-off, I think it shows a deep fundamental problem with the
world that we've allowed to be created.
Yeah. This perspective on ethics, it shares fundamental similarities with certain
approaches in economics. It really is morality as a branch of economics in some sense. We
are these just fungible little containers to be multiplied as much as possible, to fill the universe with as much value as possible.
Maybe a good illustration of the underlying reasoning behind saying that it would be a greater tragedy to feed all the hungry people in the world but never realize all these digital people is this: you can imagine the standard trolley scenario. There's a runaway trolley, it's heading straight down the track, and there's five people who are on the tracks and just
oblivious, whatever. There's a little sidetrack with one oblivious person, all of them innocent,
all of them deserve to live. And so you're by a railway switch, you pull the switch,
most people say, well, in this forced choice situation, yeah, I guess I would. It's a tragedy either way. But
it's better, I guess, that one person die than five people. Not all philosophers would actually
agree with that. But that's a pretty common intuition. But now you can imagine a variant
where there's nobody on the track ahead of the runaway trolley, and there is one person on the
side. But as you see the trolley racing down the track, somebody shouts to
you and says, I can't explain the causal details now. It's very complex. But I guarantee if
you were very smart, you knew like super advanced physics, you would understand that if the
trolley continues straight, it will prevent five people who would have happy lives from
being born. People who otherwise would have been born if the train goes off
on the sidetrack.
So then the question is, do you pull the switch?
And for a total utilitarian, absolutely.
Because if you bring five people into the world who have happy lives and you lose one,
you get more total value than if you save the one who actually exists and fail to bring
these five people into the
world who would have happy lives.
For me, from my perspective, and this gets at the crux of a fundamental difference between
me and the strong long-termists and total utilitarians, those are very much bound up
together once again.
From my perspective, it's atrocious if you pull the switch, even if you know with 100%
certainty that five unborn people who would have happy
lives will never be born. If you're unborn, you don't suffer. I don't think that's a tragedy.
There have been I don't know how many people who could have been born in the past who never were.
Nobody in their right mind is going to weep over them. And so yeah, for the long-termist,
yeah, you just absolutely would pull that lever, kill the one living person,
ensure that the five people
are born. And that is the reasoning that underlies this view of should we help alleviate global poverty
or should we work to colonize space, become super intelligent cyborgs, upload our minds,
you know, and so on.
What I was thinking about as you described that is, especially when you're touching on like
the population question and the unborn question, like there are a lot of things that these tech
billionaires do and say that are, I think, incredibly like problematic, but that seemed
to be like accepted as something that makes sense by a lot of people, not so much by me,
simply because it's something that they would say and that they would do, right? Like, we have Jeff Bezos, who is investing in this like 10,000
year clock that I feel like can be seen like, in a sense, like, yes, long term thinking is good.
But as kind of like an object of this kind of long-termist thinking, because he too is saying,
you know, we need to build these colonies in space so that we can realize a trillion humans
who are living in these colonies. And if we stay on earth itself, we're going to be subject to
stagnation. And I can't remember the other word that he uses, but it's essentially like, you know,
we're going to suffer as a species because we won't be able to continue to grow into a trillion
people by inhabiting these space colonies
and continuing to grow. And then Musk obviously talks a lot about the light of consciousness
that he is trying to spread by allowing space colonization. And I think those terms are really
weird when you think about them. There's a lot kind of wrapped up in that. He talks about his
businesses as a form of philanthropy. He doesn't need to donate his wealth. He doesn't need to pay taxes
because the actual businesses that he's running are philanthropy for the human species because
it's achieving these incredible things. And then he also says these really worrying and concerning
things about population, right? That people aren't having enough kids, especially smart people, he says, need to be having more kids. There was that thing when Texas passed the
abortion laws where he wouldn't say anything about it and the governor said that he was fine with it.
I guess, what do you make of the broader project that these tech billionaires are trying to
carry out and the consequences of that? Yeah. I mean, I should also add that it's
difficult to know exactly what motivates these people. Absolutely. This system selects for
people who have, let's say, sociopathic, egomaniacal tendencies, and selects against people who aren't like that, who aren't capable of screaming at someone, firing them, not caring. The fact that those people will have lost their livelihood, maybe they have kids and
so on, doesn't keep you up at night. Those are important qualities to have within our capitalist
system to be successful. And so it's hard to tell the extent to which some of these tech billionaires
are in some deep way kind of motivated by the long-termist view, which is
ultimately kind of an ethical view. It might be the case that the long-termist view is useful to
them because, again, it justifies. So then if you're a sociopath and you still want to appear
as an ethical person, oh, there's this framework over here that you can incorporate. And actually,
it suggests that what you want to do anyways is the right thing. Also, the fact that there could be all these huge numbers of future
people, from the capitalist perspective, I mean, those people are also consumers.
Yeah.
You know, so maybe it's not just about maximizing value in the universe. It's about maximizing your
own, you know, your own bank account. So that being said, it's really worrisome that there just
isn't a sufficient amount of reflection on things like space colonization, on the underlying drivers
of this whole industry. For example, there has been some work recently on the possible risks
of space colonization, which for the longest time was simply accepted
by virtually all futurists, all long-termists, and so on, as something that would significantly
reduce the probability of an existential catastrophe. So the idea is that the more
spread out a species is geographically on Earth, the lower the probability of extinction,
because any single localized catastrophe is not
going to affect the entire population. So the same thing applies to the cosmographical,
not just geographical realm. So we spread out, we become multi-planetary, then we increase the
probability of our survival. But there has been some scholarship recently that's really very
compelling that suggests that colonies on Mars actually would really
increase the probability of catastrophe here on Earth. That, you know, these colonies eventually
will become Earth-independent. Their living conditions will be so radically different than
ours. It's entirely possible that there might be modifications to the human organism,
either through natural processes or by incorporating technology, you know, that is through cyborgization. On Mars, you ultimately get a kind of like
a variant of Homo sapiens. Again, they have different interests and so on. Eventually,
they're going to want their independence. It's hard for political scientists to imagine
us creating these Earth-independent colonies. Earth-independent meaning they can exist without
the help of Earth. They don't need food
to be shipped from Earth to Mars. It's hard to imagine these colonies not eventually wanting
their independence. And once that happens, the situation may be very volatile within the anarchic
realm of the solar system, where there's no overarching government or referee or something
like that, that can mollify the parties and ensure
that there's peace between them. So multiple factors from egomaniacal tendencies to greater
profit, more consumers, to maximizing value are motivating these billionaires to pursue
certain projects whose potential unintended consequences they haven't really thoroughly thought about, and which could ultimately put humanity in a much worse situation than we otherwise would have been in if we had just stayed here on Earth. So there's just an endless number of points to make about this.
But it's very disconcerting. Again, a fundamental problem, something that really, really does sort
of keep me up at night
is the fact that you have these individuals who have become or been allowed to become
so rich and so powerful that they single-handedly will make decisions that will,
in really non-trivial ways, influence what the future looks like for us.
And that's a really bad situation.
Even worse, they're
influenced by some of these views from Nick Bostrom and others. Yeah, I feel like the future
and space you're describing sounds a bit like The Expanse, which is a show that Jeff Bezos
apparently really likes and that he paid to ensure would continue to be created for a while on Amazon
Prime when it was cancelled by SyFy, you know, just interesting things that
come up there. But I feel like the point that I want to go back to to close our conversation
is just that these are people who are incredibly wealthy, who are kind of the elite of society,
whether they are the billionaires who have an incredible amount of money, influence, power,
or these academics who came up through elite
universities and really don't have the same kind of concerns or troubles as much of the rest of
humanity, and certainly not people in the global south or people who are homeless or what have you,
right? They're very separated and divorced from those experiences. And thus, they turn their thinking and their
visions to things that might affect them or things that seem higher and above the everyday
concerns of everyday people, right? And I feel like the real risk, whether it is something that
these people really believe, something that Elon Musk really believes, or whether it's something
that justifies the actions that he already wants to take, it creates this narrative and this ideology that can be weaponized
or that can be utilized to then say, yeah, all of this suffering exists. And yes, it's going
unaddressed because of the way that I'm choosing to deploy my capital and my power, but that is justified because we
are able to look beyond the everyday concerns of you who are working your job in the Amazon
factory and just trying to get by or in the Tesla factory and suffering the horrible racism in the
factory, but just trying to eke out a living, we don't need to be concerned
with those petty concerns so we can think more broadly and in the long arc of human history and
human civilization to actually serve the broader species instead of just thinking about the everyday.
And I guess it provides them with an ability to then justify their mass accumulation of wealth,
their disinterest in human suffering as it exists today,
and these other problems that we face. And then by doing that, not only perpetuates suffering,
but creates a whole ton of risks for human society as it exists today and for all of the
billions of people who inhabit the planet. I think that's really well put. Anybody who
happens to glance at any of the
articles I've written might see that there are numerous cited cases of these super wealthy
individuals who specifically say that climate change isn't an existential threat. Therefore,
the implication is it's not something that should be prioritized unless it's a runaway scenario that
would cause our extinction. Otherwise,
yes, it's really bad. We can all agree on that. But it is not a top priority. And so this is a particularly clear and salient example of how this particular mode of thinking leads to powerful
individuals embracing views that are harmful and unjust because they're disproportionately harmful to people in the global south who had little to do with climate change. Bangladesh is responsible for, what, like less than 1% of all carbon emissions? I mean, it's probably much less than that. I can't remember the exact figure, but Bangladesh will be decimated. Millions of people will have to migrate and so on. So yeah,
I feel like that's just a particularly egregious case
of this view sort of justifying a blithe attitude towards non-runaway, but still catastrophic
climate change. It's a real shame. It's very upsetting. And it's worrisome moving forward.
I completely agree. It's a huge risk. And that's why people need to be more aware of this. Phil,
I really appreciate you taking the time to chat.
I've really enjoyed reading your work on this, and certainly I'll have links in the show
notes for people to check it out.
So thanks so much.
Thanks for having me.
Appreciate it.
Phil Torres is a PhD candidate at Leibniz University in Hanover, Germany.
You can follow him on Twitter at @xriskology.
You can follow me at @parismarx, and you can follow the show at @techwontsaveus.
TechWon'tSaveUs is part of the Harbinger Media Network,
and you can find out more about that
at harbingermedianetwork.com.
And if you want to support the work
that goes into making the show every week,
you can go to patreon.com slash techwontsaveus
to become a supporter.
Thanks for listening.