Tech Won't Save Us - How Effective Accelerationism Divides Silicon Valley w/ Émile Torres
Episode Date: December 14, 2023
Paris Marx is joined by Émile Torres to discuss Silicon Valley's recent obsession with effective accelerationism, how it builds on the TESCREAL ideologies, and why it shows the divide at the top of... the AI industry. Émile Torres is a postdoctoral fellow at Case Western Reserve University. They're also the author of Human Extinction: A History of the Science and Ethics of Annihilation. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is produced by Eric Wickham. Transcripts are by Brigitte Pawliw-Fry.
Also mentioned in this episode:
Émile wrote about the TESCREAL ideologies and AI extinction scaremongering.
Timnit Gebru also did a great presentation on TESCREAL.
Paris wrote about the religious nature of Marc Andreessen's techno-solutionist manifesto and about Sam Altman's (temporary) ouster from OpenAI.
The Year In Tech livestream for Patreon supporters is on December 17 at 1pm PT / 4pm ET / 9pm GMT. More details on Patreon or Twitter.
The Information did a great profile on effective accelerationism.
Forbes revealed the man behind the e/acc moniker Beff Jezos.
972 Magazine reported on Israel's use of AI to expand targets in Gaza.
UK plans a "hit squad" to replace public servants with AI. Paris wrote about the threat it poses.
Support the show
Transcript
Imagine like a five foot by four foot map.
And this map shows you where different ideologies or different positions are located.
EAC and long-termism would be about an inch apart.
In contrast, like the AI ethics people, like Emily Bender and Timnit Gebru and so on,
they would be like three feet away.
So if you stand far enough from the map, EAC and long-termism are in the exact same location.
Hello and welcome to Tech Won't Save Us. I'm your host, Paris Marx, and this week my guest
is Émile Torres. Émile is a postdoctoral fellow at Case Western Reserve University and the author
of Human Extinction: A History of the Science and Ethics of Annihilation. Now, Émile has been on the
show a couple times before and in the Elon Musk series, where we talked about long-termism,
effective altruism, and these ideologies throughout the tech industry that justify,
you know, a lot of the actions of powerful and rich people, but make it seem like they are moral and justified. And so we have continued to see that expand over the past year. And Émile
and Timnit Gebru, who was on the show earlier this year, have written about kind of the longer
history of these ideologies, which they call TESCREAL, which is an abbreviation
that we'll get to in the show, and more recently, effective accelerationism, which is linked to this
manifesto that Marc Andreessen wrote recently, the Techno-Optimist Manifesto.
And so I thought it was a good time to have Émile back on the show to dig through all of this stuff,
to try to understand kind of where the minds and the brains of these
people in the tech industry are and how they have been shaped or deluded, you know, by these
fantasies, by these ideas that set themselves up as kind of the masters of history and of the future
as the people who shape what humanity and what human society is going to look like in the future. And that justify,
you know, the actions that they take for their own benefit, but act as though it's something in
service of us all. And so this conversation goes in many directions. You know, we dig into what
these different ideologies mean and where they come from. But then we also talk about kind of
the reasoning behind them and the influences on them to show
their real links to right-wing ideologies, right-wing movements, including far-right and
fascist thinkers that are very explicitly cited by Marc Andreessen and by others of these people
in particular. And it just further justifies, like my previous conversations with Émile,
my conversations with Jacob Silverman and other people, why we need to be forcefully pushing back on these ideas and on these ideologies,
being pushed by those in tech, because they do present a real threat to many people in our
societies, to people who have far less power than they do to make the decisions that govern
all of our lives. So this is something that we need to be
very aware of. And it also shows why adopting a Luddite politics, a politics that is
very much in opposition to these people is so much more important as their power and their
influence continues to grow. And as they continue down this road to not just the right wing, but
the far right, and the threat that
obviously poses to a lot of people. So as I said, it was great to talk to Émile again. I don't think
I really need to set this up anymore. This is a bit of a longer conversation, but I think that
you're going to enjoy it, especially as the year starts to wind down. I will also note that I did
record this one in a hotel room, so there might be some occasional background noise.
Hopefully, I've been able to minimize it. Fingers crossed it doesn't distract you too much from this conversation.
Before we get into the show, there are a couple of things you might want to know about because, I don't know, I think they're pretty exciting, but I'm biased, of course.
First of all, you might remember last year we did a live stream with some guests to talk about the end of the year and, of course, release that as a podcast episode at the end of the year for all of you as well.
Well, we're doing that again this year. And so on December 17th at 1 p.m. Pacific, 4 p.m. Eastern, 9 p.m. GMT, I'll be speaking with Gita Jackson, Molly White, and Aaron Thorpe about the year in tech, about all the biggest stories, you know, about the worst people in the industry and, you know, what we're looking forward to in 2024. And I'm sure there will be more in there
as well. Now that is a live stream for Patreon supporters. So if you do support the show on
patreon.com, you'll be able to have access to that. If you don't, you'll still be able to hear
our conversation. It will just be released later, you know, the last episode of the year on the
podcast feed with just the audio, if you want to enjoy that. But of course, if you want the full experience, if you want it earlier
than everybody else, you can join on patreon.com. And one thing that we'll be doing on that is you
might also remember last year, I ran a bracket called the worst person in tech where you all
voted for the person who was the worst and Peter Thiel won. Well, on Wednesday of this week, we
started this year's version of that bracket. So that is ongoing right now as you hear me talk
about this. And every single day, we'll be narrowing it down until Sunday when on that
live stream, I'll be announcing the winner of the worst person in tech based on your votes.
And then of course, you know, I'll announce it on social media for those of you
who are not in the live stream as well. So just a couple of fun things to kind of close off the year
that you might want to participate in. If so, you can join as a Patreon supporter to join the live
stream. If not, you'll hear the audio on, you know, the podcast feed. And then of course,
if you want to participate in the voting for the worst person in tech, just go find Tech Won't Save Us or me on whatever social media platform you prefer. And if you do want to support the work that goes into making this show possible, so I can have these, you know, critical
conversations so you and others can keep learning about this particular side of the tech industry
that, you know, doesn't get the attention that it deserves. Consider as the year winds down as 2023
comes to a close, joining supporters like Brian from Ottawa, Jake in Mission Viejo, California,
Ramon from Mexico City, and Matt from Edinburgh by going to patreon.com slash techwon'tsaveus
and becoming a supporter yourself. Thanks so much and enjoy this week's conversation.
Émile, welcome back to Tech Won't Save Us.
Thanks for having me. It's great to be here.
Absolutely. I was looking back through the history of the show. Obviously,
you were in the Elon Musk series that we did recently. And I was shocked to see that, you know, you hadn't been on like a
proper regular episode of the show since like over a year ago. And I was like, okay, we need to change
that. And there are some things that we definitely need to talk about. So happy to have you back on
the show for us to dig into all this. Thanks for inviting me. Yeah, of course. This is where like,
I force you to thank me for saying nice things about you.
I do this.
I do this to everybody.
When you were on in the past, we talked about, you know, these concepts that have become,
I guess, you know, quite familiar to people at this point, you know, long-termism and effective
altruism.
Obviously, when we were talking about Sam Bankman-Fried and all this crypto stuff, like these
ideas were kind of in the
air, were kind of seeming to become more popular through, you know, the pandemic moment and
everything that was going on then. But, you know, you have been writing about these further
with Timnit Gebru, who of course was on the show earlier this year. And you talked about this kind
of broader set of ideologies called TESCREAL, you know, which is obviously an abbreviation. I was wondering if you could
talk to us about what this bundle of ideologies is, what that acronym stands for,
and then we can go from there. Yeah, sure. So the acronym TESCREAL
stands for a constellation of ideologies that historically sort of grew out of each other. So consequently, they form a kind of
single tradition. You can think of it as a wriggling organism that extends back about 30
years. So the first term in the TESCREAL acronym is transhumanism. And in its modern form, it was
founded in the late 1980s, early 1990s. So it goes back about 30 years. And it's kind of hard to talk about any one of these
ideologies without talking about the others. They shaped each other, they influenced each other,
the communities that correspond to each letter in the acronym have overlapped considerably over time.
Many of the individuals who've contributed most significantly to the development
of certain ideologies also contributed in non-trivial ways to the development
of other ideologies. So the acronym itself stands for a bunch of big polysyllabic words,
namely transhumanism, extropianism, singularitarianism, cosmism, rationalism,
effective altruism, and long-termism. Yeah, so these ideologies are intimately linked in all sorts of ways, and
they all have become, if not in their current forms, influential within Silicon Valley.
They're sort of legacies. Their core ideas and central themes have been channeled through other
ideologies like long-termism and effective altruism, rationalism, and so on, that are currently
quite influential within Silicon Valley. There are many people in the San Francisco Bay Area,
et cetera, in big tech who would explicitly identify with one or more of these ideologies.
That's what the TESCREAL acronym is. And the ordering of these different ideologies in the acronym
corresponds chronologically to their emergence over time.
So transhumanism, that's really like the backbone of the TESCREAL bundle. Long-termism could be
thought of as something like the galaxy brain that sits atop, because it binds together all
sorts of important ideas and key insights from other ideologies to present, to articulate
a comprehensive worldview, or what you might call a normative
futurology, claims about what the future could and should look like that has been championed
by people like Elon Musk and so on and so on.
Yeah.
So we'll talk about their normative futurologies through this episode.
But I guess for people who hear you kind of name off those terms, I think
let's briefly kind of go through them just so it's clear kind of what we're talking about. Like
transhumanism, I think is quite obvious, right? This idea that we're going to enhance the human
species with technologies, kind of merge human and machine. These are ideas that I think we've
heard of and that have been around, as you say, for a while. So this will not be new to people.
Extropianism, I feel like might be a word that is a bit less familiar. What would that mean?
Right. So that was the first organized transhumanist movement. So its emergence
roughly coincides with the establishment of modern transhumanism. So really like very early
1990s. In fact, the founder of the Extropian movement,
a guy named Max More, whose name was originally Max O'Connor, but like many Extropians,
he changed it to better reflect his Extropian transhumanist worldview. Another example:
his wife is Natasha Vita-More, with a hyphen between Vita and More, so "more life."
Yeah, there are a bunch of other examples that are somewhat humorous.
But yeah, so extropianism, it was a very techno-optimistic
interpretation of the transhumanist project.
There was a strong libertarian bent within that movement,
you know, belief that free markets are the best way forward
in terms of developing this technology in a
safe and promising way. In fact, Ayn Rand's Atlas Shrugged was on the official reading list
of the Extropian movement. So there was this Extropy Institute that Max More founded.
And part of the reason that the Extropian movement was sort of the first platform for transhumanism
and, you know, really established transhumanism, put it on the scene in Silicon Valley. Part of
that was because of the Extropian mailing list. So they had this listserv where people from all
over the world could contribute. This is how Nick Bostrom and Eliezer Yudkowsky, who's, you know,
a leading rationalist and one of the main AI doomers
out there today.
Anders Sandberg, Ben Goertzel, who maybe we'll talk about in a moment because he's the founder
of modern cosmism.
All of these individuals were able to make contact, you know, cross-pollinate ideas to
sort of develop the transhumanist vision of what the future ought to be.
This is basically just Ray Kurzweil, right?
Like, this is the idea that, you know, we're going to have the computers kind of reach the point
where they gain this human intelligence and, like, we kind of merge. I guess it's kind of similar to
transhumanism in some ways, right? Yeah, exactly. So I would say that the next three letters in the
acronym of TESCREAL, those are just
variants of transhumanism with different emphases and maybe slightly different visions about
what the future could look like.
But ultimately, they are rooted in the transhumanist project: this aim to develop advanced technologies to radically re-engineer the human organism.
So with singularitarianism,
the emphasis is on the coming technological singularity. There's a couple different definitions of that. For Kurzweil, it's about humans merging with AI, radically augmenting
our cognitive capabilities, becoming immortal, and so on. And ultimately, that will accelerate
the pace of technological development to such a degree that beyond this point,
the singularity, we cannot even comprehend the phantasmagoria of what the world will become.
It'll involve dizzyingly rapid change driven by science and technology. So it's continuing with
the metaphor of the singularity, which is taken from cosmology. There's sort of an event horizon
beyond which we cannot see.
And so that'll mark a new epoch in cosmic history. I think it's the fifth of six epochs
he identifies in his grand eschatology, his grand view of the evolution of the cosmos as a whole. And the sixth epoch, that culminates with us or our, you know,
our cyborg-ish or purely artificial descendants spreading into space.
So there's this colonization explosion and the light of consciousness
then being taken to, you know, all the far reaches of the accessible universe.
And ultimately, as he puts it, the universe wakes up.
So this is singularitarianism. And in fact, the term singularitarian, that was coined by
extropians, but in particular by a guy named, I think his name is Mark Plus. So another guy
who changed his last name. I feel like though, when you talk about Kurzweil, he's the kind of
person who reminds me of those people who are always predicting the apocalypse is going to come, like really religious folks. He reminds me of someone like that, but it's a secular techno-religion, constantly predicting that the singularity is going to happen and it's going to happen. And then the date just keeps getting pushed because it doesn't happen, because it's basically a religious belief, not something that's founded in anything concrete.
Yes, I would completely agree. The connections between singularitarianism and transhumanism, more generally, the connections between those things and traditional religion, like Christianity,
are really significant and extensive. It's not uncommon for people to describe something as a religion, an ideology,
a worldview, and so on, as a religion in a way to denigrate it, right? That's just sort of a facile
means of criticizing something. But in this case, the connections really are quite significant.
So transhumanism itself: modern transhumanism emerged in the late 1980s, early 1990s.
But the idea of transhumanism, that goes back to the early 20th century.
And the reason I mentioned this is that it was proposed initially and explicitly as a
replacement for traditional religion.
So Christianity declined significantly during the 19th century.
And if you look at what people were writing back in the latter 19th century,
early 20th century, they were reeling from the loss of the meaning, the purpose,
the eschatological hope, hope about the future, all of that was gone. So you have basically a
bunch of atheists who are searching for something to fill that void. And transhumanism was proposed as a solution to this problem.
So through science and technology, we could potentially create heaven here on earth. Rather
than waiting for the afterlife, we'll do it in this life. Rather than heaven being something
that happens in the other world, it being otherworldly, we create it in this world
through human agency rather than relying
on supernatural agents. So it's very religious. And in fact, a number of people have, in a critical
mode, described the technological singularity as the techno-rapture. And you're totally right
that consistent with all of this, Kurzweil himself, as well as other transhumanists and
extropians, they've proposed their own prophetic dates for when the singularity is
actually going to happen. According to Kurzweil, the singularity will happen in 2045.
It reminds me of like Elon Musk predicting self-driving cars every few years, right? Like
it's the same sort of thing. But, you know, I think we'll come back to this point about religion,
but, you know, moving through the acronym, you know, cosmism, I think that is something that
is probably quite familiar
to people as well, right? Is this basically what Elon Musk is proposing when he says, you know, we
need to extend the light of humanity to other planets, and this idea that we kind of need to
colonize space in order to advance the human species? Yes, definitely. So, you know, I'm not sure about the extent to which Musk is conversant with
modern cosmism. But nonetheless, the vision that he seems to embrace, this vision of us spreading
into space and expanding the scope and size of human civilization, that is very, very consistent
with the cosmos view. So, cosmism itself, I mean, this goes back
to the latter 19th century. There were a bunch of so-called Russian cosmists. But at least with
respect to the acronym, you know, Gebru and I are most interested in cosmism in its modern form.
So, as I mentioned before, I mean, this was first proposed by Ben Goertzel, who's a computer
scientist, transhumanist, was a participant in the
Extropian movement, has close connections to various other letters in the acronym that we
haven't discussed yet. But he was also the individual who popularized the term artificial
general intelligence. So there's a direct connection between modern cosmism and the
current race to build AGI among companies like OpenAI,
DeepMind, Anthropic, XAI, and so on. So that's part of the reason why I think modern cosmism
is included in the acronym. We sort of felt like if we didn't have the C in TESCREAL,
something notable would be missing. And cosmism basically, it goes beyond transhumanism in imagining
that we use advanced technologies, not just to re-engineer the human organism, not just to
radically enhance our mental capacities to indefinitely extend our so-called health span,
but also then to use this technology to spread into space, re-engineer galaxies,
engage in what Goertzel refers to as space-time engineering. So actually intervening on sort of
the fundamental, you know, the fabric of space-time to manipulate it in ways that would suit us,
that would, you know, bring value, what we would consider to be value into the universe.
So that's the notion of cosmism. And really, it doesn't take much squinting for the vision of what the future should look like,
according to cosmism, to look basically indistinguishable from the vision of what
the future ought to look like from the perspective of long-termism. There's sort of a slightly
different moral emphasis with respect to these two ideologies.
But in practice, what they want to happen in the future is basically the exact same.
Yeah, that's hardly a surprise.
And if we're kind of moving through the acronym still, rationalism would be the next one.
And I feel like kind of going back to what we were saying about religion, correct me
if I'm misunderstanding this, but it's kind of like, you know, we are not appealing to religious authority
or some higher power to justify kind of our beliefs or our views in the world or what we're
arguing, but rather we're referring to science and these things that are observable. And so
you can trust us because I don't know, we're engineers and scientists and blah, blah, blah.
Like, is that, is that the general idea? Or does it go a bit beyond that?
Yeah, so that is definitely part of it.
Even though many people, including individuals who are either in the rationalist community
or were members of the community and then left, have described it as very cultish.
And part of the cultishness of rationalism is that it has these charismatic figures like
Yudkowsky, who are considered to be exceptionally intelligent.
I saw some posts on the rationalist community blogging website, Less Wrong.
So that's sort of the platform out of which rationalism emerged.
And it was founded by Yudkowsky around 2009. Somebody had pointed out that a lot of people are afraid
of questioning Yudkowsky or others who supposedly have these really high IQs,
for fear of appearing unintelligent, embarrassing themselves, and so on.
There is a sort of irony that it's all about rationality and thinking for yourself,
not just following authority, but it is very hierarchical. I would say the exact same thing about EA. It's also very cultish. And
I don't use that word loosely here. I could provide 50 examples, most of which are
jaw-dropping. And at the end, it's just impossible for someone to look at these examples
and say, no, EA is not cultish
or rationalism is not cultish.
No, it is very much a cult.
And yeah, I mean, really the main, the core idea with rationalism.
So it was founded by this transhumanist who participated in the Extropian movement,
who also was a leading singularitarian along with Ray Kurzweil.
I'm referring here to Eliezer Yudkowsky.
So him and Kurzweil were leading singularitarians. In fact, Kurzweil, Yudkowsky, and Peter Thiel
founded the Singularity Summit, which was held annually for a number of years and included
speakers like Nick Bostrom. I mentioned Anders Sandberg before, as well as individuals like
Ben Goertzel. So all of these people are, you know, they're all part of the same social circles.
So ultimately, what motivated rationalism is this idea that if we're going to try to
bring about this utopian future in which we re-engineer the human organism, spread into
space, create, you know, this vast multi-galactic civilization, we're going to need a lot of, quote unquote,
really smart people doing a lot of, quote unquote, really smart things. One of these smart things is
designing an artificial general intelligence that facilitates this whole process of bringing about
utopia. And so consequently, if smartness is absolutely central, if intelligence is crucial
for the realization of these normative futurologies
and the utopian vision at the heart of them, then why not try to figure out ways to optimize
our rationality? Identify cognitive biases, neutralize them, use things like Bayes' theorem,
for anybody out there who's familiar with that, and tools from decision theory to figure out how to make decisions
in the world in the most optimally rational way. So ultimately, that sounds like, I think from a
distance, it might sound good, right? Because nobody wants to be irrational. Nobody sets out
to be irrational. But when you look at the details, it turns out it's just deeply problematic. I mean, it's based on a narrow understanding of what rationality means.
It's telling that Yudkowsky once argued in one of his LessWrong posts that, well, let's imagine you have to choose between two scenarios.
In the first scenario, there's some enormous number, just unfathomable number of individuals who suffer
the nearly imperceptible discomfort of having a speck of dust in their eyes. The other scenario,
the second scenario, is a single individual who is tortured relentlessly and horrifically for 50
years straight. Which scenario is worse? And well, if you are rational and you don't let your emotions influence your thought process,
and if you do the math, or, as Yudkowsky calls it, if you just shut up and multiply,
then you'll see that the first scenario, the dust speck scenario, that's worse.
Because even though the discomfort is almost imperceptible, it's not nothing.
And if you multiply not nothing by some enormous number,
that's how many people have this experience, then that number is itself enormous.
So compared to 50 years of horrific torture, the dust specks are actually much worse.
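To make the arithmetic being described here concrete, here is a minimal sketch of that shut-up-and-multiply style of calculation; the specific numbers are arbitrary placeholders chosen for illustration, not figures from the episode or from Yudkowsky's post.

```python
# A toy version of the aggregative "shut up and multiply" reasoning described
# above. All numbers are made up for illustration: the point is only that any
# nonzero harm, multiplied by a large enough population, outweighs one person's
# enormous harm on a purely additive view.

dust_speck_harm = 1e-9   # nearly imperceptible discomfort per person (arbitrary units)
torture_harm = 1e12      # one person tortured for 50 years (arbitrary units)
num_people = 10**30      # "some enormous, unfathomable number" of people

total_dust_speck_harm = dust_speck_harm * num_people  # 1e21

# On this view, the dust speck scenario comes out as the worse one.
print(total_dust_speck_harm > torture_harm)  # True
```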
That, I think, exemplifies their understanding of what rationality is and the sort of radical
extremist conclusions that one can end up at if one takes seriously this sort of rationalist,
so-called rationalist approach to making decisions. And so I think what you have kind of laid out
there for us really shows us how all of these pieces, as you were saying, come together in
long-termism at the
end of the day, right? That kind of really kind of mathematical view of the population and like,
you know, how you're calculating the value of individuals and stuff in the end. But also,
you know, we want to spread out to multiple planets and we want to ensure that we have
people who are digital beings living in computers on these different planets, because that's equal to, you know, actual kind of flesh and blood people that we consider people
today. And so like all of these kind of, I think we would consider odd kind of ideological
viewpoints kind of building over the course of several decades to what we're seeing today.
And I don't think we need to really go through effective altruism and long-termism because we've
done that in the past. And I think the listeners will be quite familiar with that. Before we talk about kind
of further evolutions of this, what I wanted to ask you was, you know, once you started writing
about TESCREAL and once you started putting all of this together, like what do you think that
tells us about kind of the state of the tech industry and the people often, you know, powerful, wealthy men who are,
you know, kind of spreading these ideas and becoming kind of obsessed with this view of
the world or how they understand the world? Like, what does that tell us about them and
the tech industry as a whole, that this is what they've kind of coalesced around?
Yeah, a couple of things come to mind. One is that there are many individuals, including people in positions of power and
influence within Silicon Valley, especially within the field of AI or AGI, artificial general
intelligence, who started off as effective altruists or long-termists. Just as a side note,
in the media, there's a lot of talk about EA. It was two individuals on the board of directors of OpenAI who were pivotal in ousting Sam
Altman and so on.
In this context, EA is really shorthand for EA long-termism because EA is kind of a broad
tent.
And there are a bunch of EAs who think long-termism is nuts and want nothing to do with long-termism.
But the community as a whole and a lot of its
leaders have been moving towards long-termism as its main focus. There are a bunch of individuals
who gained positions of power within AI and Silicon Valley and so on and started off being
effective altruists, in particular, long-termists. And then there are a number of people, like probably Musk is perhaps
a good example, who did not start off as long-termists. They couldn't. I mean, the term
was only invented in 2017. And the history, the origins of long-termism, go back about 20 years.
And Musk probably wasn't all that familiar with that literature. But nonetheless, they have this
idea about what they want to do,
which is colonize space, merge humans with AI and so on. And then sort of turned around,
notice that there's this ideology that's on the rise called long-termism that provides a kind of
superficially plausible moral justification for what they want to do anyways. So Musk wants to
colonize space. Maybe it's just like, you know, sort of boyhood fantasies of becoming multi-planetary,
you know, saving humanity by spreading to Mars and so on. And then he looks at it and goes,
oh, the long-termists are developing this quote-unquote ethical framework that says what
I'm doing is arguably the most moral thing that one could possibly be doing.
So I think that sort of gestures at one thing that this bundle of ideologies sort of reveals about
Silicon Valley. It's a very quantitative way of thinking about the world. The fact that this
particular utopian vision appeals to these individuals, I think does say a lot about them.
Because this vision was crafted and designed almost entirely by white men, many of whom are
at elite universities, namely Oxford, mostly at Oxford. And the vision is deeply capitalistic.
I mean, some people described it as capitalism on steroids. It's also very Baconian in the sense that it embodies an imperative that was articulated
by Francis Bacon, who played a major role in the scientific revolution, sort of on the
philosophical side of that, where he argued that what we need to do is understand nature.
So he's arguing for empirical science, understand nature.
Why? Because once we understand nature, then we can subdue, subjugate, and ultimately control,
conquer the natural world. The long-termist vision is very capitalistic. It's very Baconian. It's all
about subjugating nature. In fact, a central concept within the long-termist tradition is that
of existential risk, which is defined as basically
any event that would prevent us from realizing this type of utopian world in the future among
the stars, full of just astronomical amounts of value by virtue of there being astronomical
numbers of future individuals. And, you know, Bostrom himself offered a more nuanced definition in a 2013 paper, where he said that an existential risk is any event that prevents us from reaching technological maturity.
What is technological maturity?
It's a state in which we've fully subjugated the natural world and we've maximized economic productivity to the physical limits.
Why does that matter? Why is technological maturity important? Well, because once we've
fully subjugated nature and maximized economic productivity, we'll be optimally positioned
to bring about astronomical amounts of quote-unquote value in the future. All of this
is to say that the utopian vision is, I think, just deeply impoverished.
It's just capitalism on steroids. It's just Baconianism to the extreme. It very much is
just an embodiment of certain tendencies and values that were developed within the Western
tradition and embody a lot of the key forces that have
emerged within the West, like capitalism and science and so on. One of the most notable things,
as I've written before about the TESCREAL literature, is that there's virtually zero
reference to what the future could and more importantly, should look like from the perspective
of, for example, Afrofuturism or feminism, queerness,
disability, Islam, various other indigenous traditions, and so on. There's just no reference
to that. It's just a very Western white male kind of view of what the future ought to look like.
I think that's another reason that this cluster of ideologies is so appealing to tech billionaires. And by virtue of it being
so appealing, it reveals something about those tech billionaires themselves. This is the world
that they ultimately want to bring about and want to live in. Yeah, it's a real view from the top of
the pyramid, right? From the people who have kind of won the game of capitalism and want to ensure
they can remain in their positions and not be
challenged and what have you. And just to add to what you were saying, like you've talked about
how it is kind of like capitalism on steroids. And you see this in the writings and kind of what
these people are speaking about when they promote these ideas, people like Mark Andreessen or Elon
Musk or, you know, what have you. And I feel like they have over time become much more explicit
about it, which is how, you know, they believe that to realize this future or to make a better
future for everybody when they do kind of make reference to people beyond themselves, is that
technology fused with capitalism or with free markets is what is essential to achieve that,
right? To basically say the government needs to step out of the way,
the government can't be regulating us
or trying to halt the technological future
that we're trying to achieve
because that is ultimately not just bad for us,
but bad for everybody.
And I feel like this piece of it,
like obviously the idea of using technology
to dominate nature and things like that
have been around for a really long time.
But I feel like this particular piece of it, or this particular aspect of it,
is potentially much more recent as well, right? Like, if you think about what people who were
thinking about technology or developing technology might have thought in, like, I don't know, the
first half of the 20th century, or even a little bit after that, like, there was a very strong
relationship between technology being developed and, like, the state, right? And the role that the state played in funding it. And then those ideas shift most notably in the 1980s in particular, where all of a sudden the state is the enemy and the tech industry is pushing back against the hierarchies of the corporation and the government and the bureaucracy and all this kind of stuff. And like, I don't know, I think you can just see like these ideas taking root in that moment or kind of reemerging in a particular form that now kind of continues to
evolve over the course of many decades to the point where we are today, where we have these
people who are like the titans of industry, who are at the top of this industry that has really
taken off since the internet boom in particular, and who now feel that they are the smartest people in
the world, the most powerful people in the world, that they know what the future should look like
and how to develop a good world for humanity. And so naturally, it needs to be these ideas that
also kind of elevate them and make it so that their position is justified ideologically within
the system so that they are not challenged and they are not going to be pushed out of the way for some other kind of form of power to take their place.
Yeah, exactly. Another thing I've pointed out in some articles, and this ties back to something I
was mentioning a few moments ago, which is that long-termism and, you know, kind of just the TESCREAL ideologies in general, they not only say
to the rich and powerful that you are morally excused, you have a moral pretext for ignoring
the global poor, but you're a better person. You're a morally better person for focusing on
the things that, you know, you're working on because, you know, there's just astronomical
amounts of value that await in the future, amounts of value that utterly dwarf the total amount of value that exists on Earth today, that has ever existed for the past 300,000 years since Homo sapiens has been around.
And consequently, it's like lifting 1.3 billion people out of multidimensional poverty.
That is, in absolute terms, a very good thing. But relative to the amount of value that could
exist in the future, that is a molecule in a drop in the ocean. You know, if you're rational,
if you're smart. As these people most definitely are very rational, very smart.
Yes. The most effective way of being an altruist, then, is to do what you're doing anyway: try to
merge our brains with AI, try to get us into space, and try to build AGI.
It's wild stuff. Yeah, now we need to put a cap on this pyramid that we've been building of these ideologies, right? Because,
you know, I think it won't be a surprise to any listeners of this show that the tech industry and
the leaders of the tech industry have really been, like, blackpilled, and, like, their brains are filled with brain worms in the
past, you know, few years. Like, they have been intensely radicalized publicly in a way that
maybe some of them would have said this stuff, like, in private in the past, right? And obviously,
you mentioned Peter Thiel earlier. He has long been kind of one of the pushers of right-wing ideology,
and quite radical right-wing ideology, within Silicon Valley for quite a long time. But I think to see
so many of these people speaking out and kind of echoing these right-wing conspiracy theories and
these right-wing kind of ideas is more novel, not in the sense that they've never been right-wingers,
but that they have adopted quite an extreme right, a hard right, even a far right kind of perspective on the world that they are championing more and more directly.
And I feel like you can see that most evidently in this embrace of what they're calling effective accelerationism or techno-optimism in the past few months.
How would you describe this idea of effective accelerationism and how does it build on these kind of existing TESCREAL ideologies that we've been talking about already? Or is it different than them at all, just giving it a fresh coat of paint?
I think the best way to understand effective accelerationism: the acronym is E slash ACC, which they pronounce EAC.
Yeah, and they love to stick it in their Twitter bios, or sort of their X bios, and, you know, really champion it. It's very fashionable right now.
Yeah. And I think one thing to say about it is like, maybe distinct from TESCREAL is like,
you know, these are particular ideologies as we've been talking about that kind of
maybe have this kind of philosophical grounding to them. Certainly there is that with effective
accelerationism, but as you talk about with the E slash ACC, you know, EAC, I feel like one thing that is maybe potentially distinct about
this is that it does seem designed in particular for meme culture and to try to like, you know,
kind of tap into this to a certain degree. You know, I don't know how effective that part of
it has been, but it does seem like the idea is like, this is something that needs to go up on
social media and we need to make something that's like appealing and easy to understand for
people to really kind of divide people into like the us and them. And this idea seems to be trying
to pick up on those sorts of things. But I wonder what else you see there.
Yeah. I mean, I think one way to understand EAC is as just a variant of TESCREALism. So there are a bunch of points to make here.
One is, you had mentioned just a few minutes ago that a lot of the champions of the TESCREAL
bundle or TESCREAL ideologies have a strong sort of libertarian proclivity, we could say.
And that's true, but actually, I think this gets at a key difference between EAC and EA,
by which I mean long-termism or the long-termist community, which I think their main disagreement
concerns the role of government. So EA long-termists, a lot of these individuals,
like people who helped to shape the long-termist ideology,
going back to the Extropians, were very libertarian. There's this notion that the best way to move
technological progress forward and ultimately to realize the transhumanist project is for the
state to get out of the way. Then you had some individuals in the early 2000s who started to
realize that actually some of these technologies, the very technologies that will get us to utopia, that will enable us to re-engineer humanity and
colonize space, those technologies may introduce risks to humanity and to our collective future
in the universe that are completely unprecedented. So some of them began to take seriously this idea
that maybe the state actually does have a role and there should be some sort of regulations. And so one expression of this is the writings of Nick Bostrom, who points out that
the mitigation of existential risk, again, any event that prevents us from creating utopia,
that mitigation, that is a global public good. In fact, it's a transgenerational global public good. And
those sort of public goods oftentimes are neglected by the market. The market just isn't good at
properly addressing existential risk. Well, if the market isn't good at that, then maybe you need the
state to intervene and to properly regulate industries and so on to ensure that an existential catastrophe doesn't
happen. So this realization has sort of defined the EA long-termists, rationalists, and so on,
that sort of tradition of thinking. Basically, they're libertarian about everything except for
some of these super powerful speculative future technologies like AGI, molecular nanotechnology
would be another example. That's where the state should intervene. But maybe that's the only place
where the state should intervene. Right. So I guess that this is like kind of a split that
we're seeing in particular with kind of this AI hype machine that we've been in for the past year,
where, you know, and we just saw it kind of play out very clearly with the OpenAI stuff, where on the one hand you have these people who call
themselves like the AI safety, I believe is the term they use, kind of people who believe that,
you know, AGI is coming and, you know, we're building these tools that are going to have
these computers kind of reach human level intelligence or beyond. But, you know, we need
to be scared of this and we need to be regulating it and we need to be like concerned about what this future is
going to be. And then on the other hand, you have these like AI accelerationists who feel
take off all the guardrails we need to plow through because ultimately this AI is going to
be great for society, even though it like has some risks. And so I guess you also see that in
just the term effective accelerationism,
where the idea, and of course, you know, you hear Sam Altman talk about these things,
Marc Andreessen has expressed it in his techno-optimism manifesto. It's kind of like,
don't put any rules and restrictions on us because this is going to be the most amazing
thing for humanity if you just allow us to continue developing these technologies,
push forward even faster, you know, accelerate them basically is what they would say. And this
is the way, you know, through the market, not through any government involvement that we're
going to improve society. So is this kind of the schism that's really kind of playing out?
And is this part of the reason that we have kind of the elevation of this effective accelerationism
in this moment in particular, because of this AI divide that is playing out and how prominent this has become.
Yeah, I think there are two things to say here. One is, insofar as there are existential risks
associated with AGI, what is the best way to effectively mitigate those risks? The accelerationists
would say it's the free market.
You fight power with power. If you open source software, enable a bunch of different actors to develop artificial general intelligence, they're going to balance each other out. And so if there's
a bad actor, well, then you've got five good actors to suppress the bad actions of that other actor. And so the free market is going to solve that problem.
The second thing then is an assessment about the degree to which AGI poses an existential risk
in the first place. So not only would they say, okay, the best way to solve existential risk is
through the free market, through competition, fighting power, pitting power against power. Many of them would also say, actually, the existential risk associated with
AGI, they're really minuscule anyways. A lot of the claims that have been made by Bostrom
and Yudkowsky and other so-called doomers, they're just not plausible, they would argue.
And so one of the arguments, for example, is that once you have a sufficiently
intelligent artificial system, it's going to begin to recursively self-improve.
So you'll get this intelligence explosion.
And the reason is that any sufficiently intelligent system is going to realize that for a wide
range of whatever its final goals might be, there are various intermediate goals that are going to
facilitate it satisfying or reaching those final goals. And one of them is intelligence augmentation.
So if you're sufficiently intelligent, you're going to realize, okay, you know, I want to,
you know, say, I just want to get across town, or I want to cure cancer or colonize space or
whatever. If I'm smarter, if I'm better at problem solving, if I'm more competent,
then I'll be better positioned to figure out the best way to get across town,
cure cancer, colonize space, whatever. Once you have a sufficiently intelligent system,
you get this recursive self-improvement process going, a positive feedback loop.
And so this is the idea of FOOM. We create AGI and then, FOOM, it's, you know,
this wildly super intelligent system in a matter of, you know, maybe it's weeks or days, maybe
it's just minutes. And so a lot of the EAC people think that FOOM is just not plausible. It's just
not going to happen. So consequently, they would say, actually, the argument for AGI existential
risk is kind of weak because a key part of it,
at least, you know, in certain interpretations is this FOOM premise. Well, the FOOM premise is not
plausible, therefore the argument sort of fails. So I think those are, you know, two, I think,
important ideas. I guess just to cut in here, I would say, you know, to make it a bit more
concrete, you know, I guess what you're talking about is, as you say, on the one hand, you have
the kind of AI doomers as the kind of effective accelerationists would call them,
basically saying like, oh my God, the AGI is going to be so powerful, it's going to be a threat to
all of us. And as you're saying, they would say, no, not really. But then, you know, Andreessen,
for example, would build on that and say that the AI is not a threat because it's actually going to
be able to be an assistant for all of humanity that's going to make it easier for us to do a whole load of things. You know, thinking back
to what Sam Altman said about it being your doctor or your teacher or whatever. But Andreessen,
of course, goes further than that and says there will be these AI assistants in every facet of
life that will help you out. And then, of course, there's also the kind of, I think, more ideological
statement. Like if people go back and listen to my interview with Emily Bender, where exactly as you were saying, they're saying that the AGI will allow
us to enhance human intelligence or augment human intelligence because intelligence, whether it's
computer or human, is the same thing. So as soon as we have computer intelligence, that's increasing
the total amount of intelligence in the world. And so once we have more intelligence, everything is better off and we're all better off
and everything is just going to be amazing if we just let these computers like improve over time
and we don't hold them back. And of course, you know, the other key piece of that there,
when you talk about the free market is Andreessen's argument that we can't have regulation because the
regulation is designed by
the incumbents to ensure that they control the AI market. And as you're saying, we can't have
these kind of competitors developing their own AI systems. You know, if you think about
Andreessen as a venture capitalist, you know, you can see kind of the kind of material interest that
he would have in funding companies that could potentially grow instead of having it dominated
by kind of existing monopolies like Google or Microsoft or whatever. But yeah, I think
that's just to make the points that you're saying more concrete and to show themselves in the
ideologies that these people have. But, you know, happy if you have further kind of thoughts on that.
Yeah, no, I think that's exactly right. You know, so both EAC and long-termists,
or traditional TESCREALists, you might call them, both of them are utopian. Andreessen says he's not utopian, but if you look at what he writes about what awaits us in the future, it's utopian. I can just read it for you, where he says, we believe the ultimate moral defense of markets is that they divert people who otherwise would raise armies and start religions into peacefully
productive pursuits. And it's like, but almost every line in his manifesto starts with, we
believe, we believe, right? It's this faith-based argument that techno-optimism is this thing that's
going to make everything better if we just believe in like the tech founders and the AGI or whatever. Yeah, totally. And so the EAC people themselves have described their view as basically
a religion. It's like one of them said something like, you know, it's spirituality for capitalists.
You know, to put it differently, it's like capitalism in a kind of, you know, spiritual
form. It feels like going back to what you were saying earlier, when you were talking about
how, you know, there were these atheists who were like seeking out some sort of kind of
religious experience or some sort of spirituality and found like transhumanism or, you know,
these other ideologies to kind of fill this hole that existed within them.
And it very much feels like, obviously, I think there are some other incentives for
people like Marc Andreessen, Sam Altman, Elon Musk. But I think that there's also, especially when you think about how these ideas
or these ideologies get a kind of broader interest from the public or from people
in the tech sector or whatever, you can see that there's also this kind of yearning for a grander
kind of narrative or explanation of our society, of humanity, of our future,
or whatever.
Yeah, totally.
And so maybe it's worth kind of tying together a few ends here.
You know, so both of these views are deeply utopian, techno-utopian.
So they think the future is going to be awesome, and technology is the vehicle that's going
to take us there.
The long-termists have more of an emphasis on the apocalyptic side of that. So maybe with these
technologies, actually, the default outcome of creating AGI isn't going to be utopia,
everything being awesome. Actually, it could be catastrophe. And so this apocalyptic aspect of
their view, that links to the notion of libertarianism.
So this is where the state plays a role to enable us to impose regulations to avoid the apocalypse,
thereby ensuring utopia. So one is more techno-cautious and the other is more
quote-unquote techno-optimist. I mean, it's more like sort of techno-reckless, something like that.
I mean, they just judge the existential risk to be very low and think, yeah, it's all going
to be fine because the free market's going to figure it out.
And the more apocalyptic long-termists are saying, no, actually, we shouldn't count on
everything being fine.
Actually, the default outcome might be doom.
And it's really going to take a lot of work to figure out how to properly design an AGI
so that we get utopia
rather than complete human annihilation.
The two sides of AGI also find expression
in the term godlike AI,
which is being used by long-termists
and doomers and so on,
as well as people like Elon Musk
who refer to AGI as summoning the demon.
So the idea is basically AGI is either
going to be a God who loves us, gives us everything we want, lets us live forever,
colonize space, and so on. Or it's a demon that's going to annihilate us. And so, yeah, and again,
the EAC people are like, no, no, no, all of that's just kind of science fiction stuff.
What's not science fiction is utopia, though. I think this is really the key difference between EAC and EA
long-termism. And beyond that, I think that is the most significant difference. There are minor
differences, then, in terms of their visions of the future. So, you know, long-termists care about
value in a moral sense.
So maybe this is like happiness or it's knowledge or something of that sort.
And drawing from utilitarian ethics, they argue that our sole moral obligation in the universe...
Well, if you're a strict utilitarian, it's the sole moral obligation.
Maybe you're not a strict utilitarian.
You could still say that a very important obligation we have is to maximize value. And this notion of value maximization,
which is central to utilitarianism, I should point out utilitarianism historically, that emerged
around the same time as capitalism. And I don't think that's a coincidence. Both are based on
this notion of maximization. For capitalists, it's about profit; for utilitarians, it's this
more general thing, just value, happiness, something like that. That's what you need to
maximize. Whereas for the EAC people, they're not so concerned with this metric of moral value.
They care more about energy consumption. And so a better civilization is one that consumes more energy.
And they root this in some very strange ideas that come from a subfield of physics called
thermodynamics. And so if you read some of the stuff that the EAC people have written,
they frequently cite this individual named Jeremy England, who is a theoretical physicist at MIT.
And he has this legitimately and scientifically interesting theory that the emergence of life
should be unsurprising given the laws of thermodynamics. So basically, you know,
living systems, matter just tends to organize itself in ways to more optimally dissipate energy.
And that's consistent with the second law of thermodynamics.
Don't need to go into all the details.
But basically, the EAC people take this to an extreme.
And they say, okay, look, the universe itself, in accordance with the laws of thermodynamics, is moving towards a state of greater entropy.
Living systems play a role in this because what we do
is we take free energy and we convert it into unusable energy, thereby accelerating this
natural process of the universe itself heading towards a state of equilibrium. And so the goal
then is that if you create bigger organisms, call them meta-organisms, they
could be corporations, they could be companies, civilizations, and so on.
All of these entities are even better at taking free energy, converting it into just dissipated
energy, thereby conforming to what they refer to as the will of the universe.
You know, maybe people are struggling to follow.
And I think if that is the case, then that is an indication
that you are following along, because it is very strange. And basically, for the EAC people, what matters to them is a larger civilization that converts the most energy possible into just unusable, dissipated energy. That's what dissipated energy means: you can't use it anymore. For them, the ultimate goal,
the emphasis is not so much maximizing moral value. It's about creating these bigger and
bigger civilizations, which means colonizing space, which means creating AGI, which is going
to help us to colonize space, new forms of technological life. With respect to human
beings, maybe merging with machines and so on.
All of this is conforming to the will of the universe by accelerating that process of turning
usable energy into dissipated energy, increasing the total amount of entropy in the universe.
And so what they're ultimately doing here, and it's very strange, is that they're saying that what is the case in the universe ought to be the case. And anybody who's taken a class on moral philosophy is going to know that that's problematic. You can't derive an ought from an is. Just because something is the case doesn't mean that it should be the case. But that is basically what they're doing. And maybe one thing to really
foreground here is that even though the sort of quote-unquote theoretical basis is a little bit
different from the long-termists, in practice, their vision about what the future ought to look
like is indistinguishable from the long-termist view. I mean, we should develop these advanced technologies, artificial intelligence, merge with machines, colonize space, and create, you know, an enormous future civilization. That is exactly what the long-termists want as well.
So like, if there's a map,
imagine like a five foot by four foot map,
and this map shows you where different ideologies
or different positions are located.
EAC and long-termism would be about an inch apart.
In contrast, like the AI ethics people, like Emily Bender and Timnit Gebru and so on,
they would be like three feet away.
So if you stand far enough from the map, EAC and long-termism are in the exact same location.
Yes, they disagree about the extent to which AGI is existentially risky.
And they disagree very slightly about what we're trying to maximize in the future. Is it energy
consumption? Or is it like value, like happiness or something like that? But other than that,
they agree about so much. I mean, they talk about the techno-capital singularity. So obviously,
that's drawing from this tradition of singularitarianism, the S in TESCREAL.
One of the intellectual leaders of EAC, Beff Jezos, was recently revealed to be Guillaume Verdon. He's founded a company called Extropic. And although the term extropy didn't originate with the extropians, they were the ones who popularized the term and, you know, provided a sort of more formal definition. So his company itself is named Extropic, which, you know, sort of gestures at the extropian movement.
He's mentioned that, you know, the EAC vision of the future contains a pinch of cosmism, to quote him. So there are just all sorts of connections between EAC and long-termism and the other sort of TESCREAL ideologies that substantiate this claim of mine that EAC really should just be seen as a variant of TESCREALism. There's just a sort of different emphasis on a few minor points, but otherwise it's exactly the same. I mean, there are even interviews with the EAC people in which they talk about how mitigating existential risk really is
extremely important. It really does matter. They just think that existential risk is not very
significant right now and that the long-termists and so on have this very overblown view of how dangerous our current technological predicament is.
So, yeah, I mean, the debate between, you know, EAC and the EAs, long-termists and so on, that should be understood as a family dispute. Family disputes can be vicious, you know, but it is a dispute within just a family of, you know, very similar worldviews,
very similar ideologies. Yeah. I think that's incredibly insightful and kind of gives us a
great look at, you know, these ideologies, how it all kind of plays out. And I just want to make it
like a bit more concrete again for people, right? Like, you know, obviously you think about Elon Musk when you think about this kind of
energy piece of it and, you know, how they're incentivized to use more energy and they believe
that society is better if we just use more energy.
You know, Elon Musk hates like the degrowth people and the suggestion that we might want
to control those things.
And maybe we can't just kind of infinitely expand on this one planet, right? And so obviously his
vision of how we address the climate crisis is one that's around electrifying everything,
creating a ton more energy so we can still have private jets and personal cars and all this kind
of stuff, right? Like nothing has to change. We actually keep using more and more energy.
It's just now it's all electrified and not using fossil fuels.
So the crisis is fixed, right? And if you look at someone like Jeff Bezos, when he presents this
idea of humanity going into space and living in these space colonies that are orbiting around
Earth, and he wants the population to expand to a trillion people, he basically sets up this
choice that we have to make where we
either stay on planet Earth and we accept stasis and rationing as a result, or we go down his
future and we go into space and then we have this future of dynamism and growth because we just keep
expanding and all this kind of stuff, right? We're using more energy. There's more and more people.
So just to give an example of how this plays out. And of course, you see the energy speak in Andreessen's manifesto as well,
where he's talking about this. We need to be producing and using more energy because that
is the way that we get a better society. And to close off our conversation, I just want to talk
about some of the other influences that seem to be coming up here, right? In Andreessen's manifesto,
which I think it's
fair to say is kind of one of the key documents of this kind of effective accelerationist techno-optimist movement at the moment, you know, at least of the various writings that are out there.
And, you know, some of the names stood out to me. He makes kind of direct reference to Nick Land, who is this kind of far-right, anti-egalitarian, anti-democracy
philosopher-thinker. I don't know how you'd want to describe him, you know, who explicitly kind
of supports racist and eugenic ideas. And, you know, Andreessen kind of presents him as like
one of kind of the key thinkers in this movement. And then another one that he cites directly, though he doesn't name him at that point; he includes the name later in this kind of list of people. That's Filippo Marinetti, who of course was an Italian Futurist, and Futurism was
a movement that was basically linked to fascism, that was part of Italian fascism. And so these
are some of the people that he is kind of calling attention to when he talks about what this vision
of the world is. What does that tell you? And what else
are you reading about the types of influences that Andreessen and these other folks are citing
when they talk about this? Yeah, so actually, I think this gets at another possible difference
between EAC and EA long-termism. The EA community, at least according to their own internal community-wide surveys, tends to lean
a bit left, whereas I think EAC tends to lean quite right. I mean, that being said, the long-termists
and really the TESCREAL movement itself has been very libertarian. I mean, that's sort of a key
feature of it. Again, they're sort of libertarian about everything except for these potentially really
dangerous technologies. Actually, just as a side note, there was an individual who works for
Eliezer Yudkowsky's organization, the Machine Intelligence Research Institute, or MIRI is the acronym,
who responded to somebody on Twitter asking, is there a term for EAC except with respect to advanced nanotechnology, AGI,
and so on? And this individual's response, his name is Rob Bensinger. You could find his tweet
if you search for it. His response was the term for that, for EAC, except with respect to AGI and
so on, that's called Doomer. So a lot of those Doomers were accelerationists from the start. And then they came to realize like, oh, actually some of these technologies
are super dangerous. So maybe we need to regulate them. Now they're called Doomers,
but they are accelerationists about everything except for AGI and maybe a few other technologies.
So all of that is to say that there still is like a pretty strong kind of right-leaning tendency within the TESCREAL community, but I think, you know, the EAC movement is probably even more unabashedly right-wing in many respects. And every now and then they reference, you know, the dangers of fascism and so on, but a lot of their views are, you know, kind of fascist, or they're aligning themselves with people who have kind of fascistic leanings.
Elon Musk is maybe another example here. Even the intellectual leader, along with Andreessen, of the EAC movement, Beff Jezos, Guillaume Verdon being his real name, has referenced Nick Land in some of his work, although apparently he wasn't familiar with Nick Land until he started writing, several years ago, the EAC manifesto, which is up on their Substack website. But I mean, he has explicitly said on Twitter that he's aligned with Nick Land.
And yeah, I mean, Nick Land is a far-right guy who's been instrumental in the emergence of
the so-called Dark Enlightenment. I mean, Nick Land wrote a book called The Dark Enlightenment.
And in fact, the Dark Enlightenment also has roots in the LessWrong community. So, you know, LessWrong was the Petri dish out of which the neo-reactionary movement emerged. And the neo-reactionary movement is very closely tied to, it overlaps significantly with, this notion of the Dark Enlightenment. So yeah, a lot of it is worrisome. I mean, these are people who, you know, I mentioned before, like there are two broad classes of dangers associated with TESCREALism. One concerns the successful realization of their utopia. What happens if they get their way and they create an AGI that gives us utopia,
utopia. What happens if they get their way and they create an AGI that gives us utopia,
everything that they want? What then? Well, I think that would be absolutely catastrophic for
most of humanity because their utopian vision is so deeply impoverished.
It's so white, male, Western, Baconian, and capitalistic and so on that I think, you know,
marginalized communities would be marginalized even more if not outright exterminated.
But then the second category is the pursuit of utopia.
And this subdivides into the doomers and the accelerationists.
Why are the doomers dangerous in their pursuit of utopia?
Well, because they think these technologies like AGI could destroy utopia if we don't get AGI right, and if we do get AGI right, it'll bring about utopia.
Therefore, extreme measures, including military strikes at the risk of triggering thermonuclear
war, all of this is warranted to prevent the AGI apocalypse.
And so that is why I think the doomers are super dangerous
and they are increasingly influential within major governing entities, United Nations, UK government,
US government, and so on. That scares me. The accelerationists on the other side,
they're dangerous because not only do they minimize the threat of some kind of existential
risk, but the current-day risks that are the primary focus of AI ethicists like Emily Bender and so on, those are just not even on their radar. They don't care about them.
Social justice issues are a non-issue for them. So accelerate, accelerate, accelerate. If people
get trampled during the march of progress towards these
bigger and bigger, more energy-intensive civilizations, so be it. Maybe that's sad,
but sorry. The stakes are so high and the payoff is so great that we can't put any brakes on
progress and so on. So this just sort of ties into the right-leaning tendency to neglect, minimize, disregard,
or denigrate social justice issues.
So people like, you know, Andreessen and the other sort of right-leaning or even far-right EAC
people, they just don't care about social justice issues, just don't care about the
harms that AI is already causing.
There's almost no mention in either the Doomer or the Accelerationist camp about the use of AI
in Israel and how it has played a part in what some would describe as a genocide that's ongoing,
unfolding in Gaza. This is small fish. I mean, these are just like minnows,
and we're talking about whales,
you know? So all of this is very worrisome. And the fact that it is so right-leaning,
you know, it's unsettling. Yeah, absolutely. And, you know, I'll put a link to the Israel
article so people know what you're talking about in the show notes. You know, I would also just
say, you can see those ideas also infecting policymakers as well, right? As we've been
talking so much about regulation, we saw in the UK, the conservative government there kind of made an explicit push
to be like a leader in AI regulation, you know, a kind of regulation that aligns with what these
kind of doomers or whatever you want to call them are saying in order to be focused on this far
future stuff rather than the reality. And then a couple of weeks ago, the deputy prime minister in
the UK was
saying, we're going to roll out an AI hit squad to reduce the number of people working in the
public sector and to get it to take over like immigration and these other kind of key areas,
which is like, this is very worrying. And this is exactly the type of thing that people are
warning about. But just to bring it back to, you know, effective accelerationism and these
ideologies, they're also very clear
on their enemies, right? The decels or the decelerationists. And of course, Andreessen
points to the communists and the Luddites, you know, very much, I think the people who appear
on this show, who push back against them. And I'm sure you consider yourself a part of that class as well. Yeah.
I mean, one thing to disambiguate there is that term decel. That is an umbrella term that can include all sorts of people, from Emily Bender and Timnit Gebru to Eliezer Yudkowsky.
Those are two radically different groups.
So AI safety, you mentioned that earlier. I mean, Yudkowsky is part of AI safety. A lot of individuals who work in AI safety are quote-unquote doomers. AI safety came directly out of the TESCREAL movement. So that is an appendage of this TESCREAL organism.
And again, the idea is that, okay, AGI, it's going to deliver us to utopia,
but it might also destroy us. So let's try to figure out a way to build a
safe AGI, hence AI safety. That contrasts very strongly with AI ethics, which is much more
concerned with non-speculative, actual real-world harms, especially those that disproportionately affect
marginalized communities. So there are two things to say about this. One is that Doomers are decels, and Andreessen and the other EACs like to position themselves as the enemies of the Doomers, this version of decel. But again, that is a family matter, you know, that is a dispute among family members, because their vision of the future is almost identical. The only big difference between them is their probability estimates of AGI killing everybody if it's built in the near future.
So that's the one point. And then the other individuals who would be classified as decels,
I do think that they are enemies of Andreessen. And these would be people like myself,
who don't have this bizarre kind of religious faith in the free market,
who don't think that the default outcome of developing these advanced technologies is that
everything is going to be wonderful. I mean, look around at the world. I mean, there's just
overwhelming evidence that technology, it does make some things better. It also makes things
a lot worse. And again, you know, the situation in Palestine is one example,
but there are a billion others that could be cited and adduced here. And so,
what does that imply? It just implies we need to be cautious and be careful and be prudent and
think ahead and get the government to implement regulations that protect vulnerable, marginalized peoples around
the world.
That's the best way to proceed.
So yeah, just to disambiguate that term decel, it could mean doomers, who are basically the same family as the EACs, or it could also mean these people in the AI ethics community who are just a world apart from the doomers and the EACs.
Yeah.
So consider what type of decel you want to become, you know, veer more toward the Luddite and communist side of things, thinking about how Andreessen frames it. I would also say, you know, think about what our own pushback to
effective accelerationism is. Shout out to Molly White, friend of the show, who added E-Ludd to
her username, effective Luddism, I guess, you know, so big fan of that. But yeah, Emil, always great to speak with you. Great to kind of get an update on how these kind of ideologies are progressing, get more of the history on this to understand exactly the brain worms that are infecting the tech class and the powerful people who make so many decisions that affect our lives. Thanks so much. Always great to speak with you. Thanks. If you want to support the work that goes into providing critical perspectives on the tech industry, you can join hundreds of other supporters by going to patreon.com slash tech won't save us and making a pledge of your own.
save us and making a pledge of your own. Thanks for listening and make sure to come back next week. Thank you.