Angry Planet - The Cult of Rationalism in Silicon Valley
Episode Date: March 25, 2025

A lot of the people designing America's technology and close to the center of American power believe some deeply weird shit. We already talked to journalist Gil Duran about the Nerd Reich, the rise of the destructive anti-democratic ideology. In this episode, we dive into another weird section of Silicon Valley: the cult of Rationalism. Max Read, the journalist behind the Read Max Substack, is here to help us through it. Rationalism is responsible for a lot more than you might think, and Read lays out how it's influenced the world we live in today and how it created the environment for a cult that's got a body count.

Defining rationalism: "Something between a movement, a community, and a self-help program."
Eliezer Yudkowsky and the dangers of AI
What the hell is AGI?
The Singleton Guide to Global Governance
The danger of thought experiments
As always, follow the money
Vulgar Bayesianism
What's a Zizian?
Sith Vegans
Anselm: Ontological Argument for God's Existence
SBF and Effective Altruism
READ MAX!
The Zizians and the Rationalist death cults
Pausing AI Developments Isn't Enough. We Need to Shut it All Down - Eliezer Yudkowsky's TIME Magazine piece
Explaining Roko's Basilisk, the Thought Experiment That Brought Elon Musk and Grimes Together
The Delirious, Violent, Impossible True Story of the Zizians
The Government Knows AGI is Coming | The Ezra Klein Show
The archived 'Is Trump Racist' rational post

Support this show: http://supporter.acast.com/warcollege
Transcript
Love this podcast.
Support this show through the Acast supporter feature.
It's up to you how much you give, and there's no regular commitment.
Just click the link in the show description to support now.
Hello, and welcome to Angry Planet.
I am Matthew Gault.
Jason, say hello.
Hello.
So I'm imagining today as a part two of the conversation that we started with Gil Duran
a couple weeks ago, which is about the broader ideologies that rule us now.
The first one, which you should go back and listen to if you haven't, dear listener, was about the lords of Silicon Valley,
their ideas about techno-feudalism, and how they are changing America.
This one is about a more widespread and, in some ways, more pernicious ideology.
Welcome to the cult of rationalism.
And here to help us understand all of this is journalist Max Read of the Read Max Substack and podcast.
Sir, thank you so much for joining us.
Thank you for having me.
So I've been circling around doing an episode about this for a while, and there have been
kind of two incidents in my life that really pushed me along. One was that I'm friends with
a woman who's a fan of the show and a friend of the show, who was kind of involved in these
rationalist communities for a long time, and has for a long time told me: this is a
cult, and I don't think people understand how dangerous and bizarre it is and how widespread it is
on the West Coast and in Silicon Valley circles.
And then, all of the people in my life that I know in real life are software engineers.
And I got into an argument with one of them recently where he started the conversation with,
hey, have you read this guy Peter Singer and looked into all of his philosophy?
And it's like, I really think if you read Peter Singer, it kind of explains the world and really
tells you how you should live your life.
And if you're not living how Peter Singer tells you to live, you're doing it wrong.
And the conversation got very heated.
So, Max, I know this is a big, bizarre topic with a lot of personalities, and a Manson-like cult
kind of rolled up into it now, too.
Let's start off with some of the very basics.
What is rationalism?
Well, how long have you got?
We got all the time in the world.
You know, I've sort of struggled to answer this question succinctly, partly because it's
a really diffuse... I think the right word is maybe movement. It's something between a movement,
a community, and a kind of self-help program. Really, the core idea behind it
is that you can, as just a normal human being, sort of perfect your skills of
reasoning to better approach problems, whether those are personal problems or political
problems or philosophical problems or technical problems.
And I, you know, I give it a sort of big tent definition because, as you were hinting at,
there's a wide range of beliefs that are contained kind of under the tent of rationalism.
And some of them are pretty normal, you know, it's not all that different from, say,
cognitive behavioral therapy.
There's a certain strain of rationalism that's just kind of doing programs that help you kind
of eliminate biases from your thinking.
And then there are strains that are way out there, to the point of justifying, from what we can tell, you know, cold-blooded murder, essentially.
And so it's a wide variety, but I also think these two strains sit, socially, much more closely together than people realize.
Historically speaking, rationalism, you know, as this community, movement, whatever you want to call it, emerges in the 2000s and is tied mostly to this guy, Eliezer Yudkowsky, who is a blogger, a really prolific blogger and writer, whose posts kind of attract a community of like-minded people who are interested in approaching the world in the same way that he does, which is to say sort of,
you know, reason first and, you know, eliminate your biases, like, you know, think clearly,
even if, even if your chain of thought takes you to a strange position, if the logic behind it is
sound, then you should take it seriously and believe it. And Eliezer's main obsession, I would say,
is his belief that a godlike superintelligence is coming sometime in the future, maybe much
sooner than we believe it to be, but that the existence of this superintelligence of, you know,
basically what I guess we now call AGI or ASI, a kind of, you know, planet-wide, like smarter than
humans, you know, computer brain, and that this AGI or ASI is potentially going to
kill all of humanity.
And because of the inevitability of this, or the near inevitability of it, the most important task that any human being can set themselves to is either preventing the rise of this AI or at least, as they say, aligning the AI so that it shares human values and won't commit, you know, human omnicide.
Eliezer writes a site called LessWrong, which is now sort of the main gathering place
for dyed-in-the-wool rationalists,
and he founds a nonprofit
called the Machine Intelligence Research
Institute, or MIRI.
MIRI, LessWrong, and then a third
nonprofit called CFAR, the Center
for Applied Rationality, are kind of
the three pillars of rationalism
as a community. If you move to
the Bay Area and are involved in the rationalist
scene, you'll spend a lot of time
with people who work at MIRI and
CFAR. Certainly, you know, if you're a
rationalist anywhere, you're probably going to spend a lot of time on
LessWrong, debating everything
from AI to how to give philanthropically to just sort of random, you know, relationship questions.
So because of Eliezer's sort of prominence among a particular kind of person... like, I suppose I should
say one thing about rationalism is that it has, I think, an attraction for people who
already think in particularly, what I guess you might say, logical ways. I'm trying to
avoid, like, medicalizing this. I think a lot of rationalists are self-described people
on the autism spectrum.
And I think that maybe some people overdo it, like ADHD or something;
they sort of over-diagnose their own autism to sort of fit in better.
And I don't know that what we're talking about is necessarily, you know, autism straightforwardly.
But if you think in a particular kind of way that is kind of casually associated with autism or with computer science or the tech world in general, this kind of, you know, believing yourself to have eliminated emotions and biases and whatever from the way you're thinking, you would find this sort of
program that Yudkowsky is putting forward kind of attractive. And because also he's taking
very seriously in the 2000s, which at the time was a sort of AI winter, because he's taking
seriously developments in artificial intelligence, he develops kind of a following among
AI researchers of the time, and in fact continues to be quite influential among people who
are doing genuinely extremely important work in AI up into what is right now, like a serious
hype cycle for the technology. I would throw out two points of
interest. One, he's published op-eds in Time magazine about AI. And two, the idea that
this superintelligence is coming is pretty widespread in the AI community. I would point to last
week: Ezra Klein in the New York Times did an interview with Biden's AI adviser, you know,
and the subject is, the God machine is coming. And the people who work on AI know the God
machine is coming and it's going to change everything. So, again,
kind of like when we were talking to Gil Duran, a lot of this stuff sounds pretty loony tunes on its face.
But it doesn't matter so much whether you believe it.
A lot of the people in power, the people that are setting the agenda for how the country works right now and building the technology that the country uses, believe it.
We are talking about kind of the way that the people's minds work who are in charge.
Yeah, exactly.
You know, another thing that Eliezer wrote, that Yudkowsky wrote, that I think is worth mentioning actually, I mean, it's a joke, kind of, is this long, multi-part fanfic called Harry Potter and the Methods of Rationality, in which his version of the Harry Potter character uses rationalist methods to understand the way magic works, basically.
And I bring this up both to, like, give you a sense of the kind of, frankly, of the kind of intellectual plane we're working on.
Like, this is an intentionally kind of dumbed-down version of rationalism, but it's fully in keeping with the kind of thought experiment, you know, philosophy that rationalism consists of.
But I also think this is an important one, because in a weird way,
and I don't mean to say this in a sort of, you know, Dateline or 20/20 way,
it's like a gateway drug for a lot of people. Where it's like, if you're 12, 13, you're a little weird.
You think about things a little differently.
You're a Harry Potter fan.
And you find this fanfic online that shows you a particular way of approaching the world
and thinking, and it's extremely attractive. And so, for somebody like me, for whom this is
not a particularly, you know, intuitive way of thinking about the world or approaching the world,
it can be hard to say, like, how does a guy like Yudkowsky, who has these sort of obvious
flights of fancy, how does he become such an important figure in these cutting edge technologies,
these, you know, potentially multi-billion-dollar software sectors? And I think the answer is,
you know, this sort of combination of: he's writing stuff that appeals to them in the right way, at the right time.
He's taking them seriously in a certain way. And he clearly has some level of kind of rhetorical
charisma. I don't want to over-emphasize his role either. You know, he is very important. He is sort of
the founder, to the extent there is a single founder. But outside of these sort of nonprofit
pillars and websites, it is kind of a decentralized
community. There's a lot of people who would call themselves rationalists who don't take
Yudkowsky particularly seriously, who aren't, you know, acolytes of his particular brand of rationalism.
But, you know, him and these institutions are kind of where it started and still, to a large
extent, where the action is and where the energy is.
Jason, you look terrified and confused.
Do you have a question?
Yeah, I mean, I guess that's fair.
Maybe we want to talk about this later, but I would love to understand this God machine in the sense of what it is going to do.
I mean, will it – okay.
That's kind of... you're asking, I think, the question that a lot of normal people ask when they're confronted by this thing, because the reasoning is kind of circular.
Whenever I encounter the arguments or hear people talk about it, it's somebody like Sam Altman, who
runs a company called OpenAI that provides a service called ChatGPT, which is a large language
model. It's these chatbots that people use, right? It's a very fancy chatbot. It's a word
calculator. They have said that the goal is to create a machine that will replicate the human
mind. And then, beyond that, it'll be on the level of, like, a god, and you'll ask it how
to solve the economy and it'll solve the economy.
It'll destroy
white collar labor
and be doing a bunch of things for you.
It'll be like the industrial revolution on steroids
and the tech billionaires will benefit.
And Max like jump in here,
but like, that's kind of the pitch, right?
Yeah.
Yeah, the sort of most
rigorous philosophical version of this concept
is something from the philosopher Nick Bostrom,
who's an Oxford philosopher who has, you know, academic credibility, but is associated
with the rationalist community. He calls it the singleton. And, you know, to him,
the idea is not merely that it's like a godlike computer brain, but that it's also, like, the single,
you know, manager and administrator of all planetary economics and politics. And as far-fetched
as it sounds, I think it's important to emphasize that, like, a real key kind of appeal for
this thing is that it is a thought experiment. It has this science-fictional,
let's-just-bullshit-for-a-little-while quality: think about, like,
what would it be like if such a thing happened? Except that at some point in the
thought experiment version, because your chain of logic is sound and because you've made a
commitment to sort of following it wherever it goes, you start to take it really seriously.
And I think it's worth like, you know, seeing too that what maybe began in some ways among
rationalists as a kind of thought experiment, you know, in 2003 or whatever, way before large
language models existed, when we were, you know, way outside of what we now know is possible
with the most cutting edge machine learning technologies, you really were just kind of bullshitting.
You were like, well, look, brains are computers. Eventually, we will be able to build a computer
brain. What might happen if we do that? Well, it might kill us all because we've programmed it poorly.
And there's a bunch of, I think, relatively famous thought experiments about this. Maybe one of the more
famous is the paperclip maximizer, which I think is originally a
Bostrom idea, where the idea is, if you program your all-powerful computer brain to make as many
paperclips as possible, well, it might just start turning all matter into paperclips,
including humans, including the planet, including the universe, because you've insufficiently
aligned it to, you know, our values. And a lot of people, Ted Chiang sort of most famously, have
pointed out that, in fact, the paperclip maximizer is like a scary story about
capitalism and, like, you know, pushing value to the absolute extreme. But then over the last 20 years, as we now are all living through, we've seen a set of enormous advances, especially in the last five, in the capabilities of machine learning tech, of AIs. And so all of a sudden, you know, depending on your gullibility, say, or your sort of belief in the ability of just pushing power behind LLMs to continue advancing their abilities, this idea of a kind
of godlike intelligence seems slightly less far-fetched. You know, we have these computers
that can sort of talk with the fluency of a human. Even to the extent that,
like, we've already become inured to that, that is a pretty astonishing development, given
where we were in the 2000s. And where it gets a little murky, like as a sort of final way
of thinking about it, is that, you know, Sam Altman, for example, has a strong direct financial
interest in the declaration of AGI or artificial general intelligence, which is one of the
sort of jargony words that is meant to mean something close to like human level intelligence,
you know.
And so, you know, he has this direct interest because of the contract that
OpenAI has with Microsoft. Microsoft owns a portion of OpenAI
right now,
and OpenAI can sever that deal when it, quote, achieves AGI, end quote.
So when Ezra Klein is doing this on his podcast or whatever, there is, in fact, also a reason for a bunch of people who have money invested in this world to play it up: you know, the indirect benefit of creating the hype or whatever, but also the direct benefit of being able to sever yourself from Microsoft.
All this stuff is sort of mixing around.
And so it becomes in the interest of somebody like Altman to take Yudkowsky seriously, up until the point where Yudkowsky's, like, distaste for superhuman intelligence becomes a sort of
liability. So, you know, in some ways this is a story about right place at
the right time, like a sort of, you know, the rationalists were conveniently there to provide
this incredible investment pitch. Like, we're so close to creating something that's going to
solve the economy, you know, why don't you start writing some checks? That has given them sort of
more prominence and more power than maybe they even would have expected 20 years ago. Oh, God. So
there's like three things that come into my head really quickly. One is that apparently some of the
people at least were reading Asimov, or should have read Asimov, when he wrote about the AC, as he called it,
you know, these computers that first ran states, then ran the world. And the
whole story is, some drunken programmer asks, can entropy be reversed? And thousands and
thousands of years later and thousands of computers later, they're still working on the problem.
And finally, the heat death of the universe. All right. You figure, well, that pretty much ends the story.
No, the last line of the story is, and let there be light.
Because the computers become God, and there you go. And then, I mean, so clearly someone's been reading this shit.
Or misreading it. Yeah.
You know, misreading it.
And then it also just strikes me that these people didn't have happy childhoods and have a very poor, almost Marxian understanding of reality, which is that if only human beings would behave the way the new computer wants us to, the AGI wants us to, we'd solve the economy.
You can't solve the fucking economy because the fucking economy is made up of seven billion assholes all pulling in the wrong direction.
So I'm just kind of curious how this computer is going to actually change all of humanity.
And maybe if we turn into paperclips, that'll take care of the problem.
Yeah, I mean, I think there's so much hand-waving.
I mean, a lot of this is sort of underpants-gnome type stuff: you know, step one, build an LLM; step two, question marks; step three, the singleton
arrives. And I think that's not just a sort of flaw.
That's built in, that's part of the core kind of attraction of rationalism to
some people, I think: you don't have to really
stress test any of your assumptions. It just has to sort of make logical sense. And some of them get
into this sort of, what I guess you would say is, like, vulgar Bayesianism,
this sort of statistical thing where they'll say, well, I think there's a 70%
chance that LLMs are going to be created in the next five years, you know, that are going to
reach human intelligence in the next five years. And there's like a 40% chance that they're going
to be evil and like a 10% chance that they're going to kill all of us, which gives the whole
thing a sort of fake kind of credibility. You're like, oh, we're not saying they're going to kill
us all, but we're assigning a 10% chance, which is actually quite a high chance, that we're all
going to die from machines or whatever. And, you know, to the extent I can see the
appeal of it, like there is a sort of, you know, dorm room appeal to just bullshitting with people
online through these kinds of questions. I mean, you know, we've all been on message boards where
there's some version of these conversations and the stakes are low. Nobody's taking them seriously.
It's a way to waste time or to pass time or, you know, test your observations or assumptions.
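To make the style of reasoning being described here concrete, below is a minimal sketch of that "vulgar Bayesian" arithmetic. The percentages are the hypothetical figures from the conversation above, treated, as the criticism goes, like precise independent quantities; none of them are anyone's actual estimates.

```python
# A sketch of the "vulgar Bayesian" move described above: multiply a chain of
# loosely defined subjective probabilities as if they were precise and independent.
# The numbers are the hypothetical figures from the conversation, not real estimates.

p_human_level_llm = 0.70   # "70% chance LLMs reach human intelligence in five years"
p_evil = 0.40              # "40% chance they're going to be evil"
p_kill_everyone = 0.10     # "10% chance they're going to kill all of us"

# Chaining the guesses yields a single, authoritative-sounding "P(doom)"...
p_doom = p_human_level_llm * p_evil * p_kill_everyone
print(f"P(doom) = {p_doom:.1%}")  # -> 2.8%

# ...but the output is only as precise as the inputs. Nudge each guess a little
# and the "rigorous" conclusion swings by orders of magnitude.
low = 0.30 * 0.10 * 0.01
high = 0.90 * 0.60 * 0.30
print(f"range: {low:.2%} to {high:.1%}")  # -> 0.03% to 16.2%
```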
What's interesting, one of the interesting things to me here is the way it's gotten really
mixed up with all the money flowing through Silicon Valley. The other thing is, you know,
as you are saying, like, there is, I think, a particular attraction of this kind of way of thinking and
this community and this body of thought to people who are maybe less sort of... I'm trying to think
of the right, sort of the right nice phrasing for this. Like, I don't want to say crazy people,
but people who don't have clear senses of self, let's say, people who are searching,
people who, you know, had difficult childhoods maybe, who have
trouble making friends, you know, in difficult social settings. And in that way, I see rationalism
not just as this kind of, you know, message board bullshit thing, but a kind of a new
religious movement in the long California tradition of Scientology and est, even of the kind of
more culty end of the militant 70s left-wing groups in the Bay Area, where you have people
flowing in, looking for, you know, friend groups, social groups, belief systems, and finding a
sort of friendly one. And this one, as we're saying, has its own kind of God, has
its own eschatology, has its own sort of millenarianism. And that has real consequences for
people's lives. I mean, I have to take that segue that you set up so perfectly. What's a Zizian?
Yeah. So, you know, one reason I'm on this podcast is because I wrote a long post recently,
about this group of rationalists. Well, I'm trying to think of how I should even start.
So you may remember the week of Trump's inauguration. In fact, on the day of Trump's
inauguration, there was a shootout at the Vermont-Canada border between a Customs and Border Patrol
officer and a couple of people in a car. And this was sort of like at the moment,
immediately I was like, oh my God, this is going to be a really terrible sort of right-wing
talking point for the next five years. As it turns out, the people who were shooting at the
CBP officer were not like, you know, evil migrants like looking to, you know, to fight back against the border police or anything.
It was two rationalists,
rationalists who people knew online, who had sort of fallen in with a kind of infamous person named Ziz.
The sort of capsule biography of Ziz is: she was born in Alaska, went to the University of Alaska, Fairbanks, and moved to the Bay Area in 2016.
At this point she had been fascinated with AI safety, AI alignment initiatives, and she sort of fell in with the rationalists in the Bay, hoping to contribute in one way or another to the sort of AI efforts that they were putting together.
And then she pretty quickly grew very disillusioned with the movement.
You know, it's been extremely well established at this point,
there's been some very good reporting in Bloomberg and elsewhere, that sexual harassment
and sexual assault were endemic,
particularly in the sort of institutional rationalist community.
Because of that, and some even wilder allegations,
Ziz joins with some friends and starts protesting the rationalists,
essentially not because she thinks that the AI stuff is bullshit,
but because she thinks that the leadership is not up to the job.
So she and her friends protest a CFAR reunion gathering in 2019,
and the people running the venue call the cops.
A SWAT team rolls up, removes Ziz and three friends, and arrests them.
They're giving out flyers about how Yudkowsky needs to get removed from the leadership of the organization.
This is sort of the first time Ziz is on the radar of the broader rationalist community.
And I think it freaks everybody out a little bit.
She and her friends are wearing V for Vendetta Guy Fawkes masks, and it seems a little spooky and a little scary.
Then over the next five years, you know, we're only sort of piecing this together now.
There's this kind of slow radicalization, which may not even really be the right word, but this slow process by which the Zizian kind of splinter group gets more hardcore, more isolated, and is now at this point implicated in a number of different killings.
So first there's an 80-year-old man who owned a plot of land where some of the Zizians had an encampment, were living in an RV essentially, and he tried to evict them because they were in arrears.
One of the Zizians stabbed him with a samurai sword.
He was armed, and he turned around and shot two of the Zizians, one of whom died; not the one with the sword, the one with the sword survived.
The parents of another Zizian were killed in Pennsylvania, and Ziz herself was detained there for a while for interfering with the investigation, though later released.
This CBP agent who was shot at the border, the two people involved in that shooting, one of whom died and one of whom is now in custody, also Zizians.
And then finally, the landlord that I was just talking about, the 80-year-old man, was killed the same week as the CBP shootout, in a stabbing in which the guy who was arrested apparently had applied for a marriage license with the person who survived the shooting in Vermont.
I mean, it's all very kind of, you know... the map here, it's not quite...
there's not a clear cult in the sense of, like, you know, Charles Manson.
Whatever else you can say about him, he had this plan with the killings.
He was going to start a race war.
You know, there was that kind of, everybody lived together.
The Zizians have a kind of more
diffuse, decentralized organization, as far as I can tell. It's not entirely clear why, for example,
the parents of this one member of the group were killed, like what the motivation there could have
been, why Curtis Lind, the landlord, was killed. But this is a group of people who have a set of
what I think is fair to say are extreme beliefs who are clearly not afraid to, you know,
commit grievous bodily harm in pursuit of whatever particular goals they're going for.
And they emerge directly, I mean, from the sort of heart of the rationalist movement,
essentially. And they're the sort of most prominent and tabloidy and bloodiest of
the kind of rationalist cults. But as I wrote about when I was first writing about these guys,
since the beginning of the pandemic, there's been a number of different sort of
accusations, or even just kind of people writing memoirs of their time in and around rationalist-
adjacent institutions that were in retrospect quite cult-like, had that kind of California
new religious movement, cultishness about them that ended up being, you know, psychologically
very damaging for a bunch of people.
I would argue that even though they lived across the country, they do live together digitally,
which is increasingly common as kind of third places are wiped out of American public life.
A lot of us, and I certainly do this, gather with our friends on the internet.
And the Zizians did that, right?
They have Discord servers.
Yeah.
Yeah.
And, you know, I mean, one of the really interesting things I found about writing this out and reporting it out is how much I could retrace the sort of steps and the development of Zizian thought, of Ziz herself, of the group, because of how much of it was taking place on
personal blogs, on Twitter. You know, I didn't even have to go into the Discords to, like, get sort of,
you know, the meat of this stuff, so to speak. And I think there's a real, you know, as we've been saying,
like, uh, I mean, I'm not a psychologist, I'm not a profiler, I don't want to get too far here,
but I don't think it's that hard to kind of see how people who are, you know, isolated from
childhood, say, like from the beginning, they're slightly different, they're isolated people
who are moving to new places or who are otherwise isolated in their own communities,
you know,
can find, and become dramatically engaged with, stuff that appeals to them for a set of
particular reasons.
You know,
so much of what happened with Ziz and the Zizians happened during the worst of the
pandemic.
And I don't know how much of that is really even about social isolation,
and how much is about just a broader kind of thing,
like the disappearance of a bunch of institutions that might have kind of caught this
behavior.
there's a lot of court dates that Ziz and her compatriots don't show up for and the cops just don't seem particularly concerned with picking them up or doing anything about it.
And I think there's a real like, you know, you can, you can maintain that kind of cohesion like through the internet, as you're saying, you know, through these message boards, through these blogs that allows you to, you know, keep the cult, so to speak going.
So what did they believe?
This is a good question, because I think there are sort of
three pillars to Zizian thought that are kind of important for our story.
The sort of core belief, the main, like, cause of Zizianism, so to
speak, is radical veganism, basically.
So Ziz thinks that any kind of, like, animal husbandry
is slavery, effectively, and that, you know, killing animals for food or for meat or for
whatever else is the moral equivalent of genocide. So she has... you know, let's be
frank, this is an extreme but not a totally out-there view. I
imagine a lot of sort of PETA members believe some version of this. Certainly there's, like, you know,
longstanding religious movements that believe, you know, in no killing of any kind
of creature at all.
Where I think the rubber of that kind of, you know,
vegan philosophy meets the road is Ziz's belief in what I guess you would call
a bicameral theory of mind.
It's not, like, Julian Jaynes's sort of famous, you know, bicameral mind thing,
though I imagine she must have read it.
But she believes that each hemisphere of your
brain, and she goes back and forth on whether this is truly, like, a biological hemisphere,
each one, or whether it's just kind of, everybody's got two minds, that each one
is a different person, is a different entity.
And that with the right kind of ritual, which is usually sleep deprivation or taking a lot of
hallucinogenic drugs, you can split apart these two brains and have them operate and act
independently of one another.
Classic cult stuff.
Yeah, I mean, very much. And it gets sort of deeper. So, like, what makes it particularly culty to me is that Ziz believes that one in 20 people is born with one of their brains having an intuitive understanding that animal life is worth exactly as much as human life.
Ziz calls these people single good.
One in 400 people is born with both hemispheres of their brain recognizing this.
Those people are double good.
It isn't going to shock you to learn that Ziz herself is double good.
And I don't think that anybody else in the group is double good.
So only Ziz has like full access to true moral perfection.
And only she can give you the kind of, you know, ritual needed to, like, disconnect the two parts of your brain so that you yourself can find out if you're double good, or figure out what, you know, your relationship is.
So one pillar is veganism.
One pillar is a sort of bicameral mind thing.
And then there's a third thing, which a lot of
people have said, and I actually have had some trouble finding a kind of directly applicable
version of this from Ziz herself. But I think you can see it in
the sort of history of Zizianism, the relatively short
history of Zizianism, which is a kind of metaphysical belief in never surrendering. That, you know,
somewhere between, like, salesmanship and, like, actual magic, if you just never,
ever surrender to somebody, they can't force you to do anything.
And, you know, there's examples, from what I can tell, of this actually working out quite
well for Ziz.
As I said, there's a couple of instances where there are bench warrants out for her arrest,
and she interacts with the police and then walks away and nobody arrests her and nobody
picks her up.
And it's not really clear why that happens.
And I think that if you, you know, if you have convinced yourself that you have, you know,
some kind of magical ability to get out of shit by just simply refusing to surrender,
you can just keep escalating that until you end up in a shootout with the cops.
You know, regardless of the kind of particular aspects of the surrender thing,
I think we can also see how, if you believe that, like, you know,
carnivorism is effectively a genocidal ideology, and you're a rationalist
who believes quite strongly in following the sort of endless chains of logic, you know, whatever else,
you can kind of talk yourself into such an extreme version of this that you will become a militant,
you become a terrorist, whatever else.
And this is this part of it, I think, is where I can see some of the kind of 70s hard left, 70s Bay Area hard left groups, like where there's some overlap and some historical resonance.
Right, because you've decided, almost scientifically, that you are correct,
and that whatever the emotional circumstances of the situation are, they don't matter;
you just need to explain the situation to the person you're arguing with.
And if you have to, use your force of will to get through the moment, right?
Yeah. Yeah.
Yeah. And it is funny, because you end up in the same place as, you know,
the SLA or whatever, like, gun-toting and eagerly getting into armed confrontations with the cops or whatever,
but you're not constructing it through
Lenin and Mao and Che Guevara or whatever. You're constructing, you know, your own, like,
sort of Cartesian thought experiments,
and yet here you are, effectively in the same position,
doing the same thing. God, imagine, like, you're in a shootout with the CBP and the last thing
that goes through your mind is Roko's Basilisk. How awful. Like, just how
How awful.
God.
Can we strip it back to basics?
Like, rationalism 101.
Mm-hmm.
What's the thought process there?
Like, you approach a moral situation or, like, a dilemma of some kind.
And do you have, like, an algorithm set up in your brain that you're kind of running it through?
Like, what does the thought process actually look like for someone?
I mean, I think it depends a lot on the sort of school of rationalism or like how you approach it.
I mean, I think the broad idea is that you do what you can to eliminate your own cognitive, social, political biases and to think as close to sort of a being of pure rationality as you can about a given issue.
So I'm trying to think of, like, a prominent... you know, if you go read, for example, Scott Alexander, who's a very widely read blogger
who writes a Substack called Astral Codex Ten, and for years wrote a blog called Slate Star Codex.
And he's a psychiatrist by trade, but is like a very prolific blogger in the rationalist,
in the rationalist tradition and in the rationalist community.
In some ways, he's probably a more prominent rationalist than Yudkowsky is, even.
And I think Alexander's a smart guy.
This is a pseudonym.
I can't remember what his real name is.
I think Alexander is a very smart guy.
If you read him on certain kinds of psychiatric drugs, he clearly is a domain expert;
he knows what he's talking about.
But if you read him on issues that you feel like you yourself are kind of
up to snuff on... the example that I always think of, to me the most
famous example, is back in 2015, when he tried to assess whether or not Donald Trump was racist.
And he did this by basically isolating like 25 different sort of tweets or events
or quotes that suggested that Trump might be racist.
And over the course of, like, God, 15, 20,000 words... I mean, a hallmark of the rationalist style is prolixity, because you really are, like, pursuing everything all the way down the alley, bulletproofing everything.
It's very hard to get through all this stuff.
And he comes to the conclusion, after all these words, that Donald Trump is not racist. Which is, like...
to me, the way I think about the world is, if I've got 25 examples of a guy maybe being racist, the fact that you have 25 examples of that pretty much tips me into: I don't really even need to look at them. Like, he's probably kind of racist.
And you had to stop at 25, probably. Yeah, exactly. Just to cut things off. And, you know, he'll do stuff like... there's the now infamous photo of Trump, like, eating a taco bowl, you know, salad or something, at Trump Tower, being like, happy Cinco de Mayo. And he's like, yeah,
this is not racist.
And, you know, this is the thing where you're sort of like,
I don't know how to explain to the rationalist mind, the sort of categorical mind, that, like, okay, this is kind of a funny tweet.
And it's not, like, racist racist, but this is a guy who thinks about race.
Race is very present in his mind in this way.
And I think that, when I think about the way rationalism works, there's very little tolerance for that kind of gray area, those complicated social
nuances, you know, to like lean on an old phrase, there's a lot of missing the forest for
the trees. Like they're just, you're just like so much counting the trees and looking very
closely at the trees and nobody ever really seems to be able to see the forest. And this is,
I'm being rude. Like there are plenty of rationalists who can, who can like, who are much
better at this than, say, Scott Alexander is. But this, I think, sort of answers your question
from the reverse: what you start with is, everything is understood
individually, in a vacuum.
You examine it.
You prod it.
You come up with a conclusive determination about it.
Or, you know, you're a little Bayesian like 70% this, 30% that, whatever.
And then you move on to the next one.
And connecting that stuff into, like, a system, into, like, systems thinking...
even forget, you know, the sweep of, like, literature or whatever.
Just trying to do a Norbert Wiener on it or whatever.
It just doesn't seem to work very well.
And it's funny because, you know, I think it's
sort of insulting to, like, Descartes to compare our new rationalists with the, you know, philosophical rationalists of the 17th century.
But I do think a lot of the criticisms of those rationalists basically apply in the same way.
A lot of this is like, you know, you imagine the kind of people who are attracted to rationalism, what we understand about it.
You know, you sort of might picture in your head, oh, like this is engineers and scientists and they're thinking like scientifically and empirically about the world.
and so, like, they must be, you know, correct about these things. But in fact the whole problem is that
very few of them are actually thinking empirically. Like, very few of them are thinking scientifically in the
sense of the scientific method. It really is happening in their heads, and they don't feel
the need to look out elsewhere and test their assumptions and think about what is going to happen,
because so much of it is happening just on a message board, this sort of endless flow
of argument and discussion. Well, it's interesting, because, you know, first-year philosophy...
and I had more than that, sadly.
Anyway, no, I mean, I studied philosophy.
And the first thing you learn is that you can make an argument for anything.
You can make an argument logically, because logic is such a flexible tool, that a ham sandwich is God.
And probably my favorite example is St. Anselm's argument for the existence of God.
Do you know it? It's fairly straightforward.
It's basically, he says that God is the greatest thing.
That's the definition of what God is.
And something that exists is ipso facto greater than something that does not exist.
Therefore, by the very definition of God, God exists.
Yes.
And I mean, I remember everyone sitting around and saying, you know, I'm
sitting here, I'm kind of hungry in seminar, it's about 9 p.m. You know, a pizza
greater than any pizza, by definition, must exist. Therefore, soon to be delivered to this
classroom is the best pizza you can possibly imagine. A God pizza, if you will. A God pizza, if you
will. I mean, devoting your life to this is pretty silly. Yeah. I mean, it's... but, like, without
defending that...
And I am being utterly
ridiculous, and I realize it.
But you can see the appeal.
Like, you can see how, especially if you're, like, socially unaware,
there's like, especially if you're like socially unaware,
if you otherwise don't have many friends, if people don't think the way you do,
like to me this is one of the many stories on the internet over the last 30 years
about people who otherwise would probably not have had very rich social lives,
suddenly finding themselves, finding like-minded communities.
And there are a lot of those where, you know... this is the story, to my mind, of, like, the Trump coalition: the Internet basically created a place where assholes could all congregate and coordinate and find an asshole candidate to elect to be president or whatever.
And at least the rationalists aren't that, so to speak.
But in some, in many ways, I think a lot of these people are in fact quite smart.
They have powerful brains that, in some way, would be better suited, rather than, like, sitting on LessWrong debating the existence of God or pizza,
to, like, going, you know, being parts of their communities and being parts of, like,
efforts to solve things. And I think there's one thing we haven't talked about yet,
um, effective altruism, which is maybe the most sort of mainstream version of rationalism. And I
think for a long time people were sort of excited. So effective altruism, the sort of short version is
that the idea is, if you're giving money away, you want to do it to save the most lives
possible, basically, or to think about, like, how efficient your donations are being.
So, like, there have been huge strides made in malaria prevention in Africa, because a lot of
people associated with rationalism realized, like, for such a small amount of money,
you can save so many lives in malaria prevention.
This is, like, a particularly cheap way to save lives.
And, you know, I think a lot of people were sort of like, okay, finally we found a way to, like,
make use of these guys' weird brains: they're going to figure out, like, what are the
most efficient ways that we can save lives around the world.
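As a rough illustration of the cost-effectiveness logic being described, here is a minimal sketch; the dollar figures and interventions are invented placeholders for the comparison, not real charity data.

```python
# A sketch of the cost-effectiveness comparison described above.
# The dollar figures are invented placeholders, not real charity data.

cost_per_life_saved = {
    "insecticide-treated bed nets": 5_000,             # hypothetical
    "donation to a hometown hospital wing": 500_000,   # hypothetical
}

budget = 1_000_000  # a hypothetical $1M to give away

for intervention, cost in cost_per_life_saved.items():
    print(f"{intervention}: roughly {budget / cost:.1f} lives per ${budget:,}")

# The effective-altruist move is essentially this division: direct the money
# wherever the lives-per-dollar (or QALYs-per-dollar) figure is highest.
```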
And then, of course, effective altruism turns into Sam Bankman-Fried and FTX and this, like,
huge fucking scam everywhere.
So, to me, there needs to
be more than just kind of pointing the rationalists at something and being like, figure this
out.
There has to be like a more, a broader and more diffuse, like, let's learn other ways to think
and let's spend time around people who don't think like this and, like, figure out how to
be slightly more normal in the world.
Well, and importantly, the scam of Sam Bankman-Fried was ethically correct because it was a way
to acquire the most amount of money to do the most amount of good.
Yeah, exactly.
Yeah, I mean, this is sort of Zizzy.
You know, you can see where Ziz comes up with, let's just shoot at the cops or whatever.
Like, as far as rationalists are concerned, the ends always justify
the means.
And this gets into... like, I know people shit on that Michael Lewis book about Sam Bankman-Fried.
Freed.
I actually liked it quite a bit.
And there's a great moment in it where he talks about being in,
what was the tropical country that they had set up there?
Wasn't it the Bahamas?
Was it the Bahamas?
Being in the Bahamas with them and watching them have one of these
effective altruism meetings where they're talking about how they're going to spend
the money and they're talking about malaria nets.
And as the conversation goes on longer,
they talk themselves out of worrying about
the immediate future around them, the people they can help right now.
Well, the existential threats, based on Bayesian priors, of, you know, an astronomical body
hitting the planet, or the Terminators coming, or, you know, just the planet dying,
are greater.
And we have to think about... you can't think about people now.
You have to think about people 500 years from now, a thousand years from now.
We've got to do long term.
And I thought that was one of the really effective parts of the book: for all the book's faults, Lewis really gets to the heart of, like, kind of how this weird little sociopath attempted to talk himself into having a moral compass and how it went so incredibly wrong.
Can you talk about long-termism a little bit?
Yeah.
So the other sort of mainstream philosopher besides Nick Bostrom who gets associated with rationalism, and especially with effective altruism, is this guy Will MacAskill, who, around the period just before SBF's fall and a little bit afterwards... sorry, MacAskill had recently published a book.
The book is about philanthropy, but long-termism is the kind of underlying philosophy.
And to me, the sort of dorm room rationalist rendition of it is basically, like: well, you think all human lives are valuable, right?
Well, what about lives that don't exist yet?
What about lives in the future?
What about lives 100,000 years in the future?
500,000 years in the future.
If you truly believe in the value of human life, then you should be doing everything you can right now to ensure that everybody, not just living people, but people far in the future, are having great lives as well.
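As a worked illustration of why that framing can swamp every present-day concern, here is a minimal sketch of the naive expected-value arithmetic that long-termism is often accused of; every number is a made-up placeholder, not a figure from MacAskill or anyone in this conversation.

```python
# A sketch of the naive long-termist expected-value arithmetic described above.
# Every number here is a made-up placeholder, not anyone's actual estimate.

present_people = 8e9        # people alive today
future_people = 1e15        # hypothetical people over the coming millennia

# Suppose some action is claimed to shave a sliver off extinction risk...
risk_reduction = 1e-6       # an assumed one-in-a-million improvement

expected_future_lives = future_people * risk_reduction
present_lives = 1e6         # versus saving a million people alive right now

print(f"expected future lives 'saved': {expected_future_lives:,.0f}")  # 1,000,000,000
print(expected_future_lives > present_lives)                           # True

# Once the far-future term dominates like this, the criticism goes, almost any
# means in the present can be made to "net out."
```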
And again, it has that kind of ring of, that's an interesting thought experiment and actually maybe is true.
And in fact, on a sort of basic level, it aligns with what I already believe, which is, you know, we have a duty to care for the earth for future generations.
And, you know, it's important to like, you know, raise kids right and, you know, make sure that, you know, society survives and all these things.
But as with all rationalist things, it kind of expands itself, both in, like, thinking about what we should do
about asteroids or whatever, but also in the classic SBF way that we've been talking about
where all means are justified by the ends.
And so you end up with a thing where you're like, well, because the most
important thing for me to do is to ensure that humans in five million years have happy long
lives, I'm going to steal a ton of money.
And as far as the sort of logic goes, certainly in SBF's head,
and I think for, you know, a fair number of rationalists in the same position who would
find themselves making the same sort of leaps, that all nets out. That checks out. You're like,
oh, yeah, yeah, okay, that does make sense. Not to change the subject, just a little bit, though:
What do we think are the chances he'll be out this time next year? I have it at 89%.
How about you guys? Oh, now you're playing with statistics. That's right. I'm using my rational mind
that bribing Donald Trump's not that hard.
I think he's like a just under the wire.
I mean, I don't know, maybe just under the wire before Trump leaves office.
Oh, you think he'll hold out that long?
Well, I mean, now I say that out loud and then I'm like, well, how is Trump leaving office exactly?
Do I think that's going to happen?
I mean, it's actually a funny document.
If you look at the... I can't remember exactly when SBF made it, but there's some document
circulating that was, like, SBF's plan if he ever gets arrested. And there's a bunch of items; like, the first item is, come out as a Republican, go on Tucker Carlson or whatever. And I think it's a good document to get into, like, a high-level rationalist brain, where it's sort of like, you are thinking about things in this very strategic, effectively sort of sociopathic way. And again, not all rationalists think like this. Not all rationalists are doing this. But that is the document of somebody who has tried to teach himself to think about things in this very particular,
you know, quote unquote rational way.
Yeah, like a lot of religious movements, not all of them, but like a lot of
religious movements, it's an attempt to clarify the chaos of life and kind of give you a guide
and help answer unanswerable questions that we're all going to be struggling with probably
forever, right?
Yeah. Yeah. And you know, I make the comparison to other California
woo movements advisedly, not just to sort of be mean about rationalism, but also to
connect it to, you know... this isn't necessarily a kind of tech-industry-alone
thing, it's not necessarily a brand new kind of thing; it has roots and historical
resonances in, you know, groups and events that we can see. I mean, we don't even have to talk
about new religious movements. We can go all the way back to the 19th century. Like, what is America
good at besides sort of producing these, like, crazy woo-woo cults?
And so, you know, I can be both sympathetic to people for finding that appealing
and sort of enlightening.
And if it gives them meaning, that seems great.
But very few of these cults, these, you know, groups and schools, these militias,
these radical movements, very few of them have ended well for their members.
And a lot of them have ended quite poorly, not just for members, for people who happen to be
nearby in one way or another.
And, you know, rationalism, again, big umbrella.
Zizian's just one small part.
I don't think that, like, Yudkowsky needs to get thrown in Supermax prison or whatever.
But I do think that, you know, to the extent that this sort of movement has been
given an enormous amount of sway and influence and power within what is the most influential
and powerful industry in the country right now, it seems, let's say like we're playing
with fire.
Where is Ziz right now?
In jail. I don't know which state, though. I think in California somewhere.
On which charge?
I don't actually know. I think she's been picked up as a person of interest... I know there were warrants out in both the Pennsylvania and the Vermont shootings, partly because she was identified.
So the woman whose parents were killed, who went by Plum online, Michelle... Plum owned property in Vermont and seems to be the person who bought the gun
for Teresa Youngblut and Ophelia, who were the two people who were killed, or, sorry,
the two people who were in the shootout in Vermont.
I don't know. You know, this has been much harder to track than I would have expected.
Do you think that mainstream news missed this,
or underreported it?
Not just the Zizians,
but kind of, like... I've been thinking about how
the conversation we had with Gil Duran
was very much about, like, Curtis Yarvin
and Peter Thiel, and how, with a lot of this stuff,
they weren't trying to hide what's going on.
A lot of this stuff happened out in the open.
Yeah,
I mean,
I think the thing about rationalism is that it
always did, and still in many ways does, seem kind of harmless
and silly.
Because it's just these, you know,
dorm-room thought experiments, the silliness.
And, you know, even as I sort of sit here and I'm like,
let's be a little careful with it.
Like, you know, it can be hard to sort of construct a specific like,
okay, here's why this stuff is quite bad,
except that we now have so many examples, from, like, total fraud in the mode of
FTX to, like, actual murders in the mode of the Zizians,
to suggest that there's something rotten at the core of this particular movement.
You know, and I think the other thing is, it's not just that it seemed harmless, but it had for a long time the kind of attractiveness that only a sort of contrarian account can have to, like, editors. You know, as a journalist you learn what editors, especially, like, magazine editors, are looking for, and it's almost always the sort of alternate theory of things.
And rationalism provides, like, a very good alternate theory of things. It's like its own Freakonomics or whatever. And so you
have this kind of, you know, situation where you have a seemingly kind of harmless group of
people who have these wacky beliefs that you can kind of squint at and see your way through.
Like, it makes for very good copy.
And you would feel weird writing like a polemic against them, you know, circa 2013 or whatever.
And there's that kind of strange thing where, like with so many things, you can be right early,
but nobody's going to listen to you
until it's kind of too late.
So I don't know.
I mean, I think that, like, there was, it was sort of baked in that this would not get
the kind of attention that it does until later in the cycle, so to speak.
Yeah, I guess it is just hard to imagine that a guy in a fedora who wrote a fanfic for Harry Potter
that is as long as the entirety of the Harry Potter series would one day be
writing about how
AI is
going to use 3D
printers to mass-manufacture
a disease that kills all humans
because the AI thinks that that's the rational
course of action
and that that essay would be published by Time magazine.
Yeah, totally. I mean, it's just very, I don't know.
You know, the other thing is like, because of the
way it's been wrapped up with AI
and because of the, especially over the last five years,
the sort of advances we've had in AI.
There's a certain amount of credibility that gets attached to these guys, even as crazy as they sound.
And you see this, this is Ezra Klein's argument.
You know, Kevin Roose and other Times columnists just wrote last week about how we have to take
so-called AGI seriously, because insiders are taking it seriously.
And that to me is like sort of one of the signal mistakes of this kind of tech journalism.
That, like, actually, it seems to me that the insiders are much less sort of well informed about
exactly what the long-term socio-political effects of the technologies they're
creating are going to be than they believe themselves to be. But the fact that there
have been these like huge unprecedented advances in AI technology over the last five years does
mean that they deserve to be taken seriously, you know, to some extent, in some ways, and it
allows you to, like, look past a certain amount of craziness just because, well, gosh, you know,
who would have thought that we'd be talking to our computers
like this in 2016?
I also feel that the legacy of half-built and half-tested nuclear reactors and wide-open swaths
of data centers rotting away is probably going to be more impactful than the AI technology.
But I, you know, I'm open to being wrong.
We'll see.
It's so long-termist of you.
I know.
Yeah, it does take a long time to get those reactors up and running.
So who knows what's going to happen in 20 years.
Jason, do you have anything, any other questions?
So, so many, but you know what?
I'm just going to think my way through it.
Yeah, you don't need me.
You can do it all by yourself.
I can do it all by myself.
Let me put a really awful tag on the end of this.
Jason, do you know what Roko's Basilisk is?
I don't.
Oh, Max.
Will you tell us?
This is... well, yeah, so this is the kind of thing... to me,
this is like a signal story that explains both some of the attraction of rationalism and also, like,
why it's such a stupid way to live your life and think. So at some point in the late
2000s, a frequent poster to LessWrong who went under the name Roko, R-O-K-O, came up with a thought
experiment that goes something like this. So if you assume that there is going to be, or if you assume
that it is possible to build some kind of far future superintelligence as we've been talking about,
a godlike AI machine, you might also assume that that
AI will have wanted to be created, and may in fact want to reward the people
who helped its creation, but would maybe also want to punish people who did not aid in its creation.
Not only people who actively sought to prevent its creation, but even the people who just
sat around and didn't do everything they could to hasten its creation.
Oh my God, it's Donald Trump.
Okay, sorry.
Except that.
So that AI would want to punish
you, and even if you're dead, what it might do is create a complete simulation of you, which,
morally speaking, and there's some complicated rationalist math that goes into this, but just
roll with me here, is equivalent to you.
And it would torture that simulation of you for all of eternity for not having helped it
become created.
So the problem with this thought experiment is that the minute you've thought of it,
you've basically instantiated it into being.
And you are now obligated to help this thing
come to be, because the only defense you could possibly have is that you never, ever thought
that such a thing could happen. It never even crossed your mind. So this thought experiment, which is like,
it's like a short story, right? It's like a, it's like a golden age sci-fi short story.
Totally. Which I sort of love about it, but it freaked out Yudkowsky and the rest
of the rationalists so much that they banned its discussion on the message board entirely.
For years, you were not allowed to say Roko's Basilisk, because it would infect your mind and then
the far future super intelligence would come torture a clone of you forever and ever and
ever.
I think they've learned that it looks so silly to other people that they've sort of
disavowed it entirely.
But it is, to me, like a perfect example of,
oh yeah, that's the kind of stuff that you guys believe.
Jason, legend goes that Elon Musk and Grimes met and bonded discussing Roko's Basilisk.
So in some ways it did hasten some very terrible things.
One of the things I love about that is what it reveals to me, because, like, all of these machines, all of these algorithms, all of these AIs carry the biases and the programming of the people that create them.
And so thinking about an AI superintelligence that creates a digital copy of you and then tortures it forever tells me way more about the people that are programming the AI than about the AI itself.
100%.
Sir, where can people find your work?
Thank you so much for coming on to Angry Planet.
Max, a real pleasure to meet you.
Thank you guys for having me.
I'm at maxread.substack.com.
The name of the Substack is Read Max.
Perfect.
And thank you so much once again.
That's all for this week.
Angry Planet listeners, as always.
Angry Planet is me,
Matthew Gault, Jason Fields, and Kevin Knodell.
If you like the show, Angry Planet Pod, sign up.
Get early access to commercial-free versions of the show
and all of the written content coming down the pipe.
We will be back again next week with another conversation about conflict on an angry planet.
It's about Russian mercenaries.
We've already recorded it.
We've got some other things coming along that I'm pretty excited about.
Stay safe.
Until then.
