Rev Left Radio - [BEST OF] Ghosts in the Machines: Artificial Intelligence, Consciousness, & Capitalism
Episode Date: April 11, 2025. Originally released Apr 10, 2024. In this episode, Alyson and Breht wrestle with the possible rise of artificial general intelligence and its implications. Together they discuss the Silicon Valley Techno-Cult and their dark religious search for immortality, their hubristic attempts to "build god" and escape death, their neoliberal subjectivities and how that manifests in their work, how AI might manifest under the capitalist mode of production, the horrors and contradictions of "capitalism without workers", deflationary critiques of AI, humans as transitional creatures, consciousness and its complexities, intelligence without consciousness (philosophical zombies), Nietzschean nihilism, real religion and what it offers, embracing the inevitability of your own death, and much more! ---------------------------------------------------- Support Rev Left and get access to bonus episodes: www.patreon.com/revleftradio Make a one-time donation to Rev Left at BuyMeACoffee.com/revleftradio Follow, Subscribe, & Learn more about Rev Left Radio HERE
Transcript
Hello everybody and welcome back to Red Menace.
So on today's episode, we are going to have a sort of free-flowing, organic conversation about something that has been in the news a lot lately and has been on a lot of people's minds lately.
and I think is, you know, gaining steam with regards to the discussions around it.
And unfortunately, I think, you know, these discussions we're going to be talking about related to artificial intelligence, you know, are often discussed in very narrow ways that, I mean, you know, in mainstream discourse, take capitalism for granted and, you know, take this idea that these things are sort of good in their own right or abstract away from the material conditions in which they manifest.
And I hope this conversation with Allison can be a sort of corrective to some of that, as well as going deeper on the philosophical side of some of the things that
artificial intelligence, artificial general intelligence, et cetera, implies about our possible
futures, about who we are as conscious beings, about consciousness itself. And so I think we can
have a really interesting, wide-ranging conversation on this that you don't often get in mainstream
discourse. So hopefully we can provide something of insight here. But I think the best way to
start this conversation about artificial intelligence is to let Allison kind of put the, put the main
pieces on the table, explain the basics of what the discourse is about, the basic, you know,
technological parameters that we're operating within, and then we'll let the discussion sort of
take a life of its own. Yeah. So to kind of start very big picture, I think everyone listening to
this probably has some idea of what artificial intelligence is at this point, because, you know,
kind of with a shocking level of speed, it has integrated itself into daily life fairly quickly.
Over the last few years, we've seen massive developments within artificial intelligence,
GPT-3 and now the GPT-4 models, and various different imaging models as well, such as DALL-E, have come up.
There is this massive expansion of technological investment into artificial intelligence that now
is really starting to kind of pay off in terms of output. We have concrete uses for it. You can use it to make images. You can use it to write documents for you. We are really at the
point where we're starting to see it kind of emerge as a social phenomenon, not just as something
that is being researched. And obviously, along with that comes a whole host of political and
ethical concerns that I think we're going to get into. Broadly, I want to set the stage a little
bit by just doing a bit of definitional work. So most of what is called artificial intelligence today
are LLMs, large language models, which are a particular type of artificial intelligence that can process a huge data set ahead of time. It can process multiple things in parallel. And the big kind of breakthrough with LLMs is that, in the way it approaches
the words and the data set, it can kind of understand the meaning of words and the context of
other words, which is a kind of big breakthrough. And that is really what's new with a lot of
these LLM models. So given a bunch of training data, an LLM will be able to respond to various
prompts in ways that are, you know, that makes sense within the context of the words it's
responding to. So if you ask it to give you a document, including specific things, it doesn't
just understand the words in your prompt. It understands them relationally to other words.
This is kind of the core idea. And then is able to construct a response where those relations
are baked into it. So again, if you've ever played around with like old chatbots, I don't know
if anyone else used to use AIM, America Online's Instant Messenger, back in the day,
but there were like really bad chatbots on there, right? And those could not handle a really
complicated request. They could give very basic responses to text input. They could mimic human speech,
but they couldn't really synthesize things together. And LLMs are actually capable of doing that.
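The "relational meaning" idea described here can be sketched in a toy way. Real LLMs learn dense neural embeddings through training, which is nothing like the raw co-occurrence counting below; this is only a minimal illustration of the underlying distributional principle, and the sentences and word choices are invented purely for the demo:

```python
import math
from collections import Counter

# Toy sketch of "understanding words relationally": represent each word by the
# words it co-occurs with, then compare those context vectors. Invented corpus.
sentences = [
    "the doctor examined the patient",
    "the nurse examined the patient",
    "the doctor treated the illness",
    "the nurse treated the illness",
    "the guitar played a melody",
]

def context_vector(word):
    """Count the words that appear in the same sentence as `word`."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        if word in tokens:
            counts.update(t for t in tokens if t != word)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (0.0 if either is empty)."""
    if not a or not b:
        return 0.0
    dot = sum(a[k] * b[k] for k in a)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))

print(round(cosine(context_vector("doctor"), context_vector("nurse")), 3))   # 1.0
print(round(cosine(context_vector("doctor"), context_vector("guitar")), 3))  # 0.447
```

Because "doctor" and "nurse" keep the same company in this tiny corpus, their context vectors end up nearly identical, while "guitar" lands far away; scaled up enormously, that is the intuition behind a model understanding words "relationally to other words."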
If you go to ChatGPT and you say, this is my work experience, can you put together a cover letter for me?
It'll be able to extrapolate beyond what you've given it and actually synthesize some information in between. Now, that synthesis isn't it actually knowing anything about you and your work information. It's a complex neural relationship between the words you've given it and other words that it knows are statistically likely to occur in relation to them. But it does a pretty good job of looking like intelligible speech, actually, if you have ever played with these tools. So these are the kind of large form of AI that we are seeing now breaking out. And the other term that we use when we talk about AI that I think is relevant is AGI, or artificial general intelligence.
Artificial general intelligence, there are actually like a lot of competing definitions
about what that might mean. But this is more about the kind of sci-fi notion of AI that you
think about, which is like a fully independent thinking machine that doesn't require
necessarily training data, but actually is able to think independently, take in information
on its own, of its own volition, and then synthesize new information based on that. LLMs are
decidedly not this yet. You have to feed in a specific set of training data, and it can only
work based on the training data that it has. So there is a gap between AGI and LLMs, but broadly
when we talk about like a synthetic consciousness, a consciousness that is purely computational,
what people are talking about is AGI. And so, you know, that's all just kind of big picture
definitional work. But I think that's important for us to understand where we're at now and what
sort of concerns we might have about the future of AI. So AI is already just with the LLM
models, I think, being very disruptive in society in ways that are really important to think
about, and people have concerns about it becoming this bigger AGI that might pose existential
threats to humanity. But as it already is, I think we can see a huge impact that is happening
now. AI is making huge inroads into the advertising field, which is actually kind of ruining the
internet in various ways. A huge amount of the content on the internet that is SEO, search engine optimized, is written by AIs just to pop up high on a Google results page and get ads in front of you. This
has already fundamentally disrupted the usefulness of search engines. More and more images
that we are seeing online are AI generated. We now have demos coming out from OpenAI for
video AI, some of which is very impressive and will absolutely create disruption socially.
Deepfake image and video editing has developed because of AI, with which people have made really horrific applications, creating pornography where they impose the faces of other people onto existing videos, stuff like that. The social disruption is already here with the existing
AI models that we have. So, yeah, again, there's these two dimensions. Existing models and
AGI. I think the existing models are what we most need to be worried about at the moment.
And they are already causing massive problems.
One other place where I'll say they are already causing disruptions worth kind of wrestling with: I also think people overhype what they're capable of.
So LLMs are really good at giving you the text that you expect them to give you back,
but that's not the same thing as intelligence, right?
But we're already seeing all of these tech startups saying,
oh, we're going to replace doctors with LLMs for poor people who can't afford doctors, right?
And that is based on the faulty idea that an LLM is actually reasoning or thinking and could do diagnostic work, but it can't.
It just knows statistical probability on what words it needs to return to you.
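The claim that an LLM "just knows statistical probability on what words it needs to return" can be made concrete with the simplest possible language model, a bigram counter. This is a drastic simplification (real models condition on long contexts with learned weights, not raw counts), and the training text is invented for illustration:

```python
from collections import Counter, defaultdict

# A minimal bigram "language model". This is nothing like a real transformer,
# but it makes the basic point literal: all it stores is how often each word
# follows each other word, and it answers with the statistically likeliest one.
training_text = (
    "the cat sat on the mat "
    "the cat chased a mouse "
    "the cat ran under the mat"
)

# count how often each word follows each other word
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def most_likely_next(word):
    """Return the word most likely to follow `word`, or None if never seen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat": it follows "the" 3 times vs. "mat" twice
```

The point of the toy: the model returns the statistically likeliest continuation without any notion of what a cat, or a diagnosis, actually is.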
So I think the other social disruption is going to be gutting access to human knowledge that
is necessary in specific fields, again, like medicine, and replacing it with LLMs that have
been overhyped about what they can do, and that will probably lead to further disruption.
So broadly, this is why I think these talks about AI matter.
It's already integrating into our society.
It's already integrating into our society under the control of some of the worst tech startup people you've ever heard of, unfortunately.
And it's already causing disruption based on that that we as, you know, not people who are in the CEO tech startup positions don't get any input in.
And that is where a lot of these concerns come from.
Yeah.
So one thing to say there is to think deeply about, you know, a previous era of techno-optimism. And that is the 90s and the buildup of, well, the Internet's online now, more people are getting on, the invention of the personal computer, and then the invention of social media.
As these things were coming online, there was a sort of very deep naivete on the part of its proponents or people who believed in what was happening,
that this is just going to be a wholly sort of good thing, that, you know, social media is going to connect people from around the world.
After the Arab Spring, for example, there was like a boost in this sort of techno-utopianism, like, oh, you know, this is going to topple authoritarian
governments and people are going to be able to express themselves in new ways. And, you know,
none of the negative sides that we're all starting to become very accustomed to were ever foreseen or predicted. And that's happened in every iteration of technological development.
And so we should also think incredibly deeply about the techno-utopians that are, you know,
pushing these new ideas on us, promising us all of these wonderful things. But we, as communists, as Marxists, understand we live in very specific material conditions, in a very, you know, ossified and stratified class society, and any new technological injection into that highly unequal society is not going to produce utopia but is, you know, going to be used to reinforce class rule, and is going to have a bunch of downstream consequences that certainly its advocates, and even many of its skeptics, aren't fully aware of and can't articulate yet. So we should always have a baseline skepticism
on that front. I'm no expert in this realm by any means, but I've been doing some research
and there's a current issue with these large language models, and even these artificially intelligent image generators, which is that there seems to be a sort of narrowing over time of syntactic diversity in the large language models, and of the diversity of images when it comes to artificially intelligent image generation. And it's this sort of techno-inbreeding problem
where AI itself starts using AI images as its raw material sources
and it sort of narrows the final product.
For example, I was looking at a collage of elephants, right?
There's a bunch of real pictures of elephants in the wild
that are pumped into the database for artificial intelligence.
And then this is a big collage, right?
And then there's a big collage of what comes out the other end.
And over time, the images of elephants
become more and more narrow, narrow, narrow, and the collage of artificially intelligently generated images of elephants looks like they're all more or less the same. And the actual
diversity exists in the real stock photos of elephants in the wild. Now, that's probably an
overcomable problem. But if we're jumping into things like, you know, like Allison was saying,
with using these large language models in the context of like medical advice specifically for
people who can't access health care, well, that's an issue we should definitely, you know, think very
deeply about before we unleash it in that way. One of the things that sort of shocked me, and, you know, maybe I'm easily impressed here, but with the latest version of ChatGPT, I was just messing around on it, and, you know, I'm somewhat of a public figure, but not really, nobody knows who the fuck I am, but I typed in, well, what would Breht O'Shea from Rev Left Radio say about modern American society? And it just fucking nailed it. You know, like everything I would say, it said it. And I thought that was kind of cool, and I was sort of telling my friends about it.
And that's kind of crazy, you know.
But there is this sense in which these large language models have, in a lot of ways,
passed the Turing test, which used to be this threshold for believability with regards to these technologies.
Like, you know, the Turing test was held up as this really important threshold.
And that seems to already have been blown out of the water and is not that impressive.
So I think that's kind of interesting.
But the thing I really want to focus on at this moment, and we'll get into.
class society, automation, basic income, contradictions, et cetera, here in a second. But you mentioned
something about the Silicon Valley techno cult, as I would call it. And we have to think very deeply
about, yeah, this class formation that is the sort of basis of this technological innovation and
progress. And we have to see these people as deeply embedded, not only in their own material
conditions, not only in their own echo chambers, but in a neoliberal subjectivity. You know,
the people that create these technologies, they are really high on their own supply.
They believe that they are these, you know, these Ayn Randian, almost Übermenschen, bringing about the dawn of the new gods, right?
They have this delusion that they're going to achieve immortality, this idea that the singularity is just around the corner.
We're going to download our consciousness into computers and live forever.
I mean, the first thing when anybody starts talking about that, the first thing I do is look at them as individuals.
How do they live their life?
Are they incredibly alienated from their own humanity, the humanity of other people, nature itself?
Do they have any sense of wisdom, any sort of moral depth that sets them apart from anybody else?
Or are they just elites in a broken system that produces fucked up elites, with fucked up, you know, economic incentives coming out of these monopolies, these tech monopolies
that they formed, and, you know, sort of delusionally grandiose about what they're out to achieve.
I mean, these people will literally talk about creating new gods.
Ray Kurzweil, who's this sort of techno-futurist, a director of engineering at Google for a long time, you know,
he was asked this question, does God exist?
And he said, not yet.
And the implication being that they're building, they're building God.
And who are these people?
Are these people worthy of building a God?
You know, even if that is possible, even if it's not just complete delusion?
Right.
And so the people and the incentive structures and the corporations that are behind these technologies
need to be, I think, the main locus of our skepticism,
because these are not particularly wise people.
These are not particularly deep people.
Look at Mark Zuckerberg, look at Elon Musk, and look at other people
that you see in this space, are these the sort of people that are really worthy of even,
even aspiring to the goals they claim to aspire to? Or are these, you know, in a sense,
you know, really sort of small, delusional people, scared of their own death, perhaps, very
short-sighted, blinded by their own delusions, creating technologies that they are promising
will just radically change all of society for the better. But coming from where they come from, we should be, and can be, very skeptical of that. So yeah, do you have
any thoughts on that part of it?
Yeah, so the
techno cult side of things is fascinating
because this actually goes back much further than like the current wave
of LLMs and OpenAI
and kind of where we're at at the moment.
So on the internet,
this idea of a sort of techno-utopian but also techno-dystopian development of artificial general intelligence has been around for a long time.
Futurism has predicted the possibility of this for quite some time, and you've had individual
people within tech who have basically claimed that the development of AGI is an inevitability,
right? That is actually kind of one of these central claims in this ideology is that eventually
technology will achieve a certain level of complexity, where AGI becomes inevitable.
And then the question that humanity is posed with is what kind of being is that AGI going to be, right? Which, if AGI is really inevitable, is a fair question, I think; that is something that we should wrestle with. Will it be a deeply misanthropic being? Will it be a benevolent being? Those questions matter,
but it is kind of taken for granted that it is going to develop. And within tech, you have various people who have basically appointed themselves: I'm going to be the person who funds the development of AGI to make sure that it's benevolent, right?
That is really this idea. So a lot of these discussions go back originally to an online forum called LessWrong, where this idea of effective altruism really began to develop. You get these tech people who become very focused on the idea of: we need to guide the development of AI to make sure that it is not evil. If any of you have ever heard of Roko's basilisk as a thought experiment (I won't get into the details of it), that goes back to LessWrong. It is specifically a thought experiment about the risk
of making AGI that is evil and is not benevolent. So these concerns go back a long way.
Basically, you have a number of people in tech who have said that, yeah, they're going to
put their money into this to make sure that AGI, when it develops, will not destroy humanity.
And I think, as you pointed out, Brett, there's a good question where we might ask,
does someone like Elon Musk, for example, have a good understanding of what benevolence means?
You know, it strikes me that the answer is no. But a number of the people who put themselves in this position, yeah, have politics that I think are increasingly blatantly fascist. Like I think it is not a stretch to say Elon Musk's politics are pretty straightforwardly fascist at this point, straightforwardly anti-Semitic as well. And, you know, those people claiming that they're developing AGI in a humane direction obviously have politics that push against that. But in addition to that, you know, the development of AGI under their model is, you know,
still under private control, right? It is still going to be backed by investors who can push
projects and directions they want. It is still going to be done largely through private
research with a profit incentive. And I think, given that, we might also ask, is a profit
incentive compatible with humanistic benevolence, right? And so I think individuals aside,
the very structure under which the idea for this research to occur is happening is at odds
with the stated goal of creating benevolent AGI eventually. So yeah, you do have this kind of techno-utopianism, though utopianism maybe oversells it, because they're driven by this weird
apocalypticism, right? This weird fear of an inevitable AGI that will destroy humanity and wanting
to respond and steer us against that. This is why you get people like Elon who always make
statements about AI being the greatest threat to humanity, right? There is a weird dystopianism
and apocalypticism to it, where they then kind of get to be the savior figure who saves us
from that by making sure the thing that emerges is a nice techno god for us or something like
that. So there's a very strange ideology at play there. There's a very religious ideology at play
there. And there's the development of this technology in ways that I think even if AGI is inevitable
and even if they claim to want to be making it benevolent, they're probably just making a capitalist
fascist AGI, right, which certainly doesn't seem any better to me. Yeah. Like sort of, yeah,
selling themselves as the saviors when they're incredibly limited people, you know, morally, spiritually,
existentially, imaginatively. But they really, they live in this sort of this feverish echo chamber
where they all sort of reinforce each other's ideas and their own superiority. And they really
do sort of buy their own bullshit. And that's incredibly dangerous. And that's one of the problems
of a complete lack of democracy: these technologies come online with absolutely no input from regular people, and the figures that are cultivated in this space, the people that rise to these levels, have been through the ideological washing machine, as it were. They're the elites, the elites that a certain sort of system has produced. And the premises of that system, far from being questioned, well, it produced them, so it must be right, it must be good. And so there's a complete lack of skepticism of their own ideology, there's a complete lack of humility, and these people are ego monsters. And just from
the perspective of people completely dominated by delusion and greed and their own egos, whatever they produce is going to be marred with those traits, those attributes of the people that create them, and not just the people individually, but the ideology that creates them, and also the corporate and material basis out of which they spring, which is something that, you know, gives me a lot of pause. Now, is this stuff inevitable? I think there is a certain sort of inevitability here. We can disagree about the timeline, we can disagree about whether it ever reaches sentience or consciousness, and we'll get into that in a second, but it does seem like there's a certain inevitability in that there are no restraints, no checks and balances stopping any of this. You know, one company makes a breakthrough, nobody's going to stop them. In fact, there's an arms race. It's just like with nuclear proliferation: no democratic oversight, nobody cast a vote for "let's create nuclear weapons." You know, these small elites, like in the Manhattan Project, in this almost conspiratorial, you know, scheming and plotting outside the mechanisms of democracy and mass input, created this horrific weapon that now haunts all of us. It could literally end the world. We, you know, dropped two on Japan, slaughtering hundreds of thousands of innocent
people, incinerating them, burning their shadows into the concrete with no input from regular
people. Now we just have to live with this shit. And, you know, to make
that exact same mistake in the 21st century with a brand new, possibly very dangerous thing,
I just think is like, how many times can we luck out? Like, you know, the mere fact that we
haven't blown up the world seven times over with nuclear weapons seems like an achievement
in its own right. And it's like, oh, let's keep creating stuff, never questioning the
premises, never looking inward, never asking who are the people and the ideologies creating it, just keep doing it. And it has a momentum of its own. And that's incredibly dangerous.
So that's something to think about. And yeah, lately especially, we've seen a real increase in investment. And when you see an increase of capital investment into a new technology, that exacerbates and accelerates the development of that technology. And so people are now talking about a hard takeoff for AI, or, you know, the time span being months to a couple of years. Some people have pointed to 2029 as the time when artificial general intelligence emerges. Now, these are all just guesses, of course.
But whether it happens tomorrow, in five years, in 10 years, or in 15 years, we should be thinking
deeply about the implications. We should be struggling for democratic input into these decisions
that are going to affect all of us, our children, our grandchildren, the entire future trajectory
of humanity. These are certainly things that we should have a say in, be educated on, etc. But
the very idea that peasants like Allison and I and you, dear listener, should have any input is, to the Silicon Valley elite, laughable. You know, we are not intellectually capable of understanding the heights that they have reached and the depths of thinking that they have done on these subjects. So there's a sort of disdain for the very idea of democracy. And of course, Elon Musk, I think, you know, personifies that in like a really singular way. But that is a widespread belief in that entire community, I think.
Yes, absolutely. Now, yeah, do you have anything to say to that, or do you want to move into, like, these questions of capitalism?
Well, so I'm interested in talking about the question of inevitability a little bit, whether or not it is actually inevitable, because that's where I think I'm a bit of a skeptic on some of these questions.
So, yeah, if we could go down that path real quick, I do have a couple thoughts on it.
Please.
Yeah, so I am at least highly skeptical of any claim that says we can put a concrete date on AGI, right? Definitely 2029 seems way too soon to me.
Maybe I will eat my words on that one and the world will end. But generally, I think that seems
way too soon. And I think part of the issue, right, is that I think LLMs, which again are the
existing dominant form of AI that most people are interacting with, it's the form of AI that
looks like thought, right? Because you can kind of converse with an LLM and that kind of grants that
idea of thought there a little bit. But I think LLMs are like very far from AGI in a way that people
don't want to wrestle with. And it's not even clear to me that if AGI develops, it will
develop out of LLMs. So I think for me, there's kind of some room for skepticism around this.
So, you know, there's this question I think of like, does our current AI, like, pose a risk
of developing generalized intelligence? And I think I lean towards the skeptical side for a few
reasons that I think have political implications. So one, I think there's like an epistemological issue with this question, which is that humans don't know what intelligence and consciousness are, right? We have a lot of different theories about it, and this is actually, interestingly, where science and philosophy begin to overlap, with theory of mind and neuroscience, but there's not anything near a conclusion in either of those two fields about what consciousness is, what makes consciousness arise, and the relationship between consciousness
and intelligence, i.e., can things be conscious but lack generalized intelligence? It would seem to be the case. So there's all of these issues. And then the other
issue that I think is like a huge epistemological constraint that I think matters is sort of
a familiarity bias in an anthropocentric frame. We as humans do our thinking through the type of consciousness that humans have, right? And so we have this sort of predisposition to assume that
consciousness will look something like our consciousness, which may be a total bias, right? It may be
very, very possible that if consciousness were to emerge in some sort of synthetic system,
it wouldn't look anything like human consciousness. So I think there's these epistemological issues
as well. And then I think broadly, one of the big issues that I have with this idea is that a lot of
the AGI kind of inevitability claims are based on what I would call emergent theories of consciousness.
So one approach to consciousness, and there are people within neuroscience and philosophy of mind who would support this, basically says that consciousness is just an emergent quality of sufficient complexity within the material reality of the brain, right?
So the brain reaches a certain level of electrical activity, and consciousness emerges as a byproduct of that.
We don't understand the causal relationship, but it's an emergent property.
And most AI development is basically built on this theory, right?
that if we get sufficiently complex hardware, eventually these things that we see now, which are
not consciousness, might become consciousness. And that could be totally wrong, right? Like, that
actually just might be the incorrect theory of consciousness. And if it turns out to be incorrect,
then all of the AI development that's happening right now is developing in the wrong direction,
right? It could actually be developing away from consciousness. So there's a very real possibility
that the type of development we're doing is wrong because our understanding of what consciousness is, is wrong. And then the last thing that, you know, gives me pause is I just
think there's a huge gap between what LLMs are and what consciousness might be and what knowledge
formulation might require. Because LLMs are able to take training data and synthesize from within
that training data, but they can't take anything external to that data and synthesize with that
information. And for me, maybe I'm just too impacted by Hume on this particular front. But I think if you
can't synthesize experience with information that you already have, you're not capable of making
inference, which is a necessary process for developing knowledge, right? And for especially developing
forward-facing knowledge. And so until that gap is bridged, which is not quite in place for
LLMs, I don't even really know if we can say that these models have anything close to knowledge
in the proper epistemological sense. So we may get AGI. We may be developing in the right direction.
those bridges may be crossed. We may close those gaps, but I think there are much bigger gaps than people want to give credit for. So I'm more skeptical of these imminent inevitability claims. Inevitability may still be the case. I just don't know how imminent it is because I think there are big philosophical questions here. And I think this is where you see that techno-utopianism wanting to bridge over these questions, right? And there's this capitalist ideology at play of, well, the science actually doesn't matter if I just throw billions at this. We'll obviously get it, right? And that's
might not actually be true. Right. Yeah, that's really interesting to get into the question of consciousness
because we know that there can be consciousness without intelligence or self-awareness. We have
animals all along the spectrum that I would assume are conscious on some level, but, you know,
are very limited in what they're able to conceptualize or conceive. They don't have necessarily
self-awareness. They're programmed by evolution to fill a niche and their sort of instincts are
geared in that direction. So there's consciousness without what we would consider intelligence. Can there be intelligence without consciousness? You know, can we create machines that
are more intelligent than us, whatever that means, right? And as a side note, intelligence is not
just the IQ, right? There's many forms of intelligence. There is social intelligence. There's
emotional intelligence. There's creative intelligence. There's mathematical intelligence. Maybe
some things, like mathematical intelligence, you know, we could create hyperintelligent machines that can do. For other stuff, maybe not so much. Certainly emotional and social intelligence is important and informs other forms of intelligence, but to have a hyperintelligence that is really good at mathematical and logical intelligence but has almost nothing in the realm of emotional and social intelligence, I think, creates a, um, a concern of its own, a possible pitfall of its own. And that's to say nothing of the imminence, right? Setting that question aside, because
I do agree with that basic point. It's probably, if I had to bet money, not as imminent as many of these people are portraying it to be. But there's this, you know, in philosophy of mind, there's this very famous article by Thomas Nagel called What Is It Like to Be a Bat? Yeah. And whenever we talk about consciousness and intelligence, I do think about that a lot. Because, you know, if we have this presupposition that we're creating at least a possibly conscious, you know, entity, yeah, as you were saying, Alyson, that consciousness could be incredibly different in ways that matter than anything that we would sort of want to call consciousness or think about when we say the word consciousness. So that's certainly interesting.
Now, sort of stepping back even further and looking at like a really speculative, philosophical,
you know, universalist perspective. You sometimes think about the evolution of consciousness.
And, you know, not to be like a Hegelian idealist about it, but, you know, there is this sense in which, whether it's primary or secondary, epiphenomenal, whatever the truth
is about consciousness, there's this sense that evolution has pushed forward an evolution
of consciousness. You start with very simple creatures, entities, single cell organisms, and through
the processes of evolution via natural selection, higher and higher capacities continue to form.
And certainly human consciousness is an intelligent, self-aware consciousness that we can
say, sort of, you know, more or less objectively, is a higher level of consciousness than a cockroach or a cat or a dog or whatever other animal you might want to point at. So insofar as that's true, then we ask the question, okay, the universe is very big. There are almost certainly other entities and sentient creatures out there in the cosmos. There is almost certainly life that evolution has been able to act upon, and there could be aliens out there that got, you know, a billion-year head start or whatever it may be, right, that have a much higher level of consciousness than we do. And then, at that point, it's sort of like trying to imagine the fourth or the fifth, sixth, seventh dimension. It reaches the ceiling of our conceptual ability to try to conceive of what it would mean for consciousness to be significantly more evolved than the human consciousness. Yeah. And that's a very interesting thing. But
I always am reminded of this Nietzsche quote that I always think is very interesting and always gives me pause. He makes this point many times throughout his work, but in Thus Spoke Zarathustra, he says,
Man is something that shall be overcome. Man is a rope, tied between beast and overman, a rope over an abyss. What is great in man is that he is a bridge and not an end. And there's this
idea, which, I mean, evolution says
this is true, you know, this
logical, philosophical speculation
says this is true that evolution
cultural, biological, technological,
has not stopped. You know, we're not the end point of some process. We are by definition transitional, and that means something comes beyond us, just as, you know, Neanderthals and earlier forms of hominids were sort of transitionary creatures to us. And so then we ask, you know, does that take the form of biological evolution? Well, that seems really slow and weird. You know, it seems like we're able to fly from the nest of mere biological evolution via natural selection. Now we're able, especially with the
dawning of, you know, gene editing, the deep understanding of biology, our cultural understanding, that we're able to sort of do cultural and technological evolution and, you know, kind of take the reins, if we wanted to in the coming years, of our own
evolution. And, you know, some people have speculated that the creation of artificial general
intelligence, artificial super intelligence, far down the line. Maybe we're a transitional
creature for that. And maybe that is a higher level of consciousness. But then there's also
something incredibly scary about the idea that I was alluding to earlier where it's intelligent
but not conscious. And then we hand over the reins and we're actually not advancing
consciousness. We're not evolving it. We're ending it. You know? Right. Right. And then you have this,
these philosophical zombies in the form of machines that might go on to colonize an entire galaxy over
millions of years, but that don't have a light on the inside. And that's sort of a deep cosmic
horror that at least gives me pause, although it is highly speculative, of course. Do you have any
thoughts on that? Yeah. So broadly, I think you brought up a couple things that I think are
interesting that will help frame how I see that last potential question. So the Nagel article is
interesting, right? This famous, what is it like to be a bat? The answer, to a certain degree, is you can't know, which I think is an important answer. Because when I was getting at that, like, anthropological bias, or anthropocentric bias, what Nagel suggests, and I think is the most important thing here, is that it's not even quite a bias. It's fundamentally built into trying to do a theory of mind, right? We can only do that within our own mind. There's a fundamental inability to access what another mind might be like and what it might be like phenomenologically to experience being the kind of being that has that mind. So that leads
to my skepticism in a lot of ways, right? I think even if we can recognize our anthropocentric bias,
Nagel suggests like a hard cognitive limit, right, on our ability to ever get beyond that bias
in a way that I think, you know, is important here. And I think when you bring in evolution,
this is where it's interesting to me, right? Because I think the AI utopianism can accidentally
take a very linear theory of evolution, right? Which says that humanity has like the highest
level of evolved consciousness and intelligence, which maybe, again, there's so much definitional
vagueness there that who knows. And so the next step is going to be a thing that we then
create that will replace us. But that is still so anthropocentric. That still puts us so thoroughly
in the driver's seat of evolution in a way that I think is ahistorical, given the grand scheme of the universe and its history and how short a time we have existed. And also the reality that
there are other creatures on Earth who have a good claim to a high level of potential generalized intelligence that looks radically different from humans.
So the example that I think is always the most fascinating to me is cephalopods, right?
So increasingly as we do more research on cephalopods, we are finding that they have a level of intelligence that is actually very comparable to some of the most advanced animals, probably on par with, or perhaps beyond, cetaceans, which are whales and dolphins, which are generally known for their intelligence. And cephalopods seem to
have something that may represent consciousness. But here's where it gets really weird, is that the
underlying hardware of cephalopods is remarkably different from ours, right? So one of the
strange things about them is they essentially have a small brain for each of their legs,
which is a really fascinating thing. So they do have some sort of central nervous system,
but they also have these localized nervous systems that are able to make their individual body parts
operate somewhat independently from each other. But not like on pure instinct, there seems to be
something like thought involved in it. And so here's this really interesting thing where
here's this animal that has problem solving capabilities that, to the degree that we can measure
them, seem to compare to human children, and who has an underlying neurology that is very alien
compared to the human mind, and evolved in a completely different context. And so if we look at the cephalopod and we think of cephalopods as having the potential
for something like consciousness, I think what that shows is that evolution can evolve
consciousness that looks very different than human consciousness on hardware that looks
very different than human hardware, right?
And that perhaps might be a point in favor of the possibility of AGI in a synthetic
system running on synthetic hardware.
But it also points to the fact that this idea that humans are this linear evolution towards
consciousness is kind of a mistake, right? That evolution has occurred in other places in radically different forms. And so I often think
we kind of really, in the techno-utopianism, overplay our position as the intelligent species on the planet, as if ours is obviously what consciousness needs to look like, and are again developing attempts at
AGI based on those anthropocentric assumptions. That might be a mistake. Now, in a sense,
though, this raises the possibility that you got at, which is a scary possibility, which is that maybe
if AGI exists, it looks nothing like human consciousness, right? Maybe there's something that will get
created eventually, which is totally alien to how the human mind works, but is still a consciousness
or an intelligence, one without the other potentially, that can supersede us. That seems totally
possible to me, but we don't know how we would ever get there again, because of those hard
constraints in our own consciousness that Nagel imposes. How would we ever know how to build that
kind of system, right? So for me, there's both skepticism and, you know, possibility within these
ideas, but I think there is this just like underlying anthropocentrism to so much of the AI talk
and underlying linear kind of destiny-based evolutionary thinking that I think is overly
confident for a species, which has been around for an iota within the grand scheme of
the universe. Absolutely. Well said. And a product of underlying sort of political and social
ideology. Right. This very pragmatic, liberal capitalist sort of approach to the world, linear
progress, etc. It is a sort of cult of progress, if you will, that sort of gives rise to those
delusions. But I really love your point about other forms of intelligence, other forms of
sort of hardware, and that human beings and our form of intelligence and consciousness is a
branch of the evolutionary tree, not the peak of the evolutionary mountain. And thinking along those lines is very interesting.
And to add to your point, I often think, and this sort of goes back to What Is It Like to Be a Bat, and Hume, right, about how sensory apparatuses dictate how we orient ourselves and understand the world, and probably shape our consciousness.
So one of the things that Nagel brings up in that essay is, like, echolocation. We are such visually oriented creatures that it's incredibly difficult for us to try to put ourselves, just sensorily, into a world of echolocation and then to draw conclusions about what that would mean
about our own self-awareness, how we'd understand the world. If a species with echolocation or
different sensory apparatuses got to the point of developing philosophy and science,
would it go in radically different directions or have that radically different style? That's
interesting. Humans, we have five senses. Some of us have a sixth sense where we can see Bruce Willis, but most of us just have five senses. You know, the sixth sense people talk about, proprioception, like where your body is in space, whatever. Five, six, whatever. I often think, like, aliens across the universe, like, evolution shaped them in such a way that not only might they have a different nervous system, different hardware with relation to, yeah, like, the brain or whatever that might look like, but what would it have been like with 12 sensory apparatuses, you know? How would they conceive the world? How would their consciousness evolve? How would they think? Even just thinking about their world, their basic premises they take for granted.
And so that is, if highly speculative, at least humbling in our sort of understanding of intelligence, etc., which is precisely what is lacking with these people, these techno-utopians. And, you know, we've been calling it a techno-cult because it does have deeply religious undertones. It's messianic. It is like we're bringing on the, you know, like
Christians would talk about Jesus coming back, like we are giving rise to new gods. And the ultimate form of not being humbled is this idea that you can live forever, you know. Like, we don't even know what consciousness is, but we're going to figure out how to download it into fucking computers? Are you insane? And then these people, you know, Ray Kurzweil is one of them, actively trying to keep their bodies alive so they can get to the point where we have the singularity, where their consciousness can get downloaded.
Now, this is, again, highly speculative philosophical rambling, if you will.
But another thing I think about is immortality.
Like, you know, we don't know what the point of life is.
We have this sort of secular, atheistic, scientific view that is sort of the mainstream.
And a lot of these people would more or less adhere to certain versions of that.
But that's just our current iteration of how we think the world is.
And yeah, a lot of people deeply believe it to just be unquestionably true.
But go back to medieval Europe and ask people whether Jesus and God exist, and there wouldn't even be a question. There are no debates, you know, no debates over whether God exists or not.
You talk like that.
You're going to get burnt at the stake.
Like everybody in the community, oh, we know for a fact.
And you go back to any sort of religious belief, animism, Greek and Roman gods, anything.
These things were fully believed, just as the subject of modernity, you know, maybe not postmodernity, but of modernity, fully believes in science as the truth-finding mechanism. And so first of all, that's incredibly limited. You should be skeptical of that because
we've only been around for a short period of time. We've always taken what we assume to be true as
totally true and those things have been overturned with enough time. Everything we find out about
the universe turns over some other premise that we had previously about the nature of the
universe or biology or whatever it may be. So there's a complete lack of humility there. So with all
of that in mind: even if you could be immortal, which I do not believe you can, especially not on any timeline that I can fucking see. What if there are other things that happen when
you die? What if you were meant to go through this as a process and be liberated to something
else? There could be a million things. The fact that the universe exists at all is so fucking
insane that almost anything could technically be true. And we don't fucking have any way of knowing
what is true, what is beyond the cosmos, what came before it, what might come after it, even how
the universe is going to end. And we're talking about downloading our consciousness into
computers forever. I mean, that just seems like incredibly scary and evil. And there's this
idea of like being trapped in a digital universe of our own shitty simulation and never being
liberated by death. Like, do you know how cursed immortality would be? How, how fucking horrifying
it would be to never be able to
stop this shit. I will take
an atheistic death
to immortality in this cosmos
any day of the motherfucking week.
And that's just something that these people just blast
right past. Yep, I'm going to download my consciousness
and live forever. Okay, psycho.
No, thank you. You know?
Yeah, no, and that's the thing is even if
death is we close our eyes
and that's it, right? And it is
just the complete obliteration of the self.
Yeah, I think that's fine.
Right? I'm going to go ahead and throw that out there. One of the things, you know, this has been a really fascinating thing in my personal life.
My grandparents are in like their late 80s now. And a really fascinating thing talking to them is,
and I find this very beautiful, actually. They are so unafraid of dying. Both of them kind of know it's probably around the corner, right? Like, it is going to come for them. And they are just not
bothered by it. In this way that I find has this profound dignity to it, honestly. There's like
something beautifully human about it in both of them. Where even,
one of them is like, I just, it happens. It happens. Honestly, I'm tired. Right. Yes. And I think, like,
I don't know. There's something so beautiful in that ability to say, like, I've lived my life, right? I've
had my life and it doesn't have to go on forever. And it's okay. And I don't have to be afraid,
even if there's nothing after this, right? And I just think, wow, that is, what a thing to rob
humanity of, right? What a dignity to steal from people if you're going to upload us into the cloud
forever, right? And I just think, I think the hubris of it, compared to the humility of a human being
able to face our own finiteness, just drives me insane on a certain level, honestly. Like,
I just can't understand that level of hubris. I can't understand that disdain for life because
death is a part of life, right? And it really is, you know, in their own way, I think they see themselves as very Nietzschean, but they really do have their own kind of life denial and hatred of
life in the same way that Christianity does in the Nietzschean framework. They have their own goal
for escape. They have their own idea that the engagement with the present is ultimately irrelevant
except for the investment and the escape into the afterlife. It's just the afterlife is some
technological afterlife, right? And I just, there's such a sickening disdain and nihilism
at the core of it, honestly, that I just, you know, it really invokes a sort of revulsion
in me. Then again, I look at just simple people like my grandparents, able to just face the reality that this is probably coming to the end. And I'll choose that every day over this fucking weird nihilistic life hatred. Absolutely. And you're absolutely right
to put your finger on the nihilism of it, the disdain for life, the inseparability of life
and death and this just complete revolt against nature, trying to literally replace nature with
technology. And the hubris that that requires to even think in that direction. And the world that they want to build, I'll revolt against that world all day. And the important thing to remember is Nietzsche wasn't even Nietzschean, right? He was this incredibly, like, fragile, sort of, um, bitter incel in some ways, tended to for his entire life, especially the second half of his life, um, by women. It was women literally keeping him alive, ultimately his sister, but other women as well, tending to him throughout his entire life. And of course his philosophy of the Übermensch and self-control and the transcendence of all values, and this powerful being who's not restrained by the herd, is a sort of psychological projection from his inner weakness and his inner inability to be that. And so, you know, we got to psychologize Nietzsche before we take on his ideas, and these people don't fucking do that. And of course, the Nietzschean elite, you know, the herd and then the elite, I mean, a lot of people, especially in Silicon Valley, sort of think of themselves as this new aristocracy or this Nietzschean Overman, this Übermensch.
The elite have almost never matched this romanticized view that these people have of them.
Like, you know, like, oh, the old aristocracy, they had, they had this idea of like tenderness
toward the common people and they ruled with dignity and this sort of Marcus Aurelius idea
of this elite that were truly deserving and were compassionate and wise in their rule.
Complete and utter bullshit.
It's never been the case.
Sure, there might have been in the, in the full spectrum of emperors and kings and aristocracies,
a couple people who were a little bit more thoughtful than the others.
But for the most part, they're just regular fucking people in some ways less morally, less morally developed.
I said on a recent Patreon episode that, kind of like what you were saying with your grandparents, just talking to regular working class people, like, you know, people in their 40s and 50s, there's sometimes more virtue, more moral clarity, more wisdom, in regular common people who actually had to live a real life, than in these elites who are sort of floating in this, you know, ethereal upper realm of class society where they're completely cocooned within their echo chambers and complete comfort and opulence.
They have drivers and a servant class who serve them.
You know, these are not, that's not the conditions in which wisdom and moral depth can even grow.
And so, you know, this nostalgic idea of a true elite, this Nietzschean elite, is complete bullshit.
And of course, these people buy into it because it's a nice story for your ego to tell.
Oh, you know, I'm not just a fucking neoliberal subject who, through complete luck and chance happened to end up in this Silicon Valley fucking job, creating an app that does the same thing as 100 other apps.
I'm actually a Nietzschean Übermensch who has transcended good and evil. You know, shut the fuck up.
But to your point about old people accepting death, so important.
When I was an undergrad, I took a gerontology class. Yeah, gerontology, where there's sort of a focus on end-of-life care, et cetera.
And one of the projects was to go to a nursing home and interview, you know, old people about their life, et cetera.
So I remember I went out there by myself.
I went to a nursing home.
I was allowed in and, you know, some people volunteered to be interviewed.
And every single person, they were in their fucking nineties. And I had to ask them, are you scared to die? Not a single person hesitated to say,
absolutely not. A lot of them did refer to religious faith, but my grandparents right now,
who I'm incredibly close to, who raised me for huge chunks of my childhood, who I love dearly,
they're in their late 70s. They're atheists. They got in a car crash in their late 20s,
and they lost a daughter. She was five years old. My mom was in that crash as a 7-year-old,
almost died, but, you know, was saved barely, but her sister passed away. And, you know, all the trauma that came with it. This was in the 70s; there's no mental health care, your family doesn't know how to deal with trauma, like, you're just left to your own devices. And that really just changed their worldviews, you know, these simple rural Iowa folk who, you know, had tragedy upon tragedy and then sort of lost any belief in God. And they're approaching, yeah, their 80s, and, you know, my grandma has lung cancer and, you know, my grandpa has these issues, and they try to stay as healthy as they can and as vigorous as they can. But yeah,
I have these open discussions with them as atheist even. Are you scared to die? And there's not
an ounce of hesitation. And they both say the same thing. Absolutely. I'm ready. Like when my time
comes, I'm ready. And to be in that psychological space, you know, as a 35 year old, I sort of sometimes
will get into counting. I'm like, okay, 35 plus 35 is 70. So I've lived maybe half my life,
maybe a little under half my life, when I hit 40, okay, then I'm, am I really, like,
on my halfway point? And that's sort of like, you know, counting down the days. But like,
imagine your reasonable time span being two to five years. And like having to grapple with that.
But there's something about the aging process where that fear becomes less and less. And you
are tired and your bones do want to rest. And for a 25 or a 35 year old to think about
annihilation is unfathomable. What are you talking about? I can't even conceive of being okay with that idea. But then ask that same person when they're 75, 80 years old how they feel. Now, there are exceptions, and I actually have somebody, a family friend who I don't know personally, but I'm close with the person who's close with them, and they just passed away two days ago at 65 years old. And it reminds me of the Leo Tolstoy novella The Death of Ivan Ilyich. He was absolutely terrorized, um, at the point of death, to the point where they had to heavily sedate him just to prevent the panic attacks.
When he would start screaming, I don't want to die, and pulling out tubes and stuff, panicking and trying to get the fuck out of there.
And I think that is a product of never wrestling with that question throughout your life.
Like you need, in the wisdom traditions, in the mystic traditions, in Buddhism, in philosophy, again and again, across cultures,
there's this idea of you need to learn how to die before you die.
Wrestle with your own mortality. You know, Buddhist or Christian mystics would have, like, a skull on the corner of their desk to remind them every day that death is happening. There are forms of meditation in Southeast Asia where you literally sit around and meditate around a rotting corpse and watch the process of decay to become intimately familiar with the realities of death. And I think the Ivan Ilyiches of the world and the terrorized people at the end of life are precisely those people who don't know themselves, who have never wrestled with these questions, who've always tried to put them off. And I feel like this Silicon Valley techno-cult idea of immortality is precisely that. And far from being wise, that's anti-wisdom.
That's a hysteria. That's a trembling, fearing ego that doesn't want to be annihilated.
And of course, the ego doesn't. The more you identify with the ego, the stronger you make
the ego, the locus of your entire existence, the more terrorizing death is. And so, of course,
these people want to download their consciousness and live forever, they don't know themselves,
they don't know death, they haven't wrestled with this shit, and it fucking terrifies them.
And instead of just like accepting that and trying to deal with that, they want to run from it.
They want to create these fantasies of escape from death.
And that's only going to make their deaths much more terrorizing and panic-inducing than they need to be,
because they will all die.
Not a single one of them will download their consciousness into the cloud and live forever.
And the longer it takes them to come face to face with that truth,
the more they're going to try to shove this shit down our throats.
Right.
Yeah, and I think that's the important thing, is that I think even if like AGI, all this other shit is inevitable or whatever, the idea of downloading a consciousness, I think is impossible, right?
Like, there's not any fucking actual basis to think that's a thing that can be done.
Again, because we don't even know what consciousness is, so that is a huge problem.
But even if, you know, the emergentist kind of approach to consciousness is true, that it's just kind of this, you know, secondary result of electrical activity in the brain, that doesn't mean that it's data, right? That doesn't mean
it's the kind of thing that can be captured and stored on a drive somewhere. So it really is
a self-delusion, I think, ultimately at the end of the day as well. And it's a tragic one because
there's something meaningful to facing what it means to be human, right? Like, that is like, you know,
I often think in our lives, like, most of us don't live very, like, exciting or heroic lives.
We work, we do what we have to do. We try to take care of our families. But the one heroic thing everyone gets to do is wrestle with what
it means to be human in a sense, which is a big thing. And people have been doing it for the whole
history of our species, basically. And they just are robbing themselves of that. And it really
is a tragic thing. Yeah. After this recent family friend had this panic before death and eventually passed away, it generated conversation with me and my daughter and my wife and stuff. And I made it very clear, as I have made it very clear for years: I want to
die with my eyes wide open. I don't want to be sedated. I don't want to go in my sleep. I don't want
somebody to sneak up and put a bolt in the back of my head. I want to like, I mean, ideally, like,
give me fucking, I don't know, this is kind of crude, but give me fucking cancer or something where
I have a timeline where I can see death coming. I can fucking wrestle with it. I want to be as
cognizant and as coherent as I possibly can in that transition into death, whatever comes next,
even if it is nothing but darkness forever.
It's a nice sleep.
I could use some sleep.
But if I go to sleep and I wake up to a bright light or whatever the fuck happens,
I want to be as conscious of that as possible.
And I want to be the sort of person that can gently surrender.
That can truly accept that death, be conscious for it, not be terrorized by it,
and to give into it to fully let go.
And in Buddhist meditation in particular, that idea of letting go, that idea of
disidentification, that idea of going down to the sensations of the body and getting away from
conceptual abstractions is, I think, one of the best ways that you can prepare your mind, body,
and soul for death. And that's why I find that practice in particular to be so, so useful in
my life. But there's other ways to do it as well, other mystical traditions and even philosophical
traditions where you can, you know, kind of wrestle with that stuff and take it head on. And I think
it's really, really worth doing. And I would urge people to do it. I've been terrorized by death.
I've had panic attacks over the mere mention of it.
I've had prolonged months-long existential crises where I was compulsively day and day out from the moment I woke up to the moment I went to sleep, terrorized by death.
And it was precisely through that process of, as I always say, having my face shoved in the shit of my own mortality, that I was able to develop a deeper and wiser relationship to death and a more compassionate orientation to all human beings because of the universal experience of being aware of our own death and having to die.
and that is a connective tissue to every other, every other sentient being who is self-aware and knows
that they're going to die. That's what we have in common and that can bring us together. And so
you can't have life easy. Too much comfort, you know, is bad for you. You have to be uncomfortable.
You have to face the terrors of life if you want to be a courageous person that is leveled up by the
tragedies of life and not made small by the tragedies of life. And so that's something we should all
keep in mind regardless of this discussion, you know. Yeah.
Okay. Well, let's talk a little bit about capitalism, because some words come to mind: the forces of production, automation, the contradictions of social production and individual appropriation, which we've talked about many times, the idea of a basic income. We don't have to speculate about a hard takeoff or whether it's general intelligence. What we do know, what we've already achieved, is that artificial intelligence, in one way or another, even if it's much more limited than a lot of, you know, these people think, is still
already radically altering our society and is only going to continue to do that and there is a
real question of, it's not just, you know, very specific jobs at the bottom of the economic class ladder
that are going to get automated. There's, you know, like you were talking about, medical diagnosis,
lawyer work, um, you know, white collar jobs being completely automated very quickly, um, even
without the rise of artificial general intelligence. And that's already sort of putting some
pressure on economies and people's thoughts about the near future, but I do think that process
is going to continue. And there's already these deep contradictions within
capitalism, and this seems to exacerbate them. But there are multiple trajectories that we could
capitalism. And this seems to exasperate it. But there are multiple trajectories that we could
take. And from where we are right now, one of the more dystopian versions that this stuff can
take is, we don't try to wrestle with class society, the structures of class society.
We inject an already highly unequal economic order with these hyperproductive
technologies, and under capitalism, as I've said for a long time, new technologies are
not used to lessen the burden on people or secure basic necessities.
You end up just competing with these technologies for jobs.
And so there's a future in which automation, and the technology behind it, is controlled
by a few monopolies, and huge mass layoffs occur. There is no more quote-unquote middle class.
There might not even be a working class; there'd need to be a consuming class. Um, but, you know, the
stratification of society... right now there's monopolies just in the tech
sector, huge monopolies. In almost every sector of American industry and American economics you have
shared monopolies or outright monopolies, trusts, cartels of various sorts. Um, you know, that is going to
be a huge problem, because you can imagine a near world, in our lifetime, where automation comes
along, productivity soars through the roof because of the efficiency of these various technologies,
but the capital and the profit generated from these technologies continue to be
siphoned upward to an elite, and you have this sort of Blade Runner-esque possible future where
it's sort of like techno-feudalism again, where, you know, you have this small elite, this
aristocracy, this tech aristocracy, that owns all the fucking technology, that owns all the
automation software and hardware, who continue to be made incredibly rich.
I'm talking trillionaires, while there's a broad, vast underclass of people at the bottom of
society, depicted in Blade Runner as literally the ground level of society, right?
The rich people are up in the towers and the poor people are down in the crowded,
neon-filled streets.
And that's a very real, at least possibility.
Of course, there's going to be struggle against that.
And already one of the ideas is a basic income, but you could still fit a basic income very well into a highly stratified class society with elites and a sort of servant slash consumer class where we might not have to go to work and we get a basic income to be able to afford the basics of life, but that everything politically, economically, socially is still controlled.
by a really small elite of people. And so we think we're being liberated from jobs, but we're
just being liberated to continue to sort of get by, as we're getting by month to month,
when our little basic income check gets, not mailed to us, but downloaded into our bank account
or whatever the fuck. And just all the contradictions that's going to generate, including,
of course, war and empire and surveillance, and the ability for that sort of techno-fascism
to protect itself through domination and control
of even pissed-off, but ultimately powerless, populations.
So what are some of your thoughts,
especially with regards to the advancement of the forces of production,
class society, et cetera?
So this is where it gets interesting with AI.
So I'll talk about existing AI that we have right now.
So again, LLMs, image models, Stable Diffusion,
all of these kinds of approaches that we have right now.
So what is fascinating about them is that I think they don't do shit
to advance the means of production, essentially. So the existing ones certainly are creating instability
and they are taking jobs, but they are taking very specific kinds of jobs in post-industrial societies
in a way that I think is quite interesting. So I write code for a living. I'm a software engineer.
I probably am slightly at risk of my job being impacted by this. I've seen the code writing AI kind of stuff
that can be done. In fact, I incorporate AI when I write code daily. I use it to help me write
code. Currently, as it exists, I am still a necessary part of that process because I have to
look through the code generated by the AI and make sure it's correct. And I need to understand
what it's doing to put it in the context of other code that already exists in the system, right?
The AI can't do the entire thing. But it sure does a part of it. And it could potentially
increasingly do more and more of it. Advertising has seen a big impact from AI, because there's
a bunch of bullshit copywriting jobs, just like there's a bunch of bullshit coding jobs in our
economy, right? Where people write texts that only exist to exist in ads or tweets or all these
other things. Those are at risk of AI. Artists are also at risk with AI because more and more
projects might just use AI art. You know, I'm a big fan of Magic the Gathering the card game,
and the big fear everyone has is that they'll start using AI to create their art instead of
commissioning artists, right? That is like a very big fear. So there are currently disruptions
that are happening, but they're not happening within industry, right? They're happening within
these kind of post-industrial segments of society. But most tech at the end of the day is just
an extension of advertising or private advertising surveillance, right? So even the tech jobs
being disrupted are really a part of this broader consumer economy that is not really about
commodity production per se. So I do think we find ourselves in this interesting place where
currently, at least with the existing technology, production is not really being changed. It is again
post-industrial jobs that are being affected, which is particularly interesting. We may see AI disrupt
kind of tertiary service economies as well. You know, the point could come where you don't necessarily
need people in retail. You could have a machine with an AI who does the checkout process, right? Something
like that could get disrupted, but production itself isn't really being infringed upon
within AI. And I find that very interesting for a couple of reasons. One, it means that I think
it does very little to progress capitalism towards something that could transcend capitalism,
actually. It's really just being used to create the consumptive part of capitalism at a higher
rate, but not the actual productive or socialized parts of capitalism. And it also is why I'm
very skeptical about the idea that even putting this technology under, like, socialist control
would make it useful, because I don't think you need ads under socialism, right? In the same way. So with a lot of the
applications of this technology, I think, fundamentally, would it have a use case in a socialist
world? They're so tied into capitalism. Fascinatingly, actually, that makes me wonder whether
or not these technologies could ever have a productive role, even if put under democratic control
of the working class. So there's some weird stuff materially at play with those. So I don't know how
much they'll shape things, but there is the possibility that at some point this AI will get
integrated into actual production, right? Will get integrated into actually creating physical
commodities, at which point I think the much larger social disruption could occur, right?
That's when societies, beyond just Western post-industrialized societies, but more, you know,
global South societies where industry still exists and has been outsourced to, could see massive
disruptions. And that's where I think we would see something significant. We're not quite there yet.
We haven't actually bridged that gap. Again, most of these applications are so myopically
consumption-based. It is kind of tragic, actually, that we created this really incredible
technology. We're like, let's write ads with it. That's the primary thing we do. So, you know,
that gap may be bridged eventually. It's unclear to me, though, what that
timeframe would look like. But yeah, you know, AI obviously brings up this discussion of a universal
basic income as a solution, but I think you're right. A universal basic income wouldn't liberate us,
right? It would in fact just maintain these consumptive models. It would just be now we have
this base quota for what we can spend to consume rather than that being tied in some way to
employment. So there's no liberation in that. And there's certainly room for the world to move
in a much darker place, actually, if we do see this automation actually connect to commodity
production itself. And then for the very process of commodity production, not only to be
owned by the ruling class, but also to exclude the working class, because they're
no longer necessary in it. And that would actually be a fundamental disruption of capitalism,
right? Because the core contradiction that Marx and Engels are teasing out is the socialization
of labor versus private ownership. Well, now you have private ownership and private labor,
right? You've actually completely eroded that. So there would be a fundamental transformation
that occurs there, I think, that would necessitate some sort of re-evaluation of how we oppose
this new thing. But I don't think we're quite there yet, again, because I think it really all
has these like very post-industrial use cases that limit how much damage it can do outside of
imperial core countries. Yeah, that's very, very interesting. Well, first, the idea of capitalism
without workers is just, yeah, that seems like, I mean, how can there be a bourgeoisie without
a proletariat and, you know, what that means and the, yeah, the consumptive model of capitalism
carrying on beyond the productive, you know, worker-centered form of it. I mean, yeah, those are just
very fascinating things to deal with. But, you know, you talk about the, you know, restaurants,
retail, service economy, and post-industrial societies, that's a huge chunk of our society.
You know, if those things do get automated, there's already this process of sort of reshoring industrial
manufacturing after the pandemic and the fragility of, you know, supply lines and all this stuff
that's sort of naturally occurring for variables outside of just automation or the advancement
of artificial intelligence. You know, is there a future in which the service, like there's a re-industrialization?
you know, where service stuff, like, yeah, retail, and you go to, you know, fast food, and it's just all
automated, um, but yet you work at, like, kind of an older version of, like, a factory still, um, right,
producing things that have been reshored, that used to be, in the era of neoliberal globalization, you
know, spread around the Global South, and are now coming back. But I, I still, I don't know, maybe you have
a clear idea in your head about this, but I still feel like, you know, the world of production
and industrial manufacturing and stuff can still be deeply,
and in some ways already is being deeply, impacted by these technologies of automation,
etc.
I don't think there's necessarily this clean line between, you know,
some of these service-oriented post-industrial, you know, jobs and parts of the economy
and these industrial ones.
Can you maybe give me an example of like production being sort of separated from these
other forms of the economy that are under assault by automation?
Like, what exactly do you mean there?
Well, I guess in my mind, AI doesn't have much to offer to the already existing automation of literal physical production.
So, for example, car manufacturing is very automated at this point, right?
There is a high amount of machinery.
There's even robotic integration into it.
But that's not new, right?
A lot of that machinery developed just with the very invention of industrialization, right?
And so from the very beginning, capitalism has always had this automation.
of industry and this increasing incorporation of machinery. So I guess in my mind, a lot of what can be
automated about production is already automated, just on a pure machinery level. And it's not clear
to me what an LLM adds to that, right? That's kind of the distinction that I'm making. So it's not
that there's not the possibility of automation. It just seems to me that AI is not useful for that
particular type of automation or for increasing that. Given that the machinery is already there and
workers have basically been reduced to cogs within that existing machinery, maintaining,
taking products that are output by that machinery, maybe doing some level of packaging,
although even most of that is automated with machinery at this point.
I don't know what AI adds to that.
So I guess it's not that there isn't automation within literal physical production.
It is that I don't think that is particularly disrupted by these technologies we are seeing at the moment.
I see, I see.
Yeah, I would completely agree with that.
A lot of my high school friends are like union steam fitters,
union electricians, union insulators. And yeah, precisely what you're saying. Like those jobs
seem to be even in some ways more secure than like the low level service economy jobs or even
some white collar jobs that are more susceptible to even just large language models taking
them over: customer service, yeah, retail, or even, like, paralegal work, stuff like that.
So yeah, that's a good point. Okay, you've clarified that for me. But yes, that
says nothing about what could happen in the future.
Yes.
Large language models themselves certainly aren't the thing, but, you know,
developments can still occur where even the people involved in that stuff continue
to get whittled down further, which at any level is still going to create a lot of
issues that need to be wrestled with and dealt with.
All right.
Well, I think to go towards the end of this conversation, I kind of wanted to touch on this
idea that you and I have both sort of talked about over text, which is Neo-Luddism,
sort of a neo-Luddite movement, or at least this urge within us, in the face of precisely this form of technological progress, run by these types of fucking people, out of this section of society: a deep skepticism and a sort of, like, rejection of what they're trying to jam down our throats. Um, the most, um, you know, salient way that that's taken form for me recently is just, like, um, you know, a sort of continued disgruntled,
agitated relationship to just social media, the smartphone, being on the internet all the
fucking time, and how much of my life that takes away. And, you know, I'm trying to get
away from social media. I'm trying to get away from screen time. I'm trying to do more
productive things socially and just existentially in all those hours that so many of us now
whittle away in the empty sort of scrolling that we do. Yeah. The addictive natures,
the purposefully addictive natures of these platforms, the, you know, jacking into
our dopamine system, these little casinos in our pockets for dopamine hits, and, you know,
how it deteriorates our relationships, actually. I talk about how like liking and commenting on
your friend's life event has now taken the place of going over to your friend's house and
celebrating with them, right? And how that like, even in the world of hyperconnectivity, we're
lonelier than ever, we're more mentally ill than ever. We're more
isolated than ever. And these are precisely the things I was talking about at the beginning of this
conversation, wherein there's this utopian vision of what these technologies will do. And then the
actual ways they manifest in a hyperalienated, you know, capitalist stratified society. Right. And,
you know, I'm just, I'm sort of like more and more just like rejecting all of this. I hate virtual
reality. It repulses me at like a spiritual and visceral level. And, you know, I'm not going to be like,
I'm not against technology as such. But I am against
the way technology is advancing in these conditions, how it's oriented to, like, just steal
our fucking attention and our lives and our time on this planet, you know, doing the,
the silliest, most meaningless shit. And one of the ways that it really hits home for me is like,
look how much the average person today invests in these social media apps and scrolling and
just being a part of the whole sort of apparatus. And compare that to how little you get back
out of it. You know, maybe you've made some new connections cool. Those are often very shallow
connections. You know, somebody across the planet who shares your interest. That's cool. Maybe some
of those, I even used you as an example in my Patreon episode, some of those can develop into
real friendships. But most of them stay at the level of just completely shallow, meaningless
relationships that you don't actually need. Our social brains were sort of wired for, like, community,
face-to-face. Like, just so much of our cognitive energy that we give towards, like, reading people's
body language and seeing where we stand in the social hierarchies of our actual community,
and how that's just sort of being shallowized and obliterated with these new technologies.
And just really wanting to turn away from it all. I don't want your fucking Meta headset.
I don't want to fucking have you know a Facebook account and an Instagram account and a TikTok
account I don't want you to be mining my data and selling it for profit I don't
don't want any part of it. And I think a lot of us are starting to feel this way. And it kind of
falls broadly under this tent of Neo-Luddism. But, you know, that's sort of a crude term in its
own right. And people mean different things by it. But what are your thoughts? What's your
relationship to these technologies as somebody that's embedded in the tech world? Yeah. And what are you,
what are your feelings around this idea of like rejecting some of this shit or the very least,
like drawing lines that we're not willing to cross, you know? Yeah. So this is what's true.
So yeah, I work in tech. I write code for a living. So I'm part of the problem in this very concrete sense. And the worst part is, I love writing code, honestly. I've never had a job that I like more. I think it's fascinating. I love learning about technology. I truly enjoy the process of producing software. So, you know, I'm embedded in these very concrete ways. At the same time, the applications that software gets used for are heinous, in my opinion, in most
instances, and I just really find them horrifying. So for me, for a long time, I actually think I have been very opposed to anything that I saw as kind of a Luddite impulse. I've been very critical of kind of anarchist strains of anti-technology thought, which I think, often like the post-civilization or anti-civilization anarchisms, can kind of become fascist in their own right. You get these anarchists who are, like, citing Jünger in these really interesting ways, where I'm like, okay. I don't know.
I've always been critical of that, and I've tried to push back against that at the same time.
Yeah, I am feeling an affective repulsion to a lot of what I'm seeing.
I think something that drove this home for me the other day:
I don't know if you've seen that Casey Neistat video where he's walking around New York with the Apple Vision Pro.
Yes, I actually have.
And the cool, like, big moment that he has is like, oh my God, I can't believe I'm watching MrBeast while I'm waiting to get on a subway.
Kill me now, dude. Kill me now.
Yeah, and I'm just like, oh, I want to, I want to fucking vomit.
If this is, if this is the future, I'm not on board.
Right, exactly.
Like, there's just something, I think, increasingly that is striking me as horrific about it.
And I think, I try to walk a fine line.
I have, I think, always been a Marxist who's maybe a little more on the Althusserian,
"Marxism is not a normative ethical system"
kind of side of things. And at the same time, I feel a moral repulsion to some of these developments. And so there's tensions here that I'm trying to walk. I also think Marxism has a high opinion of technology generally, right? Marxism generally views technological advancement as a good thing. It's a question of how that technology is controlled. And so I think I try to be careful because of that. But increasingly we get to technology that I don't see how even put under proletarian
control could be a good thing, right? So again, LLMs, I can't imagine what you do with those
under socialism. They write ads, basically, right? Like, that is the main thing they do. They
replace real knowledge with the simulacra of knowledge, right? I can't imagine a progressive use
case for that. So I think I'm increasingly seeing technology develop where I struggle to kind
to see, even if we change the material conditions in which technology is produced and in which it
is implemented. I struggle to see how it advances humanity. And I think, you know, when I look at something
like the Apple Vision Pro, I see something that alienates the user. I see something that puts a wall
between them and the rest of reality. And that scares me and it concerns me. And I think there's
such a tightrope that we have to walk here because it's so easy to then turn to valorizing this romantic
notion of nature, right? And I think that becomes fascist really quickly if you're not careful
with it, actually. I think that becomes reactionary in these really simple ways. And I think it also becomes naive, because nature is not a beautiful, pretty thing. Nature is the world of Ebola, right? It is the world of suffering and plagues and violence and carnivory. Like, you know, it's easy to overcorrect in that other direction. So there is really a line that we have to walk, that I think we have to be careful walking. But there is technology where, yeah, I just increasingly am saying, I don't know, I think you put that under proletarian control and it's still a bad thing,
right? And I think that's the new thing that I'm finding myself experiencing, that I'm trying to figure out how to navigate. And it does kind of crop up as this, like, Luddite response of, like, oh, I want to see that thing smashed, right? Really going back to the history of what the Luddites were. Quite literally, like, I want to see that technology destroyed. And I think one thing that some of the anarchists have talked about with the Luddites, you know, there are some anarchists and Marxists even who have tried to reframe the Luddites as not a reaction against technology,
right, but as a reaction against the labor disruption of technology, right, as really, in fact, a labor
response. And I think that allows us to see this in a way that maybe gets beyond some of the more
potentially reactionary things. It's not that technological advancement is bad inherently.
It's that technological advancement, you know, has ramifications and we need to deal with that
on a case-by-case basis. And maybe sometimes that means saying no to certain advancements,
which I think is a statement that would make Marx turn in his grave,
potentially, but which, you know, is increasingly where I'm coming to. I do think there are
just some things that we are now able to develop that Marx could have never conceived
of, right? They're like kind of unthinkable to the frame in which he was writing that maybe
we just need to say no to. And that's kind of where I'm coming to. I don't have a good
systematized way of teasing all of that out yet, but it's an impulse I'm feeling much more
recently. Yeah, but yeah, my question with regards to the Marx idea is, like, are they
even, like, to your point, are they even advancements, you know? Or are they lateral shifts in certain
things that technology allows for, but they're not actually inherently progressive
or an inherent advancement over anything? And I think that speaks to your questions and
conundrums regarding, you know, even under proletarian control, what good are some of these things.
And I think that's the thing that I wanted to, um, assert, is like, with the drawing of the
lines for ourselves, it's like: just don't passively, you know, just be pushed
into this stuff. Like, you know, you have your Apple Watch and you have your phone, and now you
have your Google Glasses, and now you have your VR headset, and you're on every single social media app, and,
like, you know, there's a sort of passivity there that people are just lured into. And if this
stuff was good for us, if it was edifying, if the enrichment outweighed the cons,
I don't think we'd see the mental health crisis specifically focused on young people,
who are coming up with these things already on their hip by the time they get self-awareness
and before they even hit puberty, um, you know, where their entire worlds are now funneled into
these machines created by corporations for profit. And, um, I think we see the ramifications
of that precisely in the low levels of community, of connectivity, of optimism, et cetera. And so I think
those are alarm bells going off, that, like, hey, some of this stuff, it's not that we're against
technology outright, but we're against the way it's manifesting in these conditions, and how they
don't solve but actually exacerbate things like alienation, you know, and estrangement,
not necessarily from your own labor, but from your own life, from others.
One thing you said there, and I'm very kind of interested to kind of get down at this,
because this is actually mocked on the right, that, you know, there's this idea that, you know,
even with like physical fitness, that there's a fascist, there's a body fascism with just trying to
be healthy, you know, and that gets taken so far that, like, anybody doing anything or
making any part of their personality about being fit or something is like inherently fascist.
I'm skeptical of that. And in the same way, I'm kind of skeptical with this idea that you
gesture towards, which is like this return to nature being, you didn't say inherently
fascist, but I certainly agree it can take a fascist route. But I have this urge, this
simultaneous urge, where, like, turning away from some of this technology is my turning toward
nature, in the sense of me personally being in it.
So the time that I would spend on my couch scrolling,
I want to go fucking, you know, swim in a fucking cold lake or, you know,
hike around the woods by myself or just go camping and just be under the fucking stars
and to immerse myself in nature.
And I don't think there's not, I'm not putting words in your mouth.
You did not say this.
Right, right.
There's nothing fascist about that.
And so my question to you is, what form of return to nature is fascist?
And I'm kind of like, can you parse those thoughts out a little bit for me?
Yeah, so I have a couple of thoughts.
This is so funny.
There was this Twitter thread like six months ago,
maybe more than that now, about how hiking is fascist.
God damn it.
I think it was very stupid, but it got at something interesting.
Which is, so, okay, there's two layers here.
One is a historical layer, which is that
I think the notion of nature as a thing we might go immerse
ourselves in is very modern in these kind of complicated ways, right? So I love hiking. One of my
favorite things to do. I love going out and riding my bikes in the hills around here. I love
getting out of the city and being surrounded by trees or hills and scrub, a lot of that in L.A. being
chaparral. That is one of my favorite things to do. But the idea of going out and immersing myself
in that is actually a kind of idea that really would not have existed prior to industrialization
and modernity, right? That division between civilization and nature in many ways is a product of
the enclosure of the commons, right? In many ways, one would have existed within nature,
i.e. the spaces that would be demarcated as the commons, just as a mere fact of survival for many
people prior to enclosure happening. So there's this level of modernity that is inherently
baked into that idea. And what's more, you know, one of the types of hiking that I like doing is I like
hiking up mountains, and sometimes that involves some, like, technical aspects of it, a little
bit of winter mountaineering. That, you know, prior to industrialized society creating leisure time,
would have probably been seen as one of the stupidest things you could go do, right? Why would
you go do that? It's kind of a fundamentally insane thing to do. It's actually, in fact,
the kind of thing that develops during, again, the development of leisure time and this European
kind of romanticization of nature, where we have to go conquer the Matterhorn,
and we have to conquer the Alps, and then eventually we have to conquer the Himalayas, right?
So within this modern notion of how we interact with nature, which really is a product of capitalism,
there always is this kind of romantic conquest idea, too, that always somewhat worries me.
So mostly, I just think the distinction between nature and civilization carries within it a lot of
ideological values that we often don't think about in a way that is very complicated.
And then, you know, to tie it to the question of fascism, fascism, you know, is complicated in the sense that it is contradictory to itself.
So within fascism, you have massive futurist-based impulses and also traditionalist impulses, right?
So Italian fascism becomes much more marked by the futurist kind of side of things.
Futurism was a movement in Italy, in particular, with this focus on technological development and fascism setting production free, right?
really becomes this idea. German fascism has a lot more of the traditionalism emphasis,
right? And this return to a greater time with a huge focus on nature. In fact, something very
comparable to what we would think of as scouting became very central to German culture
leading up to the Nazi Party. And many of the early Nazi luminaries came from this kind of
scouting movement, actually, that developed in the Weimar Republic. So there is this risk of a return to
nature having this romanticism in it, that especially in the German context becomes this kind of
like anti-bourgeois, like, oh, we need to escape bourgeois domination and get back to something
prior to that, something before that, which is at odds with Marxism, right? Where we actually,
we don't escape bourgeois domination by going back. We escape it through full proletarianization,
right? I think you see this in Evola, too, right? Actually, this is, we got into this. This is how
Evola is able to say, I am anti-bourgeois, and I am a traditionalist, right? So the return to nature
can be bound up with all of those things. Now, does that mean we shouldn't go interact with nature?
No, I mean, I love hiking. I still obviously want to engage with it. I have had profound
experiences out in the forest, right, that I don't necessarily think I could have outside of it.
But I always want to be critical of slipping into that thought process and that ideological framework
too simply because I think there's a ton of baggage to it that we have to actually wrestle with.
That's totally fair.
And maybe the difference hinges on this idea of, like, a social or civilizational return to nature and turning backwards to a romanticized period of the past versus, like, an individual, spiritual or existential, especially in a hypermodern, hyperalienated techno-capitalist society, a sort of
escape into nature, not as a social program, not as a set of political demands, but as an
individual, spiritual and existential activity that is a rejection in some part of the hyper
technologized version of modern life. And, of course, when I go into nature, there is no,
there's no, and I'm pretty self-aware, there's no, there's no concept of conquest. It's like,
I want to be conquered by it. I want to melt into it, you know, like, but your point is very
valid of like this sort of modernity giving rise to this romanticization of nature and this need to
go back into it because we have been alienated from it. And so, you know, in the process of being
alienated from it, there comes the urge to go back to it. And, you know, it is like so we are
evolved as social beings, you know, like to be integrated into nature. And the further we get
away from it, the less holistic we individually feel, the more scattered I personally
feel and so yeah i think that that difference of not making it into a social civilizational program but
as like a existential or personal remedy to some of the pitfalls of today but the political
program for me might come and i'd love to get your ideas on this because we agree that we
there is no going back and i've said many times the only way out is through um which is going
forward is like um to overcome alienation ultimately in a sort of reintegration of civilization and
nature at a higher level, right? Overcoming this, this modernist dichotomy, this duality between,
you know, which is, which is part that that dichotomy is part of the psychology and ideology of
modernity, which is this duality, right? Man versus woman, civilization versus nature, reason
versus emotion, which of course are ultimately false dichotomies. There's dialectical
relationships there. And the further we alienate ourselves from nature by trying to conquer it,
you know, the more alienated we actually become. And so the idea is that a future civilization would not try to conquer and push out nature to, like, these hinterlands, where to go out into nature you have to leave the city, right? But perhaps in the future there is civilization and nature where the lines are re-blurred, and we return to nature not in a romanticized, fascist, let's-go-back way, but in a way where, having been through the lessons of hard civilization and this hard separation,
we realize it's false. And so we want to take the best of human civilization and social progress and marry it to the best of sort of, you know, a dialectical and communal relationship with the natural world, which is, of course, the basis for civilization in the first place. There is no, there is no separation in the final analysis. And maybe our civilization in the future can reflect that. And maybe that's how we escape this sort of fascistic turn of hyper romanticizing the past relationship we had with nature.
Yeah, no, and I do think the transcendence of that dichotomy is ultimately something that Marxism seeks to do, right?
You know, if we want to get very foundational with Marx and Hegel, you know, I think of the 1844 manuscripts, where actually I think Marx talks about nature a lot in these manuscripts, right?
And actually talks about like the fundamental aspect of the human condition being the objectification of nature, right?
Marx takes that as a starting point very much just pulling this out of Hegel, right? And I think there, yeah, you do kind of see even the material conditions that we talk about within Marxism, which then take the form of production, which is technological, are constrained by the natural world within which they exist, right? Nature's not fully transcendent. And I've hinted at this before. But what is climate change other than ultimately calling capitalism's bluff that nature and civilization are separate from each other, right? What is it other than the ultimate proof that?
that no, these two things have been in a dialectical relationship with each other all along,
and oops, they affect each other, right? And now we are seeing that. So I do think Marxism
ultimately, yeah, it does try to overcome that. Now, obviously, I think there are philosophers
who say Marxism doesn't overcome that because to objectify nature is still not to take nature
as a thing in and of itself, right? Nature gets instrumentalized still within Marxism.
So the whole debate you can have there. There are various reads of Marxism.
John Bellamy Foster, obviously, is an important thinker who tries to read Marxism in a more
ecologically informed direction. So, you know, a lot of ways that you can go with that. But ultimately,
yes, it ought to bridge that gap. And yeah, I think socialist society ought to bridge that gap,
too, if only by recognizing that, yeah, the distinction is artificial. The systems that we're talking
about exist both within civilization and nature as interrelated and interplaying with each other,
and that ignoring that has massive consequences.
It is kind of interesting, too, right?
Actually, that, like, that one Marx quote where he kind of gestures at what communism would look like is very pastoral, right?
Where you can be an artist and a fisherman and, you know, this whole list of things.
So the one vision we get does have this kind of very pastoral view of nature in it, which feels at odds with, like, the massive industrialization focus within Marx, right?
It is very weird for a man so in awe of socialized production to then imagine that, uh,
very, very different vision of what humanity will look like. I don't think, you know, I don't know
the clear answer, but I always just try to be ideologically careful with it, because I think it is
too easy to slip into some ideological mistakes. But ultimately, like, yeah, you don't have to
fall into it, even within, you know, going out and interacting with nature in very, like, intentional
ways. Yeah, conquest doesn't have to be what it's about. I like to stand on top of a mountain,
because I never feel smaller than when I stand on top of a mountain. For me,
it is the profound humbling of the experience that I find valuable. But that doesn't mean that
there hasn't been, you know, these ideologies at play. And also the last thing I will say is the
conquest version of it also was still conceptualized as escaping alienation, right? That's what's
kind of horrifying about it. Mountaineering in particular, I think, is this interesting history here
because I think mountaineers by and large are narcissists. But generally, like, if you hear a lot of
the early mountaineers talk about what they're doing. It is like I'm escaping the alienation
of society and I'm conquering this peak, right? So I don't think it's necessarily so easy
to say that those things are separate from each other. There's always been this weird colonial
conquest ideology also tied with kind of an anti-industrial spiritualism within that
ideology as well. Yeah. And I can certainly see how those ideologies are tempting to
people yeah i can certainly see why you could even build entire movements around them right and i i
do think in in like in the destabilizing era of modernity with hyper technological progress and
all of these things coming faster and faster it feels like that there is um especially you know
one of the things i've been fascinated with recently is like going on to like you know gen z youtube
and and and listening to them wrestle with existential issues of like what are our futures
what do we want out of life you know listening to gen zers talk about their social media addiction and
trying to kick it and what you see a lot of is in this new generation which you know is very
different than millennials is this sort of exactly what we're talking about this romanticization
of traditional things like there is a there are movements they're they're usually right wing
coded but but not always um to want to re uh re engage with religion to to to you know some of these
some of these people's highest goals in life is just to have a family. They want children.
They want to live this more simple life. And of course, it is like a sort of natural reaction to,
especially, you know, at least like for you and I, we were born. We're like 90s kids. So we had
this period of childhood before, you know, everything moved on to screens where we were physically
engaging with the physical world and we were hyper social. And that set us up
not to be immune to the lures of new technology,
but to have a sort of, you know,
bulwark against its full takeover of our minds and bodies and social selves.
But people coming up right behind us, you know, born after the late 90s or the 2000s,
they've sort of just been integrated with it since they were old enough to think and have memories.
And so I find it very interesting and also alarming in some ways that there is, there does seem to be this,
it's not traditionally conservative or even necessarily reactionary, although it certainly can tip into those shades, but it is this sort of re-romanticization of things that people have taken for granted forever, like just being able to have a fucking family.
Yeah, which seems impossible.
Or just loving religion again, you know, like I want to be a Christian. I want to go sit in church and have that stability and that anchor in my life.
you know i was i was thinking recently about how the the movements of the boomers on both the left
and right were very anti-social obligation on the on the left version of it um you know you had like
this sort of countercultural hippie 60s movement drop out you know do acid expand your mind
but you don't owe anybody anything and that's sort of a reaction to the conformity of the 40s and
50s leave it to beaver ass america right but how that actually grew up
was into the form of what we now see as neoliberalism.
That complete lack of social obligations on the social and cultural level
turned into the economic libertarianism that is the hallmark of neoliberalism.
And it is this sort of complete lack of social obligations.
Fuck you, go get, you know, I want to do whatever I want, hyper individualism in extreme.
And I see in the millennial generation, you know, and these are generalizations, of course.
But, you know, among many millennials, there's this sort
of urge to have a return to social obligation, to think about politics in terms of
collectivity, in terms of building institutions, taking care of people, housing, health care.
We don't want the disintegration of social bonds.
We don't want the disintegration of social obligation.
We don't all want to be hyper individualist striking poses into our fucking phone camera all
day.
We want something realer.
And I think that there's now a Gen Z sort of reaction that's coming in the form of a
romanticization of even traditional roles. And I find that incredibly interesting. I don't exactly
know what that bodes for our future. But as these cohorts with these sort of generational
ideologies age into power and the sort of boomer libertarians on the left and right
and center age out of power, I think we are going to see changes. And I think that's going to come
with a heavy dose of political reaction. But as well as on the counter, a heavy dose, which we're
already seeing of a collectivist socialist approach to these problems as well. Like, no, we're not
just all individuals that can do. We are embedded in society. And when we don't have social
obligations, when we don't have community, we do suffer. And we want to build institutions
and a society that takes care of everybody because we're all in it together. And that is sort
of a contrast to the previous generation's hyper individualism. And it's a sort of natural
reaction. Do you have any thoughts on that? Yeah, I think I've seen this trend that you're talking about, right? And it both concerns me and interests me in equal levels, I think. So I think broadly, what I do want to throw out there for the listener is I think the complicated thing is that Brett and I, I don't think either you or I are opposed to tradition in the broad sense, right? I think we're both very impacted by tradition. Go listen to our mysticism episode. Like we, there are traditions that,
we find massively valuable and have integrated into our lives. So I don't think that tradition
inherently is reactionary, right? But unproblematic embrace of tradition and romanticization of
tradition is a surefire path to reaction, in my opinion. Uncritical, even? Uncritical?
Yeah, yeah. And I think we have to like, we have to do it very carefully. So look, I'll be
honest. I'm at odds with Lenin here, right? Lenin's position was atheism and opposition to
kind of like the godbuilding project, right, within the early Soviet Union and those who wanted
to perhaps build kind of a secularized religion, right? I'll just go out there as a revisionist
on this one. I think Lenin was wrong. I think actually religion plays important roles and
tradition plays important roles in human society in a way that you can't just get rid of. And I think
what we are seeing with, yeah, a lot of these younger people returning to religion, returning to a wide
array of religions, too, is kind of a wild thing. The amount of young leftists I have seen who have
embraced Islam, like, the last few years has been kind of shocking to see. So there's this broad
return that is happening there. And I think that is indicative of the fact that, like, absent some
of these things in your life, you're in a difficult spot in a highly alienated society.
I recently watched an interview with an academic that I really appreciate Dr. Justin Sledge,
who runs a YouTube channel called Esoterica. It's about mysticism and comparative religion. I'm
a huge fan of his work. And he was being asked about his experience of religion, right? And he's
Jewish. He's part of the Reconstructionist movement. And he basically says, yeah, my view is religion
is a technology, right? It's a technology by which we can mark parts of our lives. It's a technology
which we can deal with death and grieving. It's a technology by which we can come together.
So what is the religious thing that we should embrace? It's that technology, right? But that
technology needs to be the useful parts and not the reactionary parts of it. And, you know, again,
I think Lenin and the traditional Leninist read would call this idea into question. They would say
you can't just treat it as an independent technology. It's ideological, right? So that's where
you would get that opposition. But I think the fact that religion has transcended in so many ways,
so many different material arrangements of society indicates there's something probably more
than pure ideology in it from my perspective. But I do want to acknowledge the tension with
in there. But to me, yeah, it does seem like there is something with religion that is a
technology that is worth holding on to, but critically, right? That's really what I want to get
back to. And so in the context that, like, Justin Sledge was talking about in that interview
and in the context of Reconstructionist Judaism, that meant, like, gutting parts of the teachings
and Reconstructionist Judaism, Mordecai Kaplan, who came up with it, took out the concept
of chosenness from all the prayers, right? Having seen that as chauvinist, he just straight
up made modifications to them. And I think that kind of thoughtful rebuilding of the technology
is probably the correct way to go about that. And so when I see young people embrace religion
just like without doing that rebuilding part of it, it does worry me a little bit in a lot of ways.
There is something within tradition and within religion that has value and meaning. And I would
call myself religious, right? Like cards on the table. I think there is value there. But you can't
do it unproblematically and you can't not wrestle with the tensions between it and Marxism,
I think. That's kind of what I want to stress. But there's a reason people are turning in that
direction. There really is a reason. And I think just insisting on utter atheism actually means
we'll lose those people, right? In a lot of ways. It actually means we won't have something to
offer that they clearly need and are searching for right now. So it's complicated. It's always going to be
my answer. But I think, you know, I hope that Brett and I in some ways can model this relationship
to tradition that is both critical and embracing at the same time. Because, yeah, that uncritical
embrace scares me in a lot of ways. Yeah. But I genuinely agree with everything you said there.
And there's that fascinating, yeah, that fascinating development where people on the left seem drawn to
Islam. There's a little bit of flirtation on the right with Islam because of its sort of
patriarchal and traditionalist aesthetics, especially to, like, a, you know, Westerner.
But what I see on the political right, and I follow these things pretty carefully, I see
this sort of embrace of Christianity writ large, but specifically Orthodox Christianity is
having its moment. And I think there's like, there's something interesting because you have like
young right wing people who want to embrace their traditions, but have been sort of disillusioned
with like their, you know, version of it. So like Protestantism in the United States or whatever.
And so going to the Orthodox Eastern Christianity seems simultaneously like a return to your roots of Christianity, but also something that is not, doesn't come with the cultural baggage of a religious tradition that you grew up in and sort of rebelled against and know so well that you can't see it as being cool or different or new or, you know, something like that.
But yeah, I think that there's something interesting there that we should keep our eye on.
And then this general idea, which, you know, I don't want to overstate it.
And some people don't like when I talk like this, but there does seem to be something like, for lack of a better word, a religious impulse within human beings.
And that impulse does not go away when religion goes away.
And part of the thing we're seeing with, as we're talking about, the Silicon Valley TechnoCult, and I'm sure you could identify a bunch of other strains of this, is a sort of religious impulse without a God, a religious impulse without a religion.
And that gets put into, I think, less helpful things because it's sort of obscured. It's sort of mystified. The people that are in this cult don't see themselves as religious. They don't see themselves as worshipping anything. They see themselves as logical, you know, creators and inventors, you know, pushing humanity forward. But at the same time, they're worshipping their ego. They're worshipping technology. They're hyper alienated and they're sort of turning that into its own set of virtues. And it's repulsive. And so I feel like insofar
as there is a religious impulse. I've said this before. It seems healthier to make that
incredibly conscious by taking that impulse and funneling it into the proper direction, religion
itself. Right. And when you don't do that, you begin to worship the self. You begin to worship
consumption. You begin to worship technology. You begin to worship your own alienation. And that seems
like a form of nihilism, like, you know, Nietzsche predicting this sort of various forms of
nihilism that come in the wake of the death of God. I think we're seeing that. And, you know,
back in my new atheist phase, I had this sort of delusion that, you know, the future was
going to be religionless, that everybody was going to come to terms with the fact that God isn't
real. This was all a fairy tale. And we're going to become much more grounded, scientifically
oriented people. I don't think so anymore. I think what we're going to, what we're going to see in
the age of artificial intelligence and quantum computing and space travel.
are new forms of religion, and they're going to run the gamut from incredibly sinister and mystified and weird and fascistic, cultish, and all the way over to, like, maybe new forms and deeper depths of human spirituality and conscious experience, you know, developing and consciously cultivating some of the better virtues of human nature, including, you know, universal love, etc.
So far from thinking religion is going away, I think it's going to become more diverse.
I think it's going to continue being central to human beings.
And I think the people that trick themselves into thinking, they have no rituals, they worship
nothing, there is no God, they have no religious impulse, I think are the most susceptible
to the sorts of cults and scams and nonsense that we see in our modern world.
And that should make you at least take pause and think critically about these things.
Yeah, no, absolutely.
And this is like, I was just commenting this to my partner the other day, right?
is like, this tragic thing is that Nietzsche's right. God is dead, right?
And we have killed him. I think, yeah, Nietzsche is completely correct on that question. And then
we just replaced him with all of the worst impulses repackaged. Right. It really is wild.
I think the death of God as like the sociological phenomenon almost is undeniable. Right.
Like Western society did indeed commit deicide, right? And as Nietzsche predicted, we then stood there and said,
what do we do now, right? And we failed to answer that question. I think Nietzsche also fails to answer
that question. Absolutely. But, you know, society on the whole, I think, has dropped the ball. And that's
what I was saying is like, yeah, this Silicon Valley shit is still life denial, right? It's still
nihilism. It's still refusing to step up to the plate in the wake of what we've done, right?
Which is really the challenge that Nietzsche poses for us, which, you know, I can only put forward what
I think solutions look like, which again, I think may mean dealing with tradition and religion,
but dealing with it consciously, right? Being honest about ourselves, about its limits, what it can
do, and engaging with tradition critically. I think that can be an important part of it. And I sure
prefer that to the techno-utopian bullshit, obviously, right? And again, I'm not going to pretend
this isn't at odds with, like, orthodox Leninism. It definitely is. There are, like, real debates to be
had here. There's a whole historical debate within the Bolsheviks after the revolution around
precisely these questions, actually. So we're not treading new ground necessarily. But, you know,
the impulse is happening. Young people are moving in that direction, whether we like it or not.
And I think that is largely in a reactionary manner. So we need to ask these questions and wrestle
with them in some way because, again, an uncritical embrace of these things is kind of what the
status quo is at the moment. Yeah, absolutely. And, you know, we can disagree with orthodoxy.
Like, we're supposed to be thinking in our real time, and, you know, there's nothing wrong with that. And the sort of person who insists that you must never deviate at all is the sort who, like Marx says, is putting on the costumes of the past, fetishizing these ideas, copying and pasting instead of thinking critically in real time. We're in a new era. We're in a new period of time. There are new issues to wrestle with. And so if you're going to disagree with orthodoxy, at least know why, understand it, have good reasons for it. Engage with the thing in the first place. And I think, you know, hopefully you and I contribute to at least thinking through those things. And,
importantly, the way I want to end this conversation is, you know, you can listen to this
conversation you can agree with some things you can disagree with some things I was sort of thinking
to myself yesterday like how we start hating each other for disagreeing on things and I started
laughing out loud just thinking in my head like how silly and childish it is to be like they disagree
with me about like Stalin and I fucking hate them you know like they're the worst person ever like
they're they're suspect on every level their character is flawed and I'm like are we children or
can we think critically
and you know when i when i put out something that somebody disagrees with i think most people
quietly just wrestle with that with those disagreements yeah why do i disagree with that it's
helping me to hear this perspective because even if i disagree with it it's nice to be able to
you know exchange words and understand exactly why i disagree and hear the best articulation
of something i disagree with and understand why i you know disagree with it or think through it
or vice versa why you do agree with something um and that that's what's very generative and that's what i
think you and I have always, that's the ethos we've tried to promote because we don't have
the delusion that we have all the answers. We don't have the delusion that the things that we believe
right here, right now are the things we're going to believe for all time. We're not dogmatist.
We're not weirdos. We're just human beings flawed and fragile who make mistakes, thinking
through these issues as best we can. And the point for the listener is to generate your own critical
thought, not to agree or disagree with us as such, but to understand why you agree or disagree with us
and to wrestle with some of these questions.
And that's the whole point of literally everything I do.
And that's the whole sort of approach I take to Rev Left into all of this stuff.
And yeah, some people aren't going to like that.
It really bothers some people for some reason.
But I think most people understand that this is maturity.
This is what adults do.
You know, we think through things.
We wrestle with complex stuff.
We disagree about things.
And we can come together as friends and as comrades and disagree about things in a constructive way instead of a destructive
way. And hopefully Allison and I help advance that ethos among the socialist left. Yeah, that's
absolutely the hope. I think that's so important to say. Like, the idea of anyone even just agreeing
with us without wrestling with it is disturbing to me, right? On the same level as disagreement without
wrestling with it. You know, we're just two people. We are not authorities here. And hopefully, you know,
you treat what we have to say as a jumping off point, right? Not as the final statement. Absolutely. And
all I can say, my favorite episodes are these sorts of episodes with Allison, where we just
bounce ideas off each other and wrestle with stuff. It's so fun. And I know, I know for a fact,
our listeners really love these, these Red Menace episodes. So we're going to keep them coming.
Thank you to everybody who supports the show. We have this idea of trying to tackle the German
revolution soon on Red Menace. We're hesitant to make exact proclamations about when that date will be,
but it's something we're working on in the background. So that's going to come. And maybe
Allison and I will touch on something else in between.
But we'll come back. We'll have more episodes.
Thank you to everybody who listens and supports the show.
Love and solidarity.