Tech Won't Save Us - The Religious Foundations of Transhumanism w/ Meghan O’Gieblyn
Episode Date: April 25, 2024

Paris Marx is joined by Meghan O'Gieblyn to discuss parallels between transhumanism and Christian narratives of resurrection, despite the fact that many transhumanists identify as staunch atheists. Meghan O'Gieblyn is an advice columnist at Wired and the author of God, Human, Animal, Machine.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Eric Wickham. Transcripts are by Brigitte Pawliw-Fry.

Also mentioned in this episode:
Meghan wrote about transhumanism and religion for n+1.
Paris wrote about the religion of techno-optimism in Disconnect.
Amy Kurzweil wrote about her father's chatbot of his own father.
In China, AI "deathbots" are being used to help people grieve.
Richard Dawkins now identifies as a "cultural Christian" (not Catholic, as Paris mistakenly said in the episode).
Transcript
To a certain extent, I think the whole rhetoric about AI rests on faith, right?
On this idea of like, just trust us.
We're the smartest guys in the room.
We're going to do this.
We're going to deliver.
And like, what are you going to deliver?
Nobody can even articulate what it is that we're trying to solve.

Hello and welcome to Tech Won't Save Us, made in partnership with The Nation magazine.
I'm your host, Paris Marx.
And before we get into this week's episode, just a reminder that we are in the final stretch
of our membership drive for the fourth birthday of the podcast, which, of course, we celebrate this month.
Our goal is 200 new or upgraded supporters so that we can make a special series digging into the AI hype of the past year or so, the data centers that are required to power all of these AI tools, and the growing backlash to them that is happening around the world due to water use, energy use,
questions of control. And that gets to the bigger question of whether we even need this much
computing power in the first place in order to build a better world for all of us instead of
just padding the corporate profits of these major tech companies. We're still a ways from our goal
right now. So anyone who signs up can, of course, help us hit it and help us to make the series. If you do decide to do that, you'll get some stickers in the mail, a shout out,
some occasional premium episodes that we put together, including a bunch based on last year's
special series that we did on Elon Musk called Elon Musk Unmasked. And of course, it just helps
us to keep making the show, to keep it sustainable. So if you do enjoy the interviews that we do on
Tech Won't Save Us, the critical perspectives that we provide you, make sure to go to patreon.com
slash tech won't save us where you can help us hit our goal. Now this week's episode is with
Meghan O'Gieblyn. Meghan is an advice columnist at Wired and the author of God, Human, Animal, Machine. I came across Meghan's work when I read an essay that she wrote a number of years ago now
on transhumanism and religion.
I found it absolutely fascinating. And given the moment that we're in where we have these
AI companies pushing this notion of artificial general intelligence, these ideas of mind
uploading, Elon Musk saying that he's going to have these implants for your brain that will
eventually allow you to transfer your thoughts and consciousness onto a computer,
and not to mention these tech billionaires who are still trying to live forever and see
technology as a means of doing so, and ultimately hope to replace the physical body with some sort
of digital computational alternative. So given all of that, I thought it would be great to have
Meghan on the show to talk about transhumanism, but also the very long history of religious ideas that bleed into the transhumanist
thought of today, and that many of the leaders of this movement will not acknowledge are actually
there, right? Because as she explains, transhumanism doesn't just recreate these ideas of resurrection through a technological means instead of a spiritual one; it even recreates Christian ideas of history and how history has evolved, but through this more technological and transhumanist lens. Personally, I love these kinds
of conversations because I think it helps to demystify these big ideas that we often get from
the tech industry,
where they act like they're the first people who have ever thought of some of these things.
But actually, when you dig into the history, you can see that they're just recreating ideas that have been around for a long time, but expressing them in different ways that make more sense for
their particular ideologies or their interests. So as we have this new push for techno-optimism
and the growing power of Silicon
Valley, I think that these sorts of insights are important for us to have so that we can properly
understand where these ideas are coming from, the faith that is really inherent in them, even though
the people in the tech industry often wouldn't identify as religious, and how we need to have
a skeptical view when they're telling us what their potential future
could look like and what they want us to achieve. So with that said, I hope you enjoy this
conversation with Meghan O'Gieblyn. And of course, if you want to help us hit our goal so that we
can make this special series on AI and data centers, you can join supporters like Joseph
in Vancouver, C.W. McGregor from Montreal, Raghav Inigyo in Madrid, Yorel in Umeå, Sweden, and Alf from Oslo in Norway
by going to patreon.com slash techwontsaveus where you can become a supporter as well.
Thanks so much and enjoy this week's conversation.
Meghan, welcome to Tech Won't Save Us.
Thanks so much for having me.
I'm really excited to chat with you.
You had this essay published in n+1 quite a long time ago now, but that is, you know, rather newish to me, that digs into
transhumanism and its relationship to religion. And of course, you have a book, God, Human,
Animal, Machine, that I have had the pleasure to read as well. And I think it deals with these
really important topics that are really coming back in this moment as we hear the talk about AGI,
but also I think a real distinct shift in the way that
Silicon Valley approaches some of these questions. And so that's why I really wanted to have you on
the show. And I think just to get into this, can you explain to listeners what transhumanism and
the singularity actually are, like what those concepts mean? Because people might have a kind
of general idea, but there might be some specifics that they haven't kind of caught.

Yeah, so transhumanism is typically traced back to this sort of niche subculture of West Coast futurism that evolved, I guess, in the 80s and 90s.
And it was primarily a bunch of tech industry people who were interested in how technology could eventually help humans sort of transcend over into the next phase of evolution.
So they're really interested in nanotechnology and cryogenics and all of these sort of very speculative technologies.
They communicated largely via mailing lists in the beginning. And I think probably the point at which that idea reached the mainstream was with
Ray Kurzweil's book, The Age of Spiritual Machines, which was published, I believe, in 1999.
Kurzweil really popularized this idea for a larger audience. And his version of transhumanism,
which I think has sort of become the most well-known version of it. I mean, he sets out this whole history of evolution through the lens of information.
And he's basically, you know, information emerged with the Big Bang. And then it became more
complex as, you know, plants and animals emerged, and then human minds came about. And this was this
much more complex form of information processing. And he believed that this process was exponential. It was happening at an accelerating rate, especially now that, you know, computational power is doubling, I think every two years.
And that eventually we were going to completely merge our minds with machines and become,
he called it post-human basically. So he believed that we were currently transhuman because we're
in the process of, you know, aiding and enhancing our intelligence and our power as humans through
technology. And once the singularity happened, which was this intelligence explosion, we were basically going to be post-human. Yeah, I mean,
it was really just this work of, I call it a work of secular eschatology. It has a very sort of
transcendent religious arc to it, this idea that all of history is moving toward this moment of
final transcendence.

Yeah, that's great. And I want to come back to that religious piece in
just a second. But you're talking about this emerging kind of in the 80s and 90s and Kurzweil's
book really popularizing it in 99, which of course was the peak of the dot-com boom as well. I guess
it would not be surprising that someone like Kurzweil is kind of dreaming up and publishing
this idea of the history of humanity being this history of
evolution being related to information and information becoming more complex over time
and leading to more complex intelligences at the same time as computers are becoming popularized
and the internet is becoming more common. Like it seems like there's a clear relationship between
both of these things. Would that be right to observe that?
Yeah, absolutely.
And I mean, I think, you know, anyone who remembers that era of sort of the emergence
of the internet, I was very young at the time, but I mean, there was this very utopian strain
of rhetoric about the fact that we were all going to be globally connected.
It was going to enhance productivity.
It was going to democratize the world. And I think Kurzweil and other transhumanists, they were really sort of the most,
maybe the highest form of spiritualizing that idea. You know, basically the whole transhumanist
ideology rests on the idea that information is sacred. That sort of patterns of information
are what's going to outlast us.
You know, he was really interested in mind uploading, this idea that all of our neural activity is just patterns that we can transfer to a computer and we'll be able to live forever.
And, you know, if you're a believer in that ideology, it makes sense that you
share as much data as possible, that you contribute as much to this future
through these technologies
that we've since learned have much more mundane uses, you know, to collect masses of user
data and, you know, further the advertising, you know, all of these sort of less transcendent
uses that this technology has been put toward.
Yeah, I think that makes perfect sense.
And I was really struck
as I was reading in your essay in your book about these ideas that Kurzweil put forward in the book.
And for me, a lot of those things were new to me when I started to read about long-termism.
And I was like, oh, you know, they want to colonize these other planets and have all these
post-humans in these kind of computer simulations. And this was kind of the first time I had encountered a lot of these things. And then to read in your work that all of these ideas kind
of contained within long-termism, you know, other than maybe some particular orientation toward them
and moral justification for why that should be pursued really kind of comes out of Kurzweil's
work. And I'm sure some of this transhumanist thought before that.
And I hadn't realized that those ideas were kind of decades old already at that point.
It's funny because when I was writing this essay about transhumanism and also when I was writing my book, which was around like, I don't know, 2019, 2020, I kept feeling like, oh man, these
ideas are so dated, you know, 'cause I encountered Kurzweil in the early 2000s and I was
like really obsessed with, you know, I was on the message boards and everything, hearing transhumanists
talk about all these technologies. And, you know, by the time we got into like the early 2020s,
it just felt like who really buys into this anymore. And then, yeah, a few years later,
all of a sudden I'm hearing about long-termism and I'm like, oh, this is just the same shit, basically, dressed up in a different name, but it's like the same people. It's these
like sort of rationalist bros, you know, like effective altruists. And it's funny because like
part of my book, I think was about how this ideology about the future keeps getting recycled
and it keeps appearing and reappearing throughout history. But I think I didn't expect it to come
back so soon in this other slightly different form.
It's at least good for making the writing even more relevant. But you talked about
how these things come back again, and obviously you mentioned religion and how you encountered
Kurzweil's book in the early 2000s. Do you want to talk to us about why, for a little while,
those transhumanist ideas were something that really resonated with you in part because of the religious foundations that they had,
even though that's often not acknowledged by the very transhumanists that espouse them.
Yeah, definitely. That was a big part of it for me. I was raised in a fundamentalist Christian
home. My parents are evangelicals. I was homeschooled as a child, was taught, you know, six-day creationism and went to Moody Bible Institute when I was 18 to study theology for a couple of years.
And I actually left that school after my second year because I had had like a faith crisis and was trying to question the whole Christian ideology.
And I was living in Chicago at the time.
You know, I was living for many years just sort of on my own and working and identified
as an atheist at that point.
I totally left the church.
And yeah, a friend gave me The Age of Spiritual Machines.
And I read it and had like my mind totally blown.
I mean, the book was a bestseller, but I think there wasn't like a lot of conversation about
those technologies at the time.
They were very futuristic.
Again, he was talking about mind uploading, nanotechnology, all this stuff that wasn't
really part of the mainstream conversation.
And I read the book.
I was reading, you know, a lot of these message boards online among transhumanists.
And I think what really appealed to me, you know, it took me a while to
realize this, but like, it was very much this like millenarian Christian narrative that was
very familiar to me. You know, I grew up thinking like we're living in the end times, Christ is
going to return at any point, you know, we're going to be raptured. The dead are going to be
resurrected. We're going to have these glorious new bodies and live in heaven forever. And this was essentially what Kurzweil was arguing, but he, you know, without
any sort of appeal to the metaphysics or the supernatural. In fact, I think the reason why
it took me so long to like realize the parallels between them is that all of the transhumanists
were also like vehement atheists and rationalists. And even like the histories of the movement, most of the people who are writing about the
origins of transhumanism referred to Nick Bostrom's brief history of the movement,
where he very much traced it back to the enlightenment and, you know, these very
humanistic secular ideas. And so it didn't seem as though there was a connection there.
And it really baffled me for a while though, too, because I was like, why are, you know,
for example, like just getting into like the nitty gritty of these conversations about,
for example, mind uploading, like there's this problem about continuity of identity,
right? If you were to, for example, transfer all of your
neural patterns onto a supercomputer, or if you're to, you know, even to replace like every part of
your brain with a neural implant, is your consciousness still going to be there afterwards?
Are you still going to be you? And these were like basically the same questions that the early
church fathers were debating in the third and fourth century, because for Christians, at least, the body was a really important part of the afterlife,
you know, which was sort of what distinguished, I think, Orthodox Christianity from Gnosticism,
which thought that the afterlife was just going to be spiritual. We're just going to be
souls, disembodied. So there's this problem in early Christianity about, like, well, you know, bodies die and they decay. So what happens? How are all of those parts going to be resurrected? And how is the person going to be the same person in heaven?

And the transhumanists at the time were using the same metaphors that Kurzweil uses in The Age of Spiritual Machines: this idea that consciousness is a pattern. And he said it's like the pattern that you see in ripples of water in a river. The individual water molecules are always different, but the patterns are the same. And that's basically what consciousness is, and that's why it can persist across substrates. And this was the very same metaphor that Origen of Alexandria used
to talk about the Christian resurrection, where he said, basically, yeah, our soul is a pattern
and our body is going to die and decompose, but basically the pattern is going to persist.
And this is sort of how he reconciled Christianity with Greek thought.
So these transhumanists are not reading, obviously, the early church fathers. How did
these same metaphors and these same ideas keep recurring? So part of the fun of writing that
essay, which I didn't write until much later for n+1, was just reading about this strain of
Christian eschatology that I didn't know anything about
because I had studied fundamentalist theology, which was very narrow. But there have actually
been Christians throughout different points of history that have believed that resurrection
could happen through science and technology, basically. Going back to medieval alchemists
who were trying to create an elixir of life that was going to give the person who took the potion a resurrected body, through Russian cosmism. I wrote a little bit about this idea and the theology of Teilhard de Chardin. Yeah, there is this lineage, basically. And I don't know how
theology of Teilhard de Chardin. Yeah, there is this lineage, basically. And I don't know how
in detail you want to get here, but there is a way in which it connects directly
to modern transhumanism. Like they basically took
these ideas from Christian theology, stripped them of all the metaphysics and created this
sort of religion of technology out of it. Yeah. I did want to get into that because I find that
absolutely fascinating. Right. And one of the things that really stood out to me, and I can't
remember if it was in a talk that you gave or in the book, but you were basically talking about how, you know, there is this clear history that you can see,
but when the transhumanists talk about, you know, the history of the ideas that they're drawing
from, as you say, they talk about the enlightenment or they go back to say, Julian Huxley mentioning
this for the first time, but there's this whole kind of religious history discussing all of these
ideas that they leave out of those discussions. And I think it was in a talk you gave in Sweden, you basically said that
in part, that seems like a deliberate act, right? To make sure that this history is not known and
is left out of the stories that they tell because they don't want this kind of relationship with
religion to be part of the thing that they are engaged with and that they are talking about
again, as you say, because a lot of them identify as rationalists and atheists and don't want to acknowledge that they are engaging in these kinds of spiritual or religious ideas. Can you talk about that aspect of it? And, I guess, rather than Huxley being the first person who talks about transhumanism, how this is something that exists before that as well.
Yeah. So I can't remember if it was Bostrom, but, I mean, it's the widely published origin, where people say the first use of the word transhumanism came, I think, in 1957, with Huxley using it in a talk. And I had known that actually the first use of transhuman in English was in a translation of Dante's Divine Comedy. It's in a passage where he's describing the resurrection into the heavenly body. And he's trying to emphasize the fact that this is a singular moment, that nothing like this has ever happened before. And so to emphasize that, he makes up an entirely new word in Italian, which is trasumanar, translated basically in English as "beyond the human." And the line, I think, is "words cannot tell of that transhuman change." And so I learned this actually, 'cause I was talking to a bunch of
Christian transhumanists, which is a whole nother weird subculture, but they really latched onto
this as, like, transhumanism has this Christian origin. And I was like, okay, well, how did this word appear? I think it was 1814 when this translation of Dante came into English.
And how did it get to Huxley from there? Pierre Teilhard de Chardin was a French priest and paleontologist who was really interested in evolution and in sort of merging evolution with Christian theology. He wrote a book, regarded by the Catholic Church as heretical, called The Future of Man, where he laid out this vision of the future,
where he imagined that technology would help humans reach the next level of evolution and
actually bring about the resurrection prophesied in the Bible. And he had this image of like,
basically all of the world is becoming more and more connected with technology.
And, you know, he's writing in, like, the 40s and 50s about these ideas. So he's talking about radio and television,
but he somehow saw mass communication as making our minds start to become more connected and
more merged. And he believed that this was eventually going to create something called
the noosphere, which is basically, like a lot of people have said, a really prescient, you know, idea of the internet, where human minds are going to be connected. And then
this was going to lead to an intelligence explosion, which he called the Omega point.
And at that point, humanity was basically going to like break through the time space
barrier and become divine. And this was going to be like, basically the resurrection.
This is how we were going to become gods. And that idea, obviously, I mean, the Omega point is basically just Kurzweil's singularity, but it has very clear religious connotations. He used the word transhuman, and he probably got it from Dante, I imagine.
And he talked about how that stage after we sort of become divine, we're going to become
transhuman.
And Teilhard was friends with Julian Huxley and, you know, they exchanged a lot of ideas.
And so the thought seems to be that Huxley got that term from the priest, but again,
just totally stripped it of its religious and theological meaning and created this secular idea of transcendence. And then from there, I think it's a pretty clear
lineage to get to Kurzweil and contemporary transhumanists and long-termists.
It's so cool to see that history and see the relationships to it. And part of what made me
really interested in the way that you were telling this and these connections that you were making was because I also read David Noble's book, The Religion of
Technology, which goes into a lot of other aspects of what you were talking about, where, you know,
there were these Christian theologians, or Christians who really believed that science
and technology would be, you know, a way to achieve these kind of Christian prophecies or
stories or, you know,
however you want to talk about them. I was wondering if you could talk about that a bit more specifically in relation to transhumanism and, you know, the real similarities that exist
between the stories that they tell about what our future is going to look like and how these
technologies are going to develop and how they are going to allow us to kind of transcend
this human body. And also,
even with Kurzweil talking about this kind of progression through history and how that relates
to the way that this is told in, you know, a lot of Christian theology. And of course, you know,
how those things relate to one another and how entirely similar they are, other than obviously
looking at different means of achieving something that seems quite similar.

Yeah. I mean, there's many different versions of the Christian historical narrative, I guess,
in terms of the end times and what that's going to look like. The version I was taught in the
fundamentalist college that I went to was called premillennial dispensationalism. I mean, it's a very pessimistic view of history and of the future.
The idea is basically that God reveals himself in distinct dispensations across history and that
there's sort of different ways in which we experience God throughout history, which is a
little bit abstract. But the point is that eventually all of this, everything that's
happened in the past and everything that's happening in the present is leading to this redemptive narrative that's going to happen.
And there was a big kind of split in American Christianity around, I would say the turn of the 20th century, if I'm getting the dates correct.
And then I think it became more pronounced after the world wars, but you know, there was this sort of split between
Christians who had a very pessimistic view of the future, which is where my family and sort of the tradition I was raised in come from: this premillennial view that we're headed toward this apocalypse and tribulation and, you know, God is going to destroy the world.
There's nothing we can do about it.
And eventually, like, we're going to survive because we're the elect, right? We're going to
get raptured and get to go to heaven. But it was basically this really dark idea of, you know,
history is on this downward spiral. And then there's also this post-millennial tradition of
Christians that you see in like the social gospel movement that's more concerned with making life better
here on earth. And this idea that we can sort of create, maybe not heaven, but the sort of like
millenarian utopia here on earth, if we try to live out the gospel and actually help the poor
and become socially engaged. So that's a much more optimistic view of history. And those ideas have always been
kind of in conflict. And I would say that like Kurzweil and people who are sort of techno
optimists feel to me like a very, I mean, in a weird way, sort of like a post-millennial view
in that they believe technology is going to make things better. That's at least the,
they at least pay lip service to that idea, right? It's going to extend our lives. It's going to make things better. That's at least the, they at least pay lip service to that idea,
right? It's going to extend our lives. It's going to take away suffering on earth. It's
going to create medical advances. It's going to solve all of our problems. And then, you know,
there's this other very, you know, dark side of the debate about the future and AI, which, you know, is existential risk, which feels to me more like the
pessimistic apocalyptic view. So yeah, it's interesting how those ideologies are playing
out in that similar, in religious spaces and in secular spaces, it just feels like the same
conversations to me. And it also feels like those two worldviews really feed off of one another. And at times,
I think prevent us from having more practical conversations about like the real world harms
that the technologies are doing.

Yeah, definitely. We're always focused on these,
you know, big, as you're saying, existential risks, especially when we're talking about AI
in the past couple of years, where the focus is, is the AI going to like, you know, be this kind of wonderful future for us? Or is it going to
end the world rather than talking about, okay, how are these technologies being implemented now?
And what are the effects of them? And what should we be doing to try to mitigate the negative
impacts of that? But that gets distracted by the much bigger picture of, are the AIs going to kill
us or enslave us or something like that, right?
Absolutely, or become God or whatever.
Yeah. I know I found this fascinating that Kurzweil actually reached out to you after
you wrote that N plus one essay. Did you get more insight into how he thinks about this and
his approach to it in that exchange? Or what more did you learn about how he sees transhumanism through that?

Yeah, it was so bizarre. I had this essay appear in n+1, which is like this, you know, fairly small lit mag. And then it did get picked up in The Guardian after that. So I guess it sort of reached a wider audience. But yeah,
I was just checking my email one day and I got an email from Ray Kurzweil. I was like, surely this is a joke,
but it was, it was from the actual Ray Kurzweil. He had read the article. He said he really liked
it. And yeah, it was very strange. He was talking a lot about metaphor, which is odd, 'cause I had been writing my book by that point, and I was thinking a lot about the question of metaphor and technological metaphors.
And he said, you know, anytime we're talking about something transcendent, we have to use metaphor because it's a reality that we can't access. We would have to transcend time and
energy in order to understand that. And our human understanding is limited.
And he said, basically, Christians were, you know, and other religious people are using pre-modern metaphors to describe the future. And I'm using technological metaphors. But basically, you know, I don't think he said it exactly, but the implication is we're talking about the same thing. We're just using different language, which to me was really surprising and interesting. It was something that I had, you know, sort of intuited from writing about this history.
And it was also a little bit eerie.
Given my religious background, I started to get, even doing this research years later,
like a little bit conspiratorial where it's like, how is it that these same ideas keep coming up?
Is it true that these biblical prophets and early church fathers somehow had this premonition
of what was going to happen in the future through the technology and they just didn't
have the language for it? But, if I'm understanding him correctly, I think that's what he was saying, more or less, right? That we're all just sort of trying to describe something that's going to happen in the future that we don't understand yet.
Yeah. And then he offered to send me some of his books in the mail and he sent me a signed copy of
The Age of Spiritual Machines, which was kind of cool to get. But yeah, that was the only correspondence I've had with him.
Do you still have the book?
Yeah, I do. And he inscribed it. He said, "Meghan, enjoy the age of spiritual machines," but it wasn't, like, capitalized or underlined or anything. So it was just, you know, you could read it and interpret it in different ways.
It's fascinating. I love that when I was reading through the book,
but I think that point around metaphor is actually really fascinating, right? And you dig into this a
lot more in your book. Because I feel like time and again, we encounter these metaphors for many
things in the world, but in particular, I think with these sorts of discussions for how the mind
works or how the body works, where they're often related to technology or, you know,
the things that, you know, are really important to how we experience the world, right? If you think
back to like the industrial revolution, when we often thought of how we worked as being kind of
like a machine, right? And kind of the cogs that kind of worked within us. And now how we see these
metaphors that treat the human or treat the mind as though it's a computer or as
though it's some sort of digital technology. And we work in kind of a similar way to that,
which of course feeds into these transhumanist ideas. I wonder what you make of those metaphors
and how they affect how we see ourselves and the world around us.
Yeah. I mean, I was interested in that question of like, where do we get this idea?
And, you know, this is something I think everybody just intuitively assumes today is that their mind is a machine in some way or a computer. And we defer to it in everyday language. You know,
if you say, oh, I have to process something, that's using metaphorical machine language.
And like you said, you know, these are very old, these metaphors. If you want to go back to ancient Greece, you have this idea that the soul is like a
chariot.
And then all throughout the Industrial Revolution, you have these sort of mechanistic metaphors
for the body or the mind.
The idea of the mind as a computer really emerged in the late 1940s, early 50s with
the emergence of neural networks, which were based on the brain.
And there was this idea that we could create these, they were called at the time,
Turing machines that were sort of operating in the same way that our minds were. And the thing
with any type of metaphor is it like, it goes kind of both ways. So then shortly after that,
there was this idea that, yeah, the mind is also, you know, computational.
And in some of the early theories, it was really like this sort of humorous idea that the
mind functioned according to like, you know, binary logic and things like this, but it's a useful
metaphor, obviously. I mean, all of like cognitive science and AI research has grown out of it.
And part of, I think the appeal of it initially was that, you know, if you think
about the mind as a machine, you can get away without having to talk about consciousness or the soul
or these sort of like, you know, subjective experience basically, which is a hard thing to
talk about from a third person point of view of science. And, you know, I think that what's
interesting to me about it though, is that there's a point at which, you know, because there's sort of like a dualism built into computers, we have like software and hardware, you get things like mind uploading, like, oh, is it
possible if my mind is just information, can that somehow be extracted from my body and, you know,
travel to this other kind of substrate? The irony for me when I was writing about this is like this
metaphor emerged as a way to get around metaphysics and have this like fiercely materialistic idea of the mind.
And somehow the metaphysics snuck back in there, where, you know, if you look at Kurzweil
or sort of any of these people who are interested in these futuristic technologies, it's almost like
information has become a metaphor for the soul. It's something that's going to persist after we
die. It's indestructible. It's immortal.
Yeah. It's very strange to me how that happens, but also like very understandable. I think that
dualism, I mean, that's like a cognitive bias that's very deep in us. Like it's in children.
It's like, you know, anthropologists have studied it in cultures all over the world. And I think
that it's natural that we extend that bias when we're
thinking about our technologies also. Do you think that that metaphor that, you know, the mind or the
body is like a machine or like a computer, do you think it leads us to be more open to ideas like
transhumanism or this idea that a brain or a human mind can be recreated on a computer when it allows us to, I guess, not think
so much about the biological barriers to that. Because if we think that the brain computes and
processes just like a computer, then it's easier to believe that, okay, maybe we're going to
recreate the mind on a computer or an AI is going to reach the level of human intelligence. And
we're going to be able
to kind of stick some computer hardware on the back of our brain and transfer it over to a machine.
Do you think that it leads us to be more open to these things? And do you think in part,
it kind of misleads us into believing that something like this is possible at all?
Definitely. Yeah. I mean, I think this is the whole idea of like functionalism, which is just that
it doesn't really matter what the material is so long as the parts are doing the same work,
right? If you have a biological brain or if you have a computer, you could presumably have
consciousness emerge out of silicon the same way as from a human brain. And I think most people
like intuitively feel like there's something that's missed there,
but I think it also ignores the fact that we all evolved together through millions of years of
evolution. And whatever is evolving in machines is a different type of intelligence.
It's not anything like the sort of experience that we have of the world.
In fact, there's not really any evidence that there's going to be any kind of first person
experience in machines. And, you know, that's another thing that comes up in a lot of these
conversations about mind uploading, which is like, you know, if you start reading, it sounds like
great, like, oh yeah, we'll be able to live forever in the cloud. But if you start reading between the lines, it's like, well, you know,
we can't really guarantee that there's going to be any sort of subjective experience. It could
just be this like sort of clone that looks and talks like you, and there's not any experience
there, which is like, okay, well, what is the point of, you know, people who want to live
forever, they want to experience that they don't just want to have some avatar or clone of themselves that's persisting after they die. Maybe some people do.
That was a moment of disenchantment for me when I was really into transhumanism. I think that was
the appeal for me. It's like, oh yeah, this is a way to live forever. And it's like, well,
the people who are writing about this don't believe that machines have consciousness. A lot
of them don't believe that humans have consciousness, really. I mean, that's sort of a superstitious idea. There's not really a clear
way to talk about it. But yeah, I think that, I don't know, what is the point of living forever
if you're not going to experience it? Yeah, I don't know if living out my life in the cloud
sounds so appealing to me. Yeah, me neither. I was really fascinated to learn though
that a lot of Kurzweil's interest in this seems to come from his desire to, you know, be able to
kind of recreate his father as, you know, a digital agent or an AI being or whatever you want to call
it. And I was particularly fascinated because I'd never heard this before, that he has collected a bunch of writings and things like that that his father
had, in the hopes that one day they'll be able to be scanned and like some AI chatbot or something of
his father will be able to be recreated. Do you think that that is kind of part of what motivates
his interest in these sorts of ideas? Yeah, he's been very, I think, transparent, actually, about the fact that he has a very
personal motivation for this. And yeah, there was a documentary about him many years ago called
Transcendent Man, where he, you know, takes the filmmakers into this storage unit that he has,
where he's kept all of his father's things. His father was a classical musician, so he has all of his
music, he has his letters, he has his diaries and a lot of personal writing. Yeah, he actually did
at one point use this to create a chatbot of his father. His daughter, actually, Amy Kurzweil,
wrote a book about her dad called Artificial. It's a graphic memoir,
which is actually really excellent. And she talks about sort of interacting with this chatbot
version of her grandfather. And we've seen, you know, a lot of sort of speculative, you know,
startups that are claiming to be able to resurrect in chatbot form people who have died so that you can talk to them, talk to some version
of them after they're gone. To me, and I think to most people right now, it doesn't really seem
especially appealing because I mean, part of the idea of communicating with the dead isn't just to
like get information about what they would say to you. It's about making some sort of interpersonal
connection that if the person isn't actually there, I don't know what sort of like emotional or spiritual benefit you're getting
from that. But I definitely think it's something that we'll see more in the future. There's
probably a market for it of some sort. Yeah, I definitely think so. And I think
we're already seeing it. I was reading a story the other day about how in China, I believe they're already kind
of making chatbots or something like that of the dead.
And, you know, I'm sure it's happening here as well to a certain degree too.
You know, we're already reading about AI girlfriends.
So I'm sure AI dead people is something that some companies are working on.
Next frontier.
Yeah.
Yeah.
I was interested as well.
You know, you were talking about how you became an
atheist after this evangelical upbringing. I would imagine in the time that you had done so,
you were probably into some of the new atheist kind of writers and that movement as well. Would
that be fair to say? Yeah. I went through a phase where I was reading Dawkins and Hitchens and Sam
Harris. And yeah, it was part of my deconversion journey.
Yeah. I was right along with you. I became an atheist in the mid 2000s. And so it was right at the time when that was kind of in full steam. And I remember watching Bill Maher's documentary
and, you know, Religulous, I think it was called, and kind of being all for it, like embarrassing
to admit today, but, you know, reading the Dawkins and the Hitchens and
all that sort of stuff too. And I was interested in, because I feel like this movement kind of
happened at, you know, a particular moment in time, but we've seen even as, you know, new atheism as
a movement has kind of faded off, it feels like a lot of those figures have continued along and now
are kind of key parts of this right-wing movement
that is increasingly popular in the tech industry as well. You know, people like Sam Harris and
Dawkins comes up as well. And even though, you know, it's not as kind of explicitly
championing atheism in the way that kind of new atheism was, it feels like a lot of these ideas
kind of still stick around and these figures have become key
figures in this kind of anti-woke movement or whatever you want to call it. I wonder if you
have any thoughts on how that has developed and how those ideas kind of seemed like a foundation
for some of what has come after. Yeah, I admittedly have not followed them as closely.
I know that they're like touchstones in the sort of rationalist community,
but it's strange. I don't know if this is actually true. My parents recently told me
that Dawkins is now Catholic. Is that true? Or that he's like a cultural Catholic?
Cultural Catholic. Yeah. There was an interview recently where he was talking about how he's not
a Catholic himself. He doesn't believe in God, of course. But yeah, he was really interested in
cultural Catholicism and it would be very disappointing to him if the churches in England went away and he still
loves to go to them for the cultural experience, just not the religious experience. And he was
basically making the argument that like, if churches were replaced with mosques, it would
be terrible to him, right? Because picking up on the Islamophobia that was always kind of there.
Right, which was with Sam Harris too, I remember. Yeah. I mean, it, to me, it's just, there's a cynical part of me that just says that they're
just sort of trying to capitalize on the way that public sentiment on the right, especially has
shifted in the last few years. And I do think it reveals some sort of bad faith about their
whole project. I think, from the beginning, I kind of became disenchanted with them even before that. Well, I mean, part of it, I think, is maybe because I can see how, you know, even people who are very militantly atheist or rationalist, you know, that doesn't inoculate you against these really basic human desires to live forever, you know, or, you know, to find
some sort of like technological transcendence. If anything, I think it sort of puts blinders
on you in a way, because again, a lot of these ideas about technology in the future feel to me
like very much rooted in wishful thinking in a way that is precisely the
kind of wishful thinking that they accused religious people of back in the early 2000s.
No, I definitely agree with that. I was commenting, I think it would have been the end of
last year on a manifesto that Marc Andreessen had written, you know, the techno-optimism manifesto.
And it really stood out to me in that moment. You know, I feel like they've been kind of
increasingly pulling on faith in order to drive their technological project. But in his manifesto,
he was making all of these claims and basically saying time and again, we believe, we believe,
we believe. There is no tangible foundation to this belief. It was just, we think that this
technology is going to change the world in all these positive ways that
I, Marc Andreessen, am setting out, and you should all have faith that we can achieve this. And it
very much felt like this kind of religious argument, even though I'm sure Marc Andreessen
would say that he's an atheist and he doesn't believe in all that and whatever, but it still
seemed to be drawing on these very similar ways of arguing and ways of presenting this.
Yeah. Oh, that's really interesting. And to a certain extent, I think the whole rhetoric about
AI rests on faith, right? On this idea of like, just trust us. We're the smartest guys in the
room. We're going to do this. We're going to deliver. And like, what are you going to deliver?
Nobody can even articulate what it is that we're trying to solve. It's going to change everything. It's the future. You know, it's these abstractions that do
feel very much like religious rhetoric to me. And the people who believe in this feel to me also
like spiritual acolytes in a way, just with like the way in which they've just completely gone all
in and embraced this idea of the future. And I mean, Sam Altman, I think talks about the future in a way that's
like very Manichean, like, you know, this is, you know, the way history is going, people who are
on board are going to survive and people who are not, you know, are going to be left behind. He
said this a couple of years ago in a tweet, I think. And I mean, to me, it's really like this
idea of a spiritual elect that you see. That was the same thing like my family believed, which is that like we are going to, you know,
because we have honored God, the world is going to be destroyed, but we're going to
survive because we are the good ones.
And yeah, I think it seems like there's sort of a similar narrative there, which is like
we're on the side of progress.
We're on the side of the future and
people are going to fall by the wayside, but we're the ones who are going to make it into the next
stage of evolution. I feel like that's part of the reason why your writing really resonated with me,
right? You know, on the one hand, talking about kind of the circularity of these ideas and these
ideas coming back again and again, but also the fact that, you know, we kind of, as Western society,
have gone through this secularization. So we lost this ability to look up and say, okay, we're trusting in God,
you know, we're going to go to heaven. We have these religious stories that we tell ourselves
and what fills that void. It feels like in the tech industry, they've kind of built their own
kind of theology or religion of techno-optimism or whatever you want to call it, that gives them these narratives
that give their life meaning and allow them to feel that they're contributing to this bigger
project. I think we often joke about the cult of Elon Musk and the people who are really
behind him and just believe in whatever he says, but it feels like there's something broader in
Silicon Valley where, as you say, Sam Altman is drawing on this and especially the way
that he talks about the potential AGI and, you know, what this is going to be. It feels like
they are kind of, you know, building their own belief system, whether we want to call it religious
or whatever, in order to get their followers and their believers to stick behind them.
Yeah, absolutely. What was the phrase that Altman used when he was describing AGI,
like magic intelligence in the sky? I would believe that.
Yeah. I mean, I do feel like it's filling a vacuum that, you know, that we've seen so much
secularization and people, I think even people who are in, you know, institutional religion
today don't quite believe it as literally as they used to.
And there's something I think really appealing about a literal future that's going to enact
a lot of those promises.
And if you're going to make the case, which I have made, that these technological stories about
the future are a form of religious eschatology, it is like the crudest, most fundamentalist
version of that, which is again, this idea that like, we're going to live forever.
We're going to be saved.
Everyone else is going to die, you know?
And there's this whole other tradition of Christianity that I really respect and that
is not part of this at all, which is that like, we are fallen human beings.
We have limitations,
you know, and there's something beautiful about that. And like Christ came to earth to take human
form, to like take part in our suffering. And I think like the social gospel movement really
grew out of that. And that's something that is, I don't know, to me, it seems like that
could actually provide maybe a counterpoint,
maybe not necessarily a religious narrative, but just this idea of like finding something positive
in our human limitations and the fact that like, yeah, we're not going to live forever. We're going
to die. And there's something tragic and maybe beautiful about that. And to like strip away that
whole aspect of human experience, which I think so much of our history has been devoted to exploring,
just feels like this very sort of crude, like the most basically childish version of Judeo-Christian narratives that you can come up with.
I feel like based on what you're describing, you almost see that in some of the backlash to these ideas, right? That you have the Altman saying, we're going to build the AGI and, you know, the AI is
going to take care of everything and do all the jobs and whatever. And you have the Musk saying,
okay, we're going to go colonize another planet. And we really need to be focused on all this,
you know, the real long-termist ideology that we sacrifice in the present. You know, maybe we don't
pay attention to global poverty or we let climate change get worse than it would otherwise be because we need to be focused on this long-term future rather than addressing, you know, the here and now. And I feel like there's a growing number of people who say that makes no sense. You know, we should be caring for this planet and the people who are on it instead of going after your kind of wild technological fantasies, which sounds a lot more similar to the kind of social gospel
thing that you're talking about there.
Yeah, definitely.
I mean, long-termism, like when I first started reading about it, I think it, even more than
transhumanism, because they are ostensibly thinking about things like climate
change but also like kind of dismissing it, feels really similar to the pre-millennial
Christianity that I experienced growing up,
which was also, yeah, not interested in climate change. Who cares though? Like God's going to
destroy the world anyway, you know? And this idea of like, we're going to invest all of our
resources and all of our energy into these future human descendants who are not even going to be
human. They're going to be like digital beings, I think is the idea, right? That this like really extreme utilitarianism,
it does feel like it's a way to like escape historical responsibility, you know, to put
all of your energy into this like afterlife that you're not going to experience, but is,
you know, going to make you a good person somehow, and ignoring the really real and more urgent injustices and problems that we're living through. Nick Bostrom, one of the major kind of long-termist thinkers coming out of that rationalist and transhumanist tradition, wrote about this thing called the simulation hypothesis that people will probably
have heard Elon Musk talk about, right? That we all live in a simulation and whatnot. And in your
book, you talk about that in particular in relation to creationism, right? And this, again,
kind of these ideas of religion, you know, in a sense coming back where you're not only thinking
about how you're creating new humans or kind of uploading the mind or what have you, but actually
creating a whole new world that you have complete control over that you are like the God of. How do
you see that kind of simulation hypothesis and, you know, the way that Bostrom approaches it?
Yeah. I also was really obsessed with the simulation hypothesis for a long time. And it is,
it's a technological creation myth and it appeals to the same cognitive biases I think that we have
as humans where we tend to see everything as designed and everything is having a purpose and
a telos. And I think it makes a lot of sense to people right now for that reason. I think a lot
of, even just people that I'm friends with will casually
just be like, oh yeah, of course we're in a simulation.
That makes total sense.
And I think it's also appealing because you can think about an afterlife, right?
Maybe if we're just software, we're not going to just die and be done with ourselves.
Maybe we'll be extracted and put in another simulation at some point.
It also makes the world seem like it has meaning
and purpose that it was designed
by maybe some sort of benevolent engineer.
The funny thing is it doesn't really explain anything
in terms of where the world came from
because presumably whatever civilization created us,
where did they come from? It's just, it's like, you know, you can keep going back and back and back.
So it has definitely become very popular, thanks to Bostrom and a lot of other people who've written about it.
Yeah. And, you know, you even see the pop culture depictions of it, where, you know,
like that Black Mirror episode San Junipero, where these people basically die and then are able to kind of
live in the simulated world for as long as they want, I guess. And how these things tend to come
up time and again. I think to wind down our conversation, I wanted to ask you after looking
into this history, after looking into transhumanism and the metaphors that we have around technology and the mind and the
body and all these sorts of things that you have explored through your work over the past number
of years, I wonder, should this make us think differently about the technological stories
that were told and where this is all going? The big thing for me is just realizing how much of these projections about technology,
even when they're rooted in data and hard facts, I'm putting scare quotes on that,
at root, you know, come from a lot of wishful thinking and a lot of inherited
cultural narratives that seem to keep finding their way back into the stories that we tell
about the future. And, you know, I think the thing that I've seen through a few, you know, I'm old enough now,
I've seen a few cycles of technological utopia with the rise of the internet,
with the rise of social media, you know, this is going to, you know, topple autocratic regimes.
And there's always this sort of very utopian and I think also very spiritual
dimension to those stories.
The people who are telling them believe them to some degree, but they're also used to get us on board.
And again, to share our data, to accept these technologies as somehow predestined or foreordained, that these are, you know, this is where history is going. And if you don't believe in God and
you don't believe that there is, you know, a telos to history, you have to take responsibility for
the fact that like we are building these technologies. I mean, as humans, we are,
right? We have a choice. We're making these decisions. And I think the hardest thing for me
is just watching, you know, the people who are building
these technologies sort of treat them as though they're inevitable, that they're just the next
stage of evolution. And then also, you know, the people who, you know, I talk to who are like,
not necessarily thrilled about the technologies who just sort of complacently accept them because
this is the future, this is where everything's going. And it's like, no, we don't have to accept this fatalistic story. But if it's true that we're really directing our evolution
or directing technology, then we have choices to make. I think it's so important to recognize
these histories of these technologies and where so many of these ideas come from, especially in
this moment, because of the way that these people who rule the tech industry are using them and
are deploying them in order to, you know, try to carry out particular futures. And so that's why,
you know, I think your work is so important and why it was a real pleasure to have you on the
show today. So thanks so much, Meghan. Thanks so much for having me.
Meghan O'Gieblyn is an advice columnist at Wired and the author of God, Human, Animal, Machine.
Tech Won't Save Us is made in partnership with The Nation magazine and is hosted by me,
Paris Marx. Production is by Eric Wickham and transcripts are by Brigitte Pawliw-Fry.
Tech Won't Save Us relies on the support of listeners like you to keep providing critical
perspectives on the tech industry. You can join hundreds of other supporters by going to
patreon.com slash tech won't save us and making a pledge of your own. Thanks for listening and
make sure to come back next week. Thank you.