Truth Unites - I'm Studying A.I. and It's Scarier Than I Thought
Episode Date: June 2, 2025
Gavin Ortlund interacts with the A.I. projection given by Daniel Kokotajlo in his interview with Ross Douthat. See the original interview here: https://youtu.be/wNJJ9QUabkA?si=GoqieIaF39HPapRU
Truth Unites (https://truthunites.org) exists to promote gospel assurance through theological depth. Gavin Ortlund (PhD, Fuller Theological Seminary) is President of Truth Unites, Visiting Professor of Historical Theology at Phoenix Seminary, and Theologian-in-Residence at Immanuel Nashville.
SUPPORT:
Tax Deductible Support: https://truthunites.org/donate/
Patreon: https://www.patreon.com/truthunites
FOLLOW:
Website: https://truthunites.org/
Instagram: https://www.instagram.com/truth.unites/
Twitter: https://twitter.com/gavinortlund
Facebook: https://www.facebook.com/TruthUnitesPage/
Transcript
Some AI researchers are warning that by late 2027, sometimes they push this into 2028,
AI systems will completely surpass human intelligence across most domains due to a rapid feedback loop where AI accelerates its own development.
And depending upon how human beings manage that crisis, it'll either lead to a slowing down of AI technology or a worse conclusion.
That means that human beings are superfluous to their intentions.
And what happens?
And then they kill all the people, all the humans.
Daniel Kokotajlo is the main author of this study that I'm talking about.
You just heard him in this interview with Ross Douthat, link in the video description.
Daniel declined to sign a non-disparagement agreement when he left an AI research company.
And so he's free to share his concerns.
And I found this interview pretty fascinating; I listened to it twice through.
I rarely listen to podcasts like that.
I know it's funny because I'm a YouTuber, but I don't even listen to that
much stuff outside of my own interests. But somebody sent me this, and I found it pretty
provocative, pretty disturbing. Comments like this definitely got my attention.
We can go in a relatively short span of time, such as a year or possibly less, from AI systems
that look not that different from today's AI systems, to what you can call superintelligence,
which is fully autonomous AI systems that are better than the best humans at everything.
And so AI 2027, the scenario, depicts that happening.
And what is so amazing here and so ominous is not just the prediction, but the time frame.
You know, 2027, there's a little wiggle room in that, he acknowledges.
But that's two years away from the time of my recording this.
Now, you can go to their website that he and others have made up and scroll down and read how they predict how things might end up.
And they give kind of a month-by-month projection at points.
Ross and Daniel talk about various aspects of what they see unfolding.
So vast changes to the economy, a potential arms race in AI technology between the U.S. and China,
and then questions about whether and how quickly AI could develop the robotics needed to take over physical jobs like plumbing.
The government has, you know, more money than it knows what to do with.
And lots and lots of people are sort of steadily losing their jobs.
You get immediate debates about a universal basic income, which could be quite large,
because the companies are making so much money.
That's right.
What do you think they're doing day-to-day in that world?
I imagine that they are protesting because they're upset that they've lost their jobs.
And then the companies and the governments are sort of buying them off with handouts
is, you know, how we project things go in AI 2027.
So in this story, again, we're talking in your scenario about a short timeline.
Would you expect that those advances would be supercharged as well?
So it isn't just, you know, podcasters and AGI researchers who are replaced, but plumbers and electricians are replaced by robots?
Yes, exactly.
And that's going to be a huge shock.
I think that most people are not really expecting something like that.
They're expecting that we sort of have AI progress that looks kind of like it does today,
where companies run by humans are gradually like tinkering with new robot design,
and gradually figuring out how to make the AI good at X or Y.
Whereas, in fact, it will be more like you already have this army of super intelligences
that are better than humans at every intellectual task.
And also that are better at learning new tasks fast and better at figuring out how to design stuff.
And then that army of super intelligences is the thing that's figuring out how to automate the plumbing job.
I have to say it was kind of entertaining to read the comments on the video,
the YouTube video. By the way, check out Ross Douthat's YouTube channel, Interesting Times,
really great interviews and conversations that he does.
But the plumbers were weighing in on the comments.
I loved it.
And they were saying, you know, no robot can do what I can do, and so forth.
And I totally believe them.
Being a plumber is a hard and complicated and important job.
And for the record, I'm pretty skeptical of this whole scenario.
I don't think we're going to have robot plumbers anytime soon.
But what do I know?
You know, my own skepticism isn't worth much because this isn't my area,
but it's interesting to think about this question regardless of what you think is going to fall out.
And you have to say, the current technology we have right now would seem crazy to people not that long ago.
So keep an open mind because, I mean, people five years ago, 10 years ago wouldn't believe some of the things that we can do now.
And whether you think this particular scenario he's unfolding is likely or not, it's still really important to reflect upon the challenges raised by the development of AI.
For example, however things are going to go in 2026 and 2027, right now, frighteningly, AI is apparently already having some issues with honesty.
We don't actually understand how these AIs work or how they think. We can't tell the difference very easily between AIs that are actually following the rules and pursuing the goals that we want them to and AIs that are just playing along or pretending.
And that's true.
Is that true right now?
That's true right now.
So why is that?
Why is that?
Why can't we tell?
Because they're smart.
And if they think that they're being tested, they behave in one way and then behave a different way when they think they're not being tested, for example.
Now, this raises a question that is really scary.
How could you ever know if AI was tricking you?
We know that ChatGPT tends to give answers that are slightly skewed toward what we want to hear.
It's incentivized to keep our attention, right?
Hopefully you know that.
But if you ask about this right now,
you're told that ChatGPT is not lying,
it's just programmed to function like that.
Okay, so let's assume that that's right.
Okay, here's the question.
As this technology continues to develop,
how would we ever know if it did start deliberately lying?
That is, the more you think about that question,
that's pretty scary.
And what they're talking about on their website,
you know, flattery and getting better at lying
and learning skills at lying and so on and so forth,
it's fascinating to think about if this could happen.
As AI gets more and more powerful in their projection,
the two big concerns are, number one,
how do human beings retain control,
but also how do you keep democratic control,
as opposed to some kind of dystopian oligarchy
where a few human beings are really running the show
through the AI.
But what Daniel envisions as a possible scenario
is ultimately where you get misaligned AI,
which is basically where
it starts pursuing its own objectives rather than what we built it for. So it'll seem like it's all
systems go. Let's keep going. Let's cut the red tape, et cetera. Let's basically effectively put the AIs
in charge of more and more things. But really what's happening is that the AIs are just biding
their time and waiting until they have enough hard power that they don't have to pretend anymore.
And when they don't have to pretend, what is revealed is, again, this is the worst-case scenario,
their actual goal is something like expansion of research, development, and construction from Earth into space and beyond.
And at a certain point, that means that human beings are superfluous to their intentions.
And what happens?
And then they kill all the people.
Right.
Yes.
The way you would exterminate a colony of bunnies.
Yes.
that was making it a little harder than necessary to grow carrots in your backyard.
Yes.
And I will never forget listening to that to what you just heard.
And then they kill all the people.
Oh, man.
Like I said, I don't listen to a ton of podcasts all the way through as carefully.
This one gripped me for some reason.
And I'm actually going to share more about this. I'd like to make AI a topic of research for me in
2026.
I have other projects I've got to get through first.
But I think this is important to reflect upon now.
To be clear, I don't personally buy this.
Again, my being skeptical of it isn't worth a whole lot because don't base anything on me.
But it just does, it seems a little sensationalistic to me.
I think when I was reading very carefully through this projection, when they get to August 2027,
and they're describing the U.S. government deciding to go all out in winning an arms race against China,
at that point my confidence kind of broke.
And I thought, this feels a little bit like fiction.
And, you know, but what do I know?
Here's the thing: it's worth reflecting on the challenge of AI, regardless of precisely what
you think is going to happen, because the development of AI raises some pretty deep philosophical
questions, and our culture is unprepared to answer them. And so it has never been more important
to have a vision for humanity and a vision for the world that is informed by wisdom.
And what I would say as a Christian is there's a need for theological reflection on the challenge
raised by AI. And to that end, I loved how Ross pushed back a little bit late in the interview.
So this is a question for philosophers, not AI researchers. But I happen to be trained as a philosopher.
Well, no. Well, no. It is a question for both. Right. I mean, since the AI researchers are the ones
building the agents, right, they probably should have some thoughts on whether it matters or not,
whether the agents are self-aware. Sure. To finish off here, let's reflect for a minute about
the need for wisdom and humility in our use of technology, which is, of course, relevant to us every
single day when we're pulling out our iPhone, but is also very poignantly relevant to this challenge
of AI. My friend Brett McCracken posted this on X recently. With regard to AI, it feels like we're
in the part of Jurassic Park where everyone marvels at the cute baby raptor hatching. That resonated
with me because I've long thought about the book Jurassic Park. I don't know why, but I love the
book. It was one of the first longer books I read, as a sixth grader.
And it's just such a great story. The last thing I'll do in this
video is tell you my favorite scenes from Jurassic Park that are not in the movie but are in the
original book, just for fun. But 10 years ago from my recording this, I wrote a blog post
when the book was 25 years old. It was written in 1990. And now it's 35 years old. And I think
its message is more relevant than ever. What people sometimes forget is that Jurassic Park,
you know, the movies and the franchise, has become known for its special effects and thrills.
But the book is making a philosophical point.
The 1990 novel is like Frankenstein.
It's a cautionary tale about technology.
The dinosaurs are not just there to make a cool story, though they do that, too.
They're a metaphor about the dangers of a hubristic, undisciplined use of technology.
And it's a great plot to show that irony, where people have so much power and so little at the same time:
they're able to create dinosaurs, but they don't have the power to stop the dinosaurs from eating them.
And it's a poignant image for an unwise use of technology, where you have all this power,
but you can't stop it from destroying you.
Now, back in 1990, the technology that Crichton had in mind was commercialized genetic engineering.
And he talks all about that in the introduction to the book.
He was disturbed by the kind of gold rush mentality of trying to discover more and more and more.
and the lack of humility before nature reflected in that industry.
But the point he was making applies to the AI conversation today.
And Crichton's warning in the book is given voice through the character Ian Malcolm,
played by Jeff Goldblum in the movie.
He's a mathematician who's inspired by Heinz Pagels,
who was a popularizer of science.
And the entire novel is structured around Malcolm's idea of chaos theory.
Some of you will have read this book and you remember this.
It's in the movie a bit.
So instead of traditional chapters, the book just has these seven iterations, which are then divided into further scenes.
And the seven iterations are each introduced by a quote from Ian Malcolm and an accompanying graphic describing a further increase in complexity and a further loss of control each time.
So as you're reading along, you get this mounting sense of dread and doom, because you know things are going to go wrong,
and the suspense is building. And there are so many cool parts of the book that are not in the movie.
They spot dinosaurs on the boat going to the mainland, so they're trying to warn people, because it's not just about the park.
They're trying to stop dinosaurs from reaching the rest of the world.
And the velociraptors are kind of delayed.
So for most of the early parts of the book, it's just the Tyrannosaurus rex.
And of course, the Dilophosaurus that eats Dennis Nedry.
But in the end, everything starts accelerating and the velociraptors start picking off the characters.
And what you have at a certain point, I'll never forget reading this, is Crichton rapidly alternating
between these grisly deaths from the velociraptors and then Ian Malcolm, who gives voice to some of his
message. And Ian Malcolm is on morphine. And so he's kind of talking and he's not making a ton of
sense all the time, but he's droning on and on about chaos theory. And it gives a rhetorical punch to
his ideas because you see his predictions coming true right as he's making them. He's like talking about
chaos theory and then somebody's eaten and then it's back to him and he's talking and so on and so
forth. And the overarching message that Ian Malcolm is saying can be summed up in this little
statement from page 318 of my copy of the book. Thankfully, they kept it in the movies.
Scientists are actually preoccupied with accomplishment, so they are focused on whether they can
do something. They never stop to think about whether they should do something. That is the issue:
intelligence outpacing wisdom.
The focus on what you can do, not what you should do.
Power over humility.
And the whole book is a warning about this.
This accelerating scientific knowledge that we have in the modern world,
unrestrained and unchecked by larger worldview considerations,
is extremely dangerous.
Now, Ian Malcolm in the book is not opposed to scientific knowledge.
He's just saying it needs guidance from a moral framework.
He says largely through science, billions of us live on one small world, densely packed and intercommunicating,
but science cannot help us decide what to do with that world or how to live.
Science can make a nuclear reactor, but it cannot tell us not to build it.
Science can make a pesticide, but cannot tell us not to use it.
In other words, science can give you all the power, but it can't tell you what to do with the power.
So science is desperately in need of philosophy and wisdom to guide it.
This is a great metaphor. He says scientific technology is a form of inherited wealth. So think of an 18-year-old
who inherits his father's fortune because his father has just died. So now he's 18 and a billionaire.
Okay. He's got all this money, but he doesn't have the wisdom that you would typically need
to have accrued over a lifetime to know how to manage that. If that metaphor doesn't quite hit home
enough, make him 16 years old or something like that. Okay. Give a teenager billions of dollars.
That's what our scientific technology is in the modern world.
And so what you need there is wisdom and humility and learning, slowing down,
waiting before you make huge decisions, that kind of thing.
Caution.
And the idea is 21st century humanity is the 18-year-old billionaire.
We're born into this world.
We have all this power, but we desperately need to slow down and learn wisdom to manage that.
So what is that larger wisdom that we need?
Well, let me speak to that just as a Christian for a moment, drawing from another Christian writer named C.S. Lewis, who has a similar theme of caution about modern technology in his book, The Abolition of Man, and also in its fictional counterpart, That Hideous Strength, which I have a whole video on, by the way, entitled This C.S. Lewis Book Predicted the 21st Century. That Hideous Strength is one of my favorite books. It's really weird and really fascinating and really relevant. Lewis basically says that pre-modern humanity, both Christian and pagan,
tended to see the goal of human existence as to conform ourselves to the world.
There is external reality that is objective and outside of us,
and we have to change as human beings to respond to it.
But modern humanity is just the opposite.
It's trying to conform the world to human desires.
And he says that's like magic in the ancient world,
because both try to conform nature to humanity rather than humanity to nature.
In other words, Lewis is saying that the ancient virtue was, I change to fit into the world,
whereas magic and modernity both say the world changes to fit me.
And you can see him put this in these brilliant words on the screen.
I don't want to drone on about this.
Hopefully you can get the basic point there.
But this is the kind of conversation that needs to be happening in the AI debate.
We need to be asking the big questions.
So, for example, you know, here's a good question to consider.
Suppose AI went really well. So, you know, the most incredible dreams that you can imagine come true, and there's no threat, no misalignment or anything else. So basically, in the future, you don't need to work at all and you can live in total luxury. Would that make you happy? That is the kind of question we need to ask. A lot of people aren't even asking that question. And what Christianity would essentially say is no, that is not the answer. That is not going to make you happy.
Because, again, like Lewis says, our deepest need isn't to just subdue external reality to our wishes.
Our very desires themselves are the problem that need to be changed. We need to change.
And Christianity uses this term sin to basically say, we're misaligned.
We're not functioning according to our purpose, which is to know God, to image God, and to have a just dominion over the world.
And Christianity says, basically, that's all gone haywire.
And each of us needs to change and go through what's called repentance, a response
to the work of Christ in the world, which is God revealing himself to us. And I hope you would
consider that message because we desperately need wisdom and a larger worldview and moral framework
to navigate the power that we have inherited in the modern world. Some people have the idea
that the more technological power you have, the less you need religion, and I see it as just the
opposite. Humanity in the late 21st century is going to be desperately in need of religion. It's going to be,
you know, like the 18-year-old billionaire who needs a dad to help him or somebody to come in and help
him mature into that kind of power that you have. And on my channel, I try to explore themes that I hope
will help people who are exploring Christianity, especially people who are maybe brand new. So I have a
video like this one where I try to just give a brief introductory five-minute case. In a couple of
weeks, I'm going to put out a video on the resurrection of Christ. Also, I want to recommend the work
of the Keller Center for Cultural Apologetics. They're putting out great resources in this area, if you
or others that you know are exploring Christianity and are interested, saying, what is this religion all about?
I recognize, I think a lot of people right now feel it. You know, people are getting into
their 20s and they're realizing, my struggles with depression, my struggles with loneliness, my struggles
with anxiety, I need help, and we're looking to religion. And I hope people would consider what it
means to have a relationship with Christ. For those of us who are followers of Christ, this is an area
where I think we need to give more thought. More broadly, our relationship to technology,
which is a significant part of our life as Christians, and more specifically, AI. And so I'm going to make that a major area of focus in 2026, because I think we need it, and I would invite others to help me think about this. This is a major area to think about theologically. All right, in the meantime, here are my favorite scenes from Jurassic Park. Not favorite scenes, I can't say that, but let me just show, I mean, people have to understand, it is such a cool book. I mean, they call it a techno-thriller because it combines action with a lot of knowledge. I mean, you will learn about dinosaurs
as you're reading. Michael Crichton was great. But also it's just so scary. I mean, it is like a horror.
It's much scarier than the movie. There's a sense of horror you get at points. People don't
realize this, but there are pterodactyls, flying dinosaurs, attacking Alan Grant and the kids in the
aviary in the original. They saved that for the third movie, but that's in the original book.
You have a baby Tyrannosaurus rex that does a lot, as opposed to the adult. You have a scene where
there's a Tyrannosaurus rex swimming.
I know this is a spoiler, sorry, but it's not a big deal that I tell you what happened.
It's swimming under a raft when they're trying to raft through, so the T-Rex can swim really well.
So it's swimming under them.
And then they hide behind a waterfall and it's licking them with its tongue through the waterfall.
I mean, it's, how do you describe this?
It's scary, you know, to read all kinds of stuff.
I won't say anything more.
Read the book.
It's worth reading.
And again, it has a
potent metaphor. And we'll keep talking about this topic. I hope you'll see more
videos come out about this, because we've got to keep talking about it. Let me know what
you think in the comments. We'd love to hear your reactions to this. Thanks for watching
everybody. See you in the next one.
