The Sean McDowell Show - AI, Faith, and the Future: A Conversation Christians Must Hear
Episode Date: November 18, 2025
What should Christians think about AI? Artificial Intelligence is reshaping culture, business, education, and even the way we think about human identity. In this roundtable conversation, I sit down with three Biola professors, bona fide experts on AI, to explore how believers can navigate the rapidly changing world of AI with wisdom and clarity. WATCH: The End of the World? John Lennox on AI and the Book of Revelation (https://youtu.be/mAfRswipXiE) In this discussion: Michael Arena, Dean of Crowell Business School; Mihretu Guta, Professor of Apologetics/Philosophy; Yohan Lee, Department of Math and Computer Science. *Get a MASTERS IN APOLOGETICS or SCIENCE AND RELIGION at BIOLA (https://bit.ly/3LdNqKf) *USE Discount Code [smdcertdisc] for 25% off the BIOLA APOLOGETICS CERTIFICATE program (https://bit.ly/3AzfPFM) *See our fully online UNDERGRAD DEGREE in Bible, Theology, and Apologetics: (https://bit.ly/448STKK) FOLLOW ME ON SOCIAL MEDIA: Twitter: https://x.com/Sean_McDowell TikTok: https://www.tiktok.com/@sean_mcdowell?lang=en Instagram: https://www.instagram.com/seanmcdowell/ Website: https://seanmcdowell.org Discover more Christian podcasts at lifeaudio.com and inquire about advertising opportunities at lifeaudio.com/contact-us.
Transcript
Looking for a simple way to stay rooted in God's Word every day?
The Daily Bible Devotion app by Salem Media gives you morning and evening devotionals
designed to encourage, inspire, and keep you connected with scripture.
Plus, you'll enjoy Daily Bible trivia and humor, a fun way to learn and share a smile while growing in your faith.
Get the Daily Bible Devotion app for free on both iOS and Android.
Start and end your day with God's Word.
Search for the Daily Bible Devotion app in the App Store or Google Play Store,
download it today.
Life Audio.
Air conditioning changed things, watches changed things, social media changed things.
Your concern is that AI is going to affect what we think and how we experience being
human and shape us in ways we don't even really anticipate.
What's it going to look like to be a human being in the year 2035?
Well, we're going to lose social intelligence, which you can argue has already begun,
right, with the era of social media.
So the ability, as you said earlier, Sean, to sit eyeball to eyeball and engage in a meaningful,
relational, communal interaction is going to begin to dissipate if we're not careful about it.
I think we need to take this extremely seriously because it's really causing serious damage already.
There are college students who cannot read.
There are high schoolers who cannot read.
There is always going to be a secular push to roboticize things and replace humans where they
shouldn't be replaced.
so we have to be mindful of that.
AI has taken the world by storm.
What technology is on the horizon?
What is positive with AI technology?
And where should we be concerned?
How can we wisely and biblically navigate our cultural moment?
With me today are three experts from different fields: business, science, and philosophy,
who are leading the way in how we think about, develop, and engage AI technology.
Gentlemen, thanks for being willing to have this conversation.
Thanks for having us.
Yeah, great to be here.
Let's jump in.
Briefly explain.
I'm going to have you introduce yourselves and just talk about how AI has revolutionized your field.
Yohan.
Yeah.
Yohan Lee.
I'm the Associate Dean of Technology here at Biola's School of Science, Technology and Health.
I teach computer science, and I teach AI.
And the way it's revolutionized it is, on the one hand, from a student perspective, the discipline of computer science has expanded dramatically.
And so for students who are really interested in computer science, it's about learning more
and delving more into math and physics, to be honest, to be able to help shape and build new
AIs that don't exist yet.
How long have you been at Biola?
Two and a half years now.
Okay, so briefly tell us a little bit of the backstory, experience, and training you have
that you've brought to your position now.
Right.
Yes.
So my undergraduate degree, believe it or not, is in neuroscience,
predominantly in the areas of learning and memory.
So it's almost like I was destined to go into this space.
Then along the way, a PhD in big data analysis through bioinformatics.
That's where I got my first exposure into machine learning and AI.
And at that time, it worked fantastically well in a controlled experiment.
And so to see what it is now has really, I just can't imagine.
I couldn't have imagined that it would have gotten here this quickly.
Which makes us think, where's it going in the next six months, 12 months, six years, but we'll get there.
Michael.
Yeah.
Michael Arena, I'm the dean of the Crowell School of Business.
So I'm a corporate guy.
I grew up in the corporate world.
I came here about three years ago, most recently from AWS,
Amazon Web Services.
Got it.
And it just completely disrupted the field I studied, which is management science.
And, you know, thinking about productivity, thinking about human efficiency, thinking about, you know, how do we get through the muck of work and get to the work of the work?
So it's revolutionized the field.
So the word revolutionized is not an overstatement when it comes to business.
Absolutely not.
Okay.
And I think we've only begun to see what's possible.
We're going to get into that.
I'm really interested to see how you think it might transform business.
Mihretu, our newest hire in the Apologetics Program.
We did a video with you, but I want you to introduce yourself again in case some people
missed that, and talk about how AI has revolutionized philosophy, if you can even use that term.
Yes.
My name is Mihretu.
I'm Associate Professor of Philosophy and Apologetics here at Biola.
I'm a family guy, so Biola is where I studied before, and I came back home.
You did the M.A. Phil program.
Yes, M.A. Phil and also science and religion as well.
So I have a PhD in philosophy of mind.
So I work very closely on AI research, because I cannot really ignore AI; there are claims like: AI can be conscious.
We can upload emotions.
AI is no different than you.
and we can actually prove that you're inferior to AI.
The whole premise of transhumanism is really kind of to dump human biology
because it's defective design to upgrade human biology to a new level.
And, you know, computer science was born out of philosophy.
So philosophers were the forefathers of computer science.
So how can I ignore AI?
I mean, honestly, I'll be outmoded,
I'll be left behind, if I turn my back on AI.
So I work on AI research when it comes to philosophical issues.
Like, can we really upload emotions on AI?
Can consciousness be uploaded on computer gadgets and circuits?
Well, I also teach philosophy of neuroscience.
I teach AI as well.
It's impossible for me to ignore.
Multidisciplinary approaches to all sorts of issues having to do with AI and the human brain.
So philosophy of mind has been wrestling with these issues for a long time.
But now, in the past maybe two or three years where we have tools like ChatGPT,
it's felt more urgent and pressing.
Is that fair?
Well, people are predicting that we have now successfully kind of answered the Turing problem,
you know, which Alan Turing actually introduced in the 1930s and 40s.
We haven't.
We haven't.
We haven't even remotely answered that.
And that's a mimicry.
Like, okay, you can take a
computer, put it in a classroom, and then put another human being somewhere.
And, you know, you can play that toy exercise.
But there are metaphysical and ontological setbacks in our way.
You know, we can never be in a position to create consciousness on gadgets because we do
not have that metaphysical property.
Okay, now hold that thought.
We're going to come back to you.
This is really important.
We need metaphysics and ontology.
Whether AI will become conscious is a huge question,
and lean in a little bit for me, if you will, on the mic, just get a little bit closer.
Tell us maybe one or two publications.
I want people to understand and appreciate the kind of work that you're doing
that you've done in the past two or three years on AI.
Yes.
I am editing a book on AI consciousness and unconsciousness with Wiley.
It's a group of international thinkers, you know, from discipline,
from neuroscience, from medical science and computer science and all of that.
Hopefully that will be published soon,
and I'm actually, in the near future,
going to be able to sign a contract with Bloomsbury
on AI and Human Flourishing.
People may not know Wiley and Bloomsbury.
They're highly respected, leading publishers.
You've done work with Cambridge.
I'm just bragging on you because you're in my department.
I want people to understand what you bring to the table.
I have another book coming out with Routledge.
Excellent.
I'm submitting next month.
I have so many other publications.
Good stuff.
Well, not to start there, but in terms of moving into the conversation, give us an example of maybe one or two areas in your field where AI has just revolutionized the way we do business or science, something that blows your mind.
You mentioned maybe a couple years ago we never could have imagined this.
What's something you paused and you're like, when I was in grad school, I don't think I would have believed if somebody had told me this was possible with AI.
Do you have an example of something like that or am I overstating it?
No, you're absolutely right.
I think for me, the one that consistently impresses me and gets me excited is machine translation.
So that's the ability of AI to translate text from one language to another at an extremely high level of sophistication.
So give an example of how they would do that.
My first thought goes to like Bible translation and how amazing that could be.
That we've been laboring for years and all of a sudden we had this tool.
Yes.
And I mean, maybe this is outside of the field.
Are people saying within our lifetime, within a short period of time, we'll have the Bible in every language?
Does that seem doable?
That's the hope, right?
Okay.
Now, of course, everything is harder than we expect, because what predicates that capability is whether or not we have enough of a corpus, a large body of knowledge in both languages so that you can make effective cognates, if you will.
And so in some languages, they are considered high resource languages like English, Spanish, etc.
But then some languages are what we consider low resource languages because we don't have a lot of material that spans a particular language in a digital format, such as on the Internet, in text, in PDF, or in audio recordings like this.
And so when it comes to low resource languages, AI struggles quite a bit.
but in high resource languages, you get the difference between like conversational translation
to business translation to academic translation and then sometimes highly nuanced written verse.
And so in that case, it could be incredibly powerful.
So I hope within my lifetime we get the Bible in every single language on earth.
That'd be pretty amazing if we could do that.
I'm curious how else in science in your field this technology would help.
Because for me, I did my dissertation about 10 years ago, and I was studying the death of the apostles.
So I found early sources in French, German, and Armenian, tracked down people, and I paid them to translate it word by word for me.
Obviously, AI could do that in the click of a button, but how else is this tool being helpful in the world of science and beyond?
Yeah.
In the world of science,
the great thing about science
is a lot of it is just predicated on numbers
and numerics.
And so because of numbers,
numerics,
analyses,
you are able to then analyze things
at a scale that we just haven't been able to do.
And then one of the challenges
in a lot of experiments over the years
as we collect a lot of digital data
is that you can't compare apples and oranges sometimes.
But now when you get thousands
and thousands of analyses
all the different possible analyses
that have been conducted
in a particular area
of, let's say, medicine, for example.
Then you can actually get smarter, quicker.
You can find out where there could be a white space
and an interesting area of discovery
that has not been plumbed to the depth
and then find novel approaches
that we never considered before.
That's what I find most exciting about this.
I saw an article about how AI was analyzing
like whale language and communication
and I was like, what, this never crossed my mind,
that we could analyze the depths of it. And it seemed to break new ground, from my
limited understanding of whale language.
That was the narrative, but it seems like that's just the beginning in many ways.
Great example.
Michael, you said it's not an overstatement to say that AI has revolutionized business.
Give me some example of how AI just blows your mind.
I'm still trying to catch up with the neuroscience and the whales.
So, you know, this may sound elementary compared to
those things. It's, you know, it is the great equalizer from a knowledge standpoint. So just think
about this. It's up-leveling people who are at the bottom or the beginning of any given knowledge
set, and it's moving them up to the median or above instantaneously. I mean, just to give, like,
real examples. Yeah, do it. You know, I studied this a lot, like onboarding someone into an
organization and to learn what that organization does and what to do in their job. You know, on average,
you know, that's about a 24-month to 30-month horizon.
For me or you to come into an organization
and be at like the median of the knowledge set of that organization,
people who are using AI in an AI-native environment can do that in three to four months.
So, I mean, it's just, like, bringing people up to
just the average in such a way that they're being productive,
up to proficiency more often,
and they can then start to focus on other things.
So, I mean, that's just one example.
So what would that look like?
I imagine 24 to 30 months.
I'm talking with colleagues.
I'm sitting in business meetings.
I'm just absorbing it by being there, asking questions.
Maybe there's some formal training.
What does AI do differently such that it takes roughly a quarter of the time?
Well, I mean, you can think of knowledge as being both taught and caught.
Like there's, you know, there's this, you know, form of knowledge where you can read it,
you can study it, you know, and that has an uptake of whatever time frame it is.
But then there's also the things that are caught.
Like the things that are modeled, the things that, and AIs really help to accelerate the
learning curve on both of those.
So it's been quite dramatic.
So, just one more question.
Is this like the leaders of businesses are using it to educate or people are just getting
online and asking questions and using AI to more quickly learn or is this some combination
of both?
It's a combination of both.
And I think we're still trying to figure out exactly how to do this.
So some organizations have figured it out and are already accelerating that learning process.
Others are catching up to figure out how to do that.
But at the end of the day, it's made us – and I hate to say this next to my technology friend,
but it's made us all technologists, right?
Like I can go in and vibe code a website.
And, you know, I know coding about as well as I can speak Spanish,
which is just a little.
And, you know, it's kind of equalized the playing field for all kinds of different professions.
Oh, that's so interesting.
In some ways, you're right, it makes us all technologists, but it makes the experts who used
to be the older, seasoned, wise sages of a community.
We saw this shift with Gen Z where older generations are like, how do you use social media,
how do I set up Facebook in the day, how do I do YouTube, and younger generations became
the expert, so it flips that on its head in many ways in business as well. Very interesting.
Now, when you talk about revolution in philosophy, these things move a little bit more slowly
and methodically. So I wouldn't expect there to be some technology that just flipped it on its head,
but maybe give us an insight of since AI has been here over the past two or three years,
how these conversations have heated up in the world of philosophy.
Yeah, I think, so you don't see us, like, in a white gown, you know, sitting behind kind of sophisticated lab equipment
and so on. But we really think about AI because it really is giving us insights into the
nature of consciousness. So everything that you see around you, any AI tool that has ever
been created is an instance of the complexity of human consciousness. And nothing else. This
is literally the outcome of a three-pound organ inside your skull,
this mysterious organ that God has created and designed.
Like, we are trying to understand what the nature of consciousness is.
So, okay, technology.
Want to keep God's word with you wherever you go?
The King James Bible Study KJV app by Salem Media makes it easy to read,
study, share, and pray daily with a timeless KJV translation.
Enjoy features like offline access, audio Bible listening, smart search,
and tools to highlight bookmark and take notes,
all designed to keep your Bible study simple and organized.
Best of all, it's free to download in the
Google Play Store.
Grow your faith every day.
Search for King James Bible Study, KJV, and download the app today.
We just will be inventing, enjoying, introducing, using it.
It's great.
But what did it take for such invention to be realized, for example?
I think it took your consciousness.
But what's consciousness?
We have no clue.
We have no clue.
It's one of the most elusive properties that we all are carrying.
And that's the property that's allowing you to invent all those magnificently impressive and astonishing things.
Now, let me jump in here.
I did the M.A. Phil program in the early 2000s, and I had metaphysics, consciousness one, consciousness two.
And there were leading thinkers, people like Jaegwon Kim, who would say things like,
we don't have a clue how you get consciousness.
You just said we don't have a clue.
With the advent of AI, are a lot of philosophers more confident that they can solve that problem now?
or is there still a sense of like we have no idea how consciousness could emerge by itself in a naturalistic world?
So there are diverse opinions.
So physicalistically minded philosophers might think, okay, yeah, you're not different to begin with.
You know, you're just a biological machine.
And we're modeling that, inventing computers.
So there isn't kind of very surprising thing about you.
But given our own finitude, we're not there yet to
crack the code, but you are a biological machine.
There is, you, you have nothing by way of like soul.
You're not a soul.
You're not spiritual being in any way.
But we will get there and we will answer this question.
But the problem is like the origin of consciousness, is it being caused by the
complexity of the nervous system, or is it something that enters into the constitution
when the nervous system is ready?
Like, just like, as you would expect, a guest into your home, when, you know,
okay, everything is in place.
Now you open the door and the guest appears.
Is that how it works? I have actually published defending that view.
And we literally have no clue.
So JP, myself and some, you know, dualist philosophers,
we are incredibly skeptical of what the physicalists are really telling us is the case.
It's not as simple as that.
But the most surprising thing about this stuff is everything that you see around you
is literally the result of consciousness.
what exactly is consciousness?
I think it feels like God has thrown away the key
and then he tells us to just enjoy.
I mean it doesn't mean that we cannot understand
the nature of consciousness or I'm not saying we can't make progress
but to date we haven't made any progress
in answering the origin of consciousness or the nature of consciousness.
But we know what it is.
It's too familiar but too elusive.
That's interesting.
I love it.
As a philosopher, I have so many questions for you,
but we'll come back to it.
We're going to get to some of the positive about AI.
We're going to get to how we might think biblically about AI.
But I want to know what concerns each of you most.
Like for me, loss of jobs would concern me.
I'd love your take on that.
I just heard a podcast this morning about how certain figures in Hollywood are, like, ready to just pull the trigger on using AI in a way that will potentially result in a lot of job losses in that world.
I'm concerned about the loss of the ability to know truth
from fiction, especially some of the videos that come out. I mean, I was speaking at an event
and somebody just for fun was like, I'm just going to make up an introduction for Sean from
Trump. Just to be fun, he chose the president. And I sent it to a few people who thought,
wait a minute, is this real? Which made me laugh, that they thought the president would give me
an introduction. I'm like, do you actually think this is real? But they paused for a minute. So like the
inability to know truth from fiction concerns me, some of the dehumanization just in
the sense of, I'm all for saving time, but when AI does things that humans should do.
Like, I remember the first time I heard of a guy who broke up with a girl by texting. I'm like,
you've got to be kidding. You've got to look somebody face to face. You owe that to them.
So I have a bunch of concerns, but I'd love to go kind of just field by field and tell me what
concerns you most. And we'll just keep the pattern of going this way. Johan, go ahead.
Yeah. No, I mean, for me, it is that dehumanization. That's the thing that concerns
me the most. I mean, these are
technologies. These are tools. These are products that
are meant to be solving a problem and
providing some form of human benefit and
advantage. And when products like this
cause a 13-year-old and a
16-year-old to commit suicide,
no, that's a fail.
In the most dramatic terms, right?
And so
these are not just nascent
technologies that you just leave on the shelf.
I mean, these must be used with discernment.
So getting into the
inability thing, because as
technology gets more complex, as
capabilities get more complex, then
because of profit motive or efficiency,
you're going to start seeing a lot more synergy.
Now
it's AI for self-driving cars.
Will it be airplanes
one day, right? Will it be
next? You know, rocket ships, you name
it. Anything that is typically a crewed
vehicle, could that then be
replaced with an AI?
It may be, and there may be tremendously good reasons for it.
And I can be
a proponent and also I can also be a
critic. But the part that I think that really
matters is humans need to be able to
handle the corner cases when technology
fails. Humans need to be able to solve and understand
what's going on when it doesn't work. I mean, we just
had a massive global outage of
a tried and true technology for the last 15 years that cost a lot
of time and money, right? And again, time and money is just time and money.
It's not at the same scale of value
as life.
Sure.
But if the more that we offload our thinking to an AI, then our students and our future
operators and engineers and scientists of the world are going to be less equipped to do
the math and figure out when did the code break, when did the technology break, what was
the corner case that it mistook.
That's important because for anything to be successful, you have to be able to support it
well, you have to be able to operate it well, you have to maintain it well, and you have to
be able to troubleshoot it well, and you have to be able to know how to take it offline.
and replace it with something in the meantime for basic business continuity and then most often health and human safety.
So we need to be experts in the technology when it breaks because it always breaks.
So it sounds like you're saying there's dehumanization on two levels.
Number one, when we export things humans should do to technology, we lose something of what uniquely makes us human and flourish.
On the other hand, the more we export to technology, and that technology,
fails, and it has failed and we see it fail, then we're caught in this cycle where we don't
have the ability to really fix it and catch it because we're so dependent on it, which makes
me think, we could come back to this, Mihretu, that oftentimes people would say, what are you
going to do with a philosophy degree, right? Like, what's the point? I think I heard Lee
Strobel in a talk here about 20 years ago. He told a joke about, you know, what do a
philosophy major and a medium pizza have in common? And he's like, neither can feed a family
of five. Like, the joke is you can't do anything with philosophy. If you're right, actually, learning
to think critically and other uniquely human skills might be gaining in value in a way we didn't
appreciate in the past. That's a really interesting way to look at it. What concerns you in the
world of business? I got to dive in on the human skill thing here first. I mean, I think that a philosophy
degree is going to matter more than ever in the future.
100%.
I think that a liberal arts education that teaches you to think critically, to reason, to be able to dive deep into,
you know, what's happening here and what are all the unintended consequences is more
essential to our survival than ever before.
So I'm not afraid of the tech.
I'll start with that.
I'm afraid of the humans.
And what I mean by that is I'm afraid of the thinking
of the humans who design and build the tech.
And I'm afraid of us as users of the tech and how we may misuse, abuse, and frankly give
away our freedoms as a result of it.
So for me, the existential crisis is on the human side, not the tech side.
We can build guardrails on the tech.
But we in the liberal arts, philosophy, critical thinking, you know, psychology, we've got to be
thinking about all the human consequences.
One of the things, you know, yes, there will be, you know, as the business guy, yes, this is going
to affect jobs.
There will be a pretty radical job dislocation.
I mean, we're seeing it.
We're seeing it already.
In the long term, I'm not as fearful about that because we've shown the resiliency to
recreate new jobs and come up with new ideas.
So I think that's a threat in the near term.
I think the long-term threats on humanity.
There was a great study done from Elon University,
and it asked this simple question,
what's it going to look like to be a human being in the year 2035?
And the experts all said things like,
well, we're going to lose social intelligence,
which you can argue has already begun, right,
with the era of social media.
So the ability, as you said earlier, Sean,
to sit eyeball to eyeball and to engage
in a meaningful, relational, communal interaction is going to begin to dissipate if we're not careful
about it.
Yeah, cognitive dependency.
You know, certainly, you know, there have been studies where, you know, if we over-rely on
AI, our brain doesn't have the same cognitive load, doesn't have the same synapse connections.
And those are all true.
But the one that scares me most from that study is the loss of our human identity.
And I just think about that for a moment.
Just play with Sora 2.
And I can pick anything, right?
I can pick anything.
I don't know how many of you have played with Sora 2,
but I can put myself in the middle of a story as hero.
I can make myself the hero.
I can make myself anything I want to be in this artificial world
and give up my God-given identity,
you know, being created in his image, for something the world has made up.
So those are the things that, you know, are incredibly alarming to me.
So the business angle, we're moving from a non-AI to an AI world.
And probably for those in the middle, we're going to see more loss of jobs.
But your sense is we might kind of settle in, that once we have people who weren't trained 30, 40 years ago in such a different world, we'll adapt economically.
But the bigger concern is, let me frame it in a way, tell me if you agree with this.
I suspect all of you agree this, that technology is not neutral.
And I don't mean morally neutral.
That's another question we could talk about.
But technology affects us.
Like the airplane made the world different when you could travel to Europe in a way people couldn't before.
Air conditioning changed things.
Watches change things.
Social media changed things.
Your concern is that AI is going to affect what we think and how we experience being human and shape us in ways we don't even really
anticipate?
I think it could.
It could. Okay. And I think our responsibility is to start thinking about it much more
holistically. So as a business person, I'm training every business student that comes
through the Crowell School of Business in AI literacy. Otherwise, they will be at a significant
disadvantage. But just as important, more importantly, I'll say, foundationally below that,
is how do you think ethically about the use of AI? So one corner of what we call the triangle,
the top being AI literacy, another corner being how you think about this, you know, as a
biblically centered institution, from a set of biblical principles with an ethical lens. And then the
other is how do we teach the enduring human skills? Because it turns out that
the technical skills have a shelf life and that's shrinking radically because AI is augmenting that.
But the human skills, how do you engage in a conversation? How do you think critically?
How do you influence other people?
Those things are enduring.
So my hope, on my good days when I'm hopeful about this, is that if we can teach AI
literacy and do it with a set of ethical principles and teach people how to be better
human beings that can influence one another, positively influence, you know, then I'm
very hopeful.
On the days where I think we over-index toward AI literacy only, without those two foundational
traits, that's when I get scared.
Last question, quick thought.
Do you have a sense of how much that's being done in the business world?
It's interesting.
If you ask the business world this question, they will say, yes, by all means,
we're thinking about this ethically.
If I were to ask, you know, those of you who have studied ethics, are they thinking
about it that way?
The answer is they haven't been trained to.
So the short answer is, I don't think there's, like, mal-intent here,
with a few exceptions.
Yeah, that's fair. That's fair.
But I think what it is,
and this is why a liberal arts education
is so critical these days,
I think we have not taught people
Need a daily spark of hope and direction?
Let the Daily Bible app from Salem Media be that spark.
This free Android app delivers an uplifting verse each morning,
plus reading plans,
devotions, and trusted podcasts from leaders like Joyce Meyer and Rick Warren.
Prefer to listen instead?
The Daily Bible app reads verses,
reading plans, and chapters aloud, handy for the headphones moment of your day.
Choose from versions like ESV, NIV, KJV, and more,
and bookmark favorites to revisit later.
Share inspiring messages with loved ones right from the app.
Feel God's presence in every notification.
Search for Daily Bible app on Google Play and begin your day with hope, purpose, and peace.
How to think ethically, how to ask these questions,
and we've tossed them into these technology engines,
and they're building things based on their own moral code,
and usually based on, Yohan said this already,
with a capitalistic mindset that speed matters disproportionately.
So all these other things are coming along later, maybe,
but they haven't been trained how to think about these things.
Oftentimes what I've seen is that ethics follows the technology,
because it's just too powerful and transformative.
You're trying to do it differently here, which I appreciate.
As a philosopher and apologist,
Mihretu, what concerns you the most about AI?
I'd like to begin by thanking
Yohan and Michael.
Honestly,
I'm profoundly grateful for your observations.
This is one of the areas
where I'm vocally critical,
offering a kind of extreme critique
of the relevance of AI.
The critical thinking is really irreplaceable.
It's not something that we can negotiate over.
It doesn't matter what you do.
It doesn't matter what your discipline is.
It literally doesn't matter.
That's a requirement.
That's a prerequisite for you to succeed in anything that you do.
So I think AI, if it's not handled carefully, if we are outsourcing everything to AI,
the end goal of that kind of commitment to AI is going to be disastrous.
Yeah, it will make us data clerks.
Completely.
So organic thinking should be taken very, very seriously.
So organic thinking shouldn't be handed over to AI.
Imagine the irony here.
So these people who are writing these programs
or algorithms and so on,
they are using their mind, right?
They are effectively using their mind.
But they are telling the rest of us,
no, no, no, no, no.
Don't read the book.
Here's the AI summary.
Don't even, like, put your finger on the book.
You don't have to even know the color of my book.
You pretend as if you're an expert.
Okay.
That's the disservice.
You need to read a book from cover to cover
if you really want to be a genuine expert.
And also our conception of knowledge has to change.
Knowledge has to be pursued for intrinsic reasons,
not only always for financial gain.
Or instrumental.
Okay, explain the difference between intrinsic versus instrumental value of thinking.
Intrinsic knowledge is a knowledge that you can pursue for its own sake.
There's nothing attached to it.
Like, oh, I want to make X amount of money.
Or I want to be famous.
or I just want to impress people.
Those are very, very instrumental, you know, pursuits.
Instrumental value, or instrumental knowledge, is knowledge pursued as a means to something else.
Let's say I want to have a degree in computer science because I just want to make money.
I'm really laser focused on making money.
That's it.
I really don't care.
So I think we need to take this extremely seriously because,
it's really causing serious damage already.
There are college students who cannot read.
There are high schoolers who cannot read.
A study came out of MIT.
I don't know if you were referring to that study.
For ChatGPT users, brain connectivity is almost undetectable.
And for brain-only users, which means people who really struggle and sweat over reading and writing
in an organic way, it's like a California wildfire.
You can actually see the brain is lit up.
There is a neuroscience here, as you know, as Yohan knows, and Michael, actually you also know: neuroplasticity, for example.
Neuroplasticity is the concept that how much you challenge your brain is proportional to
how effective you become in terms of having this rich cognitive life.
The less you challenge yourself, the poorer the outcome you get.
So I think that in business school, for example, there has to be a philosophy of business
where students are not only acquiring these skills, which is extremely important.
AI literacy is extremely important, but they need to also be equipped with business philosophy.
Like what kind of person should you be as a businessman?
How should you interact with other people?
How should you handle difficult situations?
Not only like solving tech-related problems, but human beings
are problems themselves, right?
They are incredibly difficult creatures to deal with in some situations.
So how can you manage and navigate that environment?
This cannot be done without critical thinking faculties.
Amen to that.
And by the way, you're a dualist. You and J.P. Moreland
are two of the leading defenders today of the idea that we're body and soul.
So when you talk about neuroplasticity, we have a soul, an immaterial component.
We have a physical component.
And they interact.
what we do with our minds affects our bodies,
what we do with our bodies can affect our minds.
When we export some things
we're supposed to do with our minds,
we actually see the negative effects
in our bodies,
hence using ChatGPT in a certain fashion
affects the processing and development of the brain.
Imagine that the memory capacity of ChatGPT users
is shrinking down to seconds.
Like, they don't even recall what they prompted
five minutes ago. Why?
If you haven't sat with something,
if it's literally not something that came out of your own effort,
why would you expect to remember it?
Let's suppose I have Mike in a math class.
You know, you are a math genius, and I hired you, I gave you money,
and I am there as a student, and you are my ChatGPT.
And I show you the calculus problem.
Mike, okay, here it is.
What should I say?
And you tell me, and I write that down.
And then I get an A on the calculus exam.
Do you hire me as a mathematician in your school?
No chance.
That's literally what ChatGPT is all about.
You put in the prompt and it spits out information to you, and that's not yours.
If you write an essay by downloading from ChatGPT, you haven't written anything.
You need to admit that.
So I think we need to understand, as Christians especially, that the life of the mind, loving God with your mind, literally means sweating, spending countless hours doing your own thinking, thinking genuinely with integrity.
And that's not a loss.
And that's exactly how you can show your own integrity.
That's how you can prove your own genuine expertise.
You can't cut the corners.
You'll be caught.
Like if you fake your way, you will have no hiding place.
Like if you stand in front of me as a mathematician
and if you want, if you have faked your way,
I can make your life miserable in half a second.
Here is a whiteboard.
Here is a marker.
Go ahead and show me derivatives and blah, blah.
Just show me.
and that's a reckoning moment for you.
Faking your way is not going to help you.
And people need to understand that.
This distinction.
Oh, yeah, go ahead.
I just want to follow up on that because, you know, this whole dualist thing, I've actually
studied your work and J.P.'s work.
And I think this whole dualist approach, it's thinking, but it's also experience and the
inner experience.
And I think about it this way.
Like, the Turing test actually was beaten, you know, at a university just south of where we
are right now. But it was beaten because it mimicked human emotion. But it never felt. It never experienced it. It
doesn't know what love really is. It just knows how to mimic those emotions. And I think that's the real
danger is that this inner experience, both learning and then emotional is something that AI will
never mimic. And that's who we are, you know, as image bearers. And we've got to be very careful
to, you know, not give that up. Can I add something? This is a really important
point, because what Michael was referring to is what we call first-person experience, the ability
to introspect your own mental states. For example, right now, as I engage in this kind of
conversation in the back of my mind, I'm literally running so many other things. I am aware of
those things, but they are not relevant for me. I'm not going to let them come to the surface of
my consciousness and be part of this conversation. That ability is incredibly mysterious. Imagine you
literally are aware of what's happening inside you.
Mind, not brain, mind.
Brain is important to just facilitate.
We're not our brains.
So first-person data is 100% inaccessible
given any neuroscientific studies.
Like, you can peer into my brain as much as you want.
You can come away with blood flow and neurotransmitters
and electrical activity and proteins and water and so on.
But where is my information?
Where did I store it?
You can't see my beliefs, you can't see my desires,
you can't see my plans, my regrets, and so on.
If I am not a soul, where are those things?
If you have access to my physical brain,
why can't you actually tell me
where all that information is?
Where?
Well, memory, because, you know,
memory is storable and identifiable.
So let's not forget that piece.
I mean, that's been established for a little over 30 years.
The memory is.
Yeah, so I have actually a great skepticism when it comes to memory.
Yes, we can know certain parts of your brain, hippocampus and synaptic connections and so on,
but no one has ever been able to actually show me where is that information.
You can show that when these structures get damaged, yes, exactly,
You will not be able to remember and that's correlation.
That's not causation or identity.
Those two things have never been shown in any discipline.
The ability for memory formation, and where those memories are located, has been shown.
Yeah.
More importantly, the thing that you brought up, the two of you brought up,
and this is a little controversial with Michael,
is because, you know, friction is really important.
Like, I love friction.
I know that goes against some of the things that you say at times.
But, yeah, because friction is where we learn,
but more than that, it's when I think about what's at the end of the line, right?
At the end of the line,
if you're not delving deep into these faculties,
developing these faculties, these capabilities, these skills,
and again, I'm not just all about skills here.
People then are leading their lives to misery,
whether they know it or not.
You know, we're talking about the buona vita right now, right?
What is the good life?
What is the richness of life?
What's the point of living?
You know, the more you offload,
like you said, the more you fake,
you're just leading towards misery.
Whereas, you know, why are we enthralled as human beings?
Why do we have joy as Christians?
It's because amidst all the friction,
we've learned to do really, really good things,
and our product is not about being productive.
It's about actually doing things in this material,
real, natural world,
and recognizing that a lot of times these things that we do are good.
and they're beautiful.
And that brings joy and satisfaction
amidst all the challenges and all the muck.
And so for me, it's always about
we want to train our students
because let's be very clear.
You know, as academic institutions,
what are we in the business of?
We're not just in the business of academic formation.
We're in the business of spiritual formation as well.
And so what does that mean?
Helping develop people to live and have
and enjoy a better life
because in our beliefs,
what do we believe,
that the chief end of man is what,
to love the Lord and know him as fully as possible,
and then enjoy him and what this means on this planet.
And this is where, like, the evangelist in me kind of kicks in, right?
Like, when you do street evangelism for the first time,
it is absolutely terrifying.
It brings you that reckoning moment, like, oh, my gosh,
what am I doing on the beach here on 3rd Street promenade in L.A., right, in Santa Monica?
but when you do it and because you believe in it,
you literally believe that you are here to bring good to the world,
that's living.
That is pursuing and then recognizing that all the things in your life experience,
the memories, the emotional context,
can be brought to bring a tremendous good to people around you
that you've never met before.
And what throws me off half the time when I do this is
I'm actually surprised that people want to listen to me.
that people actually care that I am out there on the street, trying to share with them, the Word of God,
that there is hope in this world, that there's a good out there.
What blows me away is not the doing, you know, and just like Michael said, it's not about the tech,
it's about the person.
What blows me away is that people are actually receptive.
Hey, Yohan, let me jump in and ask you this.
Before we shift back, one of the arguments that Mihretu was making is a common argument for the soul,
that there's third person access.
You could study someone's brain,
but then there's first person access
that you can only get by asking the person to reveal it.
So I could know in principle
maybe your emotional state, angry, anxious, sad,
but I could never know what's driving it
unless you tell me.
So there's an interaction between the mind and the brain
in terms of maybe where memories and how they're stored,
but the content of that would require a first person reveal.
You agree with that?
Yes, and I have to because neuroscience has indicated from what are called near-death experiences, NDEs.
Love it.
People who have essentially been brain dead, potentially physiologically dead on the surgical operating table,
and they have an ability after they were resuscitated and revived to completely recall the events that were happening in that operating room while they were
unconscious and unable to see visually, unable to hear, whatever it is, right?
And enough of those instances have been surfaced in academic literature, not on the orders
of tens and hundreds, but literally thousands.
That's not a trend.
That's a pattern.
That is observed phenomena.
And so whenever you observe phenomena like that, as a scientist, I must reconcile that
that is a real thing, right?
And it's been reproduced and reported literally thousands of times.
So that indicates there is something in the first person, distinct from the physical biological reality of the brain, that is able to be aware, able to hear, listen, cogitate, cognize, and also process, and then be able to indicate that there was an emotional component there.
There was a physiological sensory component there, completely apart from the human body.
And so, yes, the evidence indicates that I have to believe that there is a first person.
And the key to that is, and then I'll come over to you, the key is that people come back and have information.
Yes.
They could not have had while their brain was not registering any activity.
One more thought on that.
We're going to go back to memory.
I've recently given a kind of interview on memory stuff.
You're absolutely right.
I mean, the hippocampus, for example,
is implicated in facilitating memory
and synaptic connections and so on.
So here's a conundrum about memory.
Let's say the three of us read a book
and we threw that book away and we met like this
and all of a sudden start talking about it.
What went from that book to wherever it went?
You didn't chew up the book.
You didn't.
Protein synthesis.
You produced a memory of the book.
That's a model.
That's a theory.
So we shouldn't confuse a model or a
theory with an explanation of the mysterious aspect of how memory actually works.
So we read it, okay, I didn't chew up the book.
I have no idea how the information went from that book to wherever it went, no idea whatsoever.
And so we have this incredible capacity to compartmentalize our own memory lives.
Like, if I were to ask you about your life history, you're not going to confuse that with the
responsibility that you have here at Biola.
You're not going to talk about your students.
With ease, you pull that information out,
within the context that requires you to do that kind of stuff.
So your family life, everything.
Peer into my brain using any kind of technology you like.
You're not going to be able to answer that question.
You're not going to show me a file.
You're not going to show me, oh, his history is over there.
His history is just one inch away from the hippocampus toward the limbic system and so on.
You're not going to be able to do that.
So there is a deep mystery here.
I also don't believe that the bearer of memory is your physical organ, your brain.
Your brain actually has a limited capacity, spatially,
but you have a capacity to memorize a potentially infinite amount of information.
Potentially, I say.
Not actually, potentially.
So the brain as a physical organ is a limited space.
So if you have an ability to contain a potentially infinite amount of information,
then it's not the brain that's doing the magic.
It's something deeply mysterious.
Something beyond the brain.
Beliefs are not borne by the brain.
Desires are not borne by the brain.
Memory is not borne by the brain.
All non-physical properties are not borne by the brain.
The brain is important.
I'm not doubting.
Healthy functioning of the brain is absolutely necessary.
But it's not sufficient.
But it's not sufficient.
Good stuff.
All right.
I want to know where each of you thinks
AI technology is maybe headed.
And you could pick six weeks, you could pick six months,
or pick six years. What's something you see that's maybe on the horizon, or maybe in your case where
this conversation is headed. Where do you see some of the technology headed?
That's a big question. Honestly, I need a lot more time to think through that because, you know,
being in the business of AI research, you're focused on what is in front of you. And so my ability
to do long-term projection is a little bit on the weaker side.
Fair. But when I think about immediately where it's headed, obviously it's retooling industries
right now. It is unfortunately playing far too big of a part in our media and consumptive habits.
The concerns that Mihretu raised, he and I actually share this concern. It's, you know, when
you don't read that book cover to cover, you just miss out on a ton. Primary sources are critical
because you have to do the mental frictional exercise of, wait, did I just, you know, I read this paragraph, it's in English, it's really hard, what did I just read?
That exercise of going back and rehearsing and trying to muck through that or go through that harder equation from, you know, into triple integral calculus, you know, those bits are there.
So there's going to be a friction, there's going to be a tension there.
On the other hand, there's the really exciting, wonderful things.
So 6G wireless is coming very, very soon.
The ability to transmit and consume information at scale,
much quicker, greater volume and velocity is going to make our interaction with digital spaces
much more immersive and consuming, which comes with all the negatives, as you can imagine.
Of course.
Other areas where AI has taken us is a mechanized reality,
where, you know, when you think about Fourth Industrial Revolution now and transhumanism,
there is always going to be a secular push to roboticize things and replace humans where they shouldn't be replaced.
So we have to be mindful of that.
But where it could be incredibly helpful is in developing and discovering new, you know,
whether it's potential advances in medicine, new means of better managing resources,
material resources on the earth,
how to do extraction for energy better, faster,
with a much smaller destructive footprint,
all of these.
So for me,
I love AI in the sense of its discovery potential
as a tool for improving the human condition economically.
Like,
the area that I get really excited about is AI
to better train robots for elder care.
Like, we're in Southern California,
A massive demographic explosion is going to happen soon.
And at skilled nursing facilities, the first thing they bring up is that their biggest challenge is turnover.
We don't have enough skilled nurses.
We have more patients than we can meet the needs of in the way that we would want to as individuals.
And so robots for skilled nursing to assist people who cannot afford it, folks who cannot get the level of discrete care necessary,
that's the thing that gets me excited.
But then all the negatives come with that.
And so I'm constantly schizophrenic between what I need to manage and what I need to advance.
That's like a whole show in itself because we talk about not wanting to dehumanize and a robot can't replace presence.
But can it help with somebody who literally has no one or not anything to help them as a part of the process?
That's where it could potentially help out.
That's amazing.
I just saw a study on AI this week.
They were talking about trying to replicate the hand
and how it's one of the hardest things
because there's so many sensors on the hand,
which to me is an argument for intelligent design:
how would it have happened by chance?
But I digress.
Tell us a little, Michael,
how you see what's maybe on the horizon
in the business world.
It's a really hard question to answer
because I don't think any of us really know.
And we've never seen a technology move
with this velocity.
I mean, it's just incredible.
Ever, in your lifetime.
Any technology we think of.
We have broken Moore's Law, right?
I mean, Moore's Law would say that, you know,
compute capacity is going to double every 18 months,
every 18 months to two years.
You know, we know just from NVIDIA alone
that, you know, the last NVIDIA chip increased inference
by 30x from the previous one.
I mean, we've never seen technology like this.
So the speed and velocity,
And we've only really seen generative AI.
I mean, that's the buzz, right?
That's what we're talking about.
And it's actually hit us in places that we didn't think it would hit first, which is like software development.
And, you know, some of those areas, you know, I mean, I'm very long on the, you know, the discovery capacity of AI.
I think DeepMind's doing some incredible stuff.
I mean, there was one study recently where in 17 days they discovered,
I think it was 400,000 new hard materials.
And that would have taken us as human beings,
800 years to discover.
17 days.
So, you know, 800 years collapsed to 17 days.
So like the breakthrough possibilities on things like,
how will you power your next vehicle?
And, you know, the medical breakthroughs.
I mean, we've already been able to use AI, in a fairly crude
format compared to where it will be, to, you know, increase breast cancer detection by 17%.
And, you know, so you start to think about these medical areas, I feel really good that my,
on the opposite side of that, so it's less about the tech and it's more about the concentration
of where the tech's being built, you know, I'm, I'm concerned about this thing that's now being
called, you know, abundant AI. You know, how do we, how do we make AI more abundant, like more accessible
to everybody. It sounds good, right? It sounds great, except for the concentration of the people
who are trying to do that is really locked in the three, four, five different organizations that are,
you know, in strange ways, invest in billions of dollars in one another. So it's become a bit of a
closed society. And I think we as believers need to be concerned, even the term abundant AI, right?
We know what abundance means. And, you know, we know,
you know, what John 10:10 says, that Christ came to give us life abundantly. So the deception,
even the deception of that is scary. So, you know, on the medical discovery breakthrough
side, it's going to be astounding. On the misuse side, the limited, you know, sort of isolation
of power and who has control over those things, I'm super scared. How much of it, if you can say,
in the business world, is the technology and the use of it driven by money?
I don't know if I can give a percentage, but I will say that it's the first, second, third
thought.
That answers my question.
Fair enough.
Of people who are applying this, right?
And I have never, I've been driving change inside corporations my entire career.
I have never seen a change mandated upon people with more velocity and aggression
from the top down than what I've seen with this.
And the human beings aren't ready to, you know, adopt it completely because we're going
to give up something that's essential to us, you know, the conversation we were having
before.
You know, I've built my identity in this work that I've mastered and I'm being forced to
use a solution that will put me out of business from that perspective.
So, yeah, the commerce part of this is driving it and all these other conversations we're
having, they need to catch up. And I think that's our obligation. Our obligation is to lean in more
aggressively on the ethics side, on the, you know, the consciousness side, all these other things
because otherwise commerce will drive this. Man, that's definitely a, that's a scary thought.
We'll come back to how each of you are leading the way in that regard. I'm going to throw a wrench
your way, Mihretu. You said something earlier. This has kind of been in the back of my mind.
What I've been hearing people say, maybe for the past few decades, is, when will computers
and machine thinking catch up with human beings?
You said it earlier, there's a push to show
that it's surpassed human beings
and we are inferior.
Is that a shift that you've seen?
Is that where kind of the conversation is going?
Like, forget catching up; clearly AI is superior to human beings.
Yeah, I think, okay, this is exactly what proponents
of strong AI want to do.
Define strong AI.
Strong AI is the idea that computers are not simply
tools.
Computers can ontologically be superior to human beings.
Their own inventors and creators.
So they're not just used the way Michael framed earlier.
They're not just imitating human thinking.
They're really doing it and going beyond it.
That would be weak AI.
So weak AI: chess-playing computers, and computers that allow you to do XYZ.
That's fine.
They are our servants and we can get things done.
Strong AI is entirely predicated upon the philosophical assumption
that we can create superintelligence.
Superintelligence would be kind of hardcore evidence
that you are inferior to your own gadgets.
That's exactly what they do and they write about it
and they argue for that premise.
And I think that's not possible
because I'm actually working on a mathematical model,
I think for the past seven years or something like that.
I call it n plus one.
So earlier I said that everything that we've invented so far
is a reflection of our creativity.
AI is not like something that descended from heaven,
you know, no matter how complicated it is.
Or evolved naturally.
Yeah, yeah.
Yeah, it didn't come through natural selection
or kind of evolutionary stages.
It took hours and hours,
huge collaboration among experts and so on,
and they write algorithms and programs and so on.
I mean, it's literally ironic to say
you look back at what you've done and say,
yeah, you are superior to me.
What does that mean?
You're already ahead of that
because it took you to bring that into being.
So it's impossible to narrow that gap.
That's not a physical gap.
That's not a technological gap.
That's an ontological gap,
not merely a physical gap.
So, for example, at no point,
none of us would say,
oh, now we are equal to God,
shoulder to shoulder,
and God is inferior to us
because we created these gadgets and he didn't.
Who gave you the infrastructure,
the cognitive infrastructure, that allowed you to do this?
God. And God is not merely physically superior to you,
so you're not going to be in a position
where God would all of a sudden become inferior to you.
We are gods to these gadgets.
It took us.
We brought them into existence.
It doesn't matter what they end up doing.
So we shouldn't be surprised.
And I am laser focused on human beings, by the way.
I always appreciate human beings.
human beings are incredibly gifted creatures.
But when they tell me that a computer gadget is superior to me,
what are you talking about?
You are doubly super intelligent.
It took you to bring this into existence.
So that argument doesn't really fly.
You can talk about smartness in a functional sense.
Yeah.
There's a distinction between functional smartness and natural smartness.
So computers can be, yeah, functionally smarter than you,
because they are so fast they can contain and analyze a huge
body of information in seconds.
All right?
That's functional ability.
Functional ability.
You have a natural smartness that allowed you to make such a thing be possible in the first place.
So therefore, you maintain that status, that ontological status,
with zero fear of that status being hijacked away from you by any future AI.
So I'm not going to lose sleep over it, by the way.
And many people make these hyperbolic statements about AI, blah, blah.
I honestly love it when people make such arguments.
They have zero basis.
It's almost like me saying, I'm not giving this interview in English.
While you're doing it. It contradicts itself.
It's pretty contradictory.
Fair enough.
That's the importance, again, of critical thinking.
What does it mean to be human?
Do we have a soul?
And if AI is derivative of us, what does that say about us?
I'm going to ask you each of a question.
What would it mean to think biblically about AI?
I'd love to apply biblical stories, biblical principles, and ideas to AI; I think that would be helpful.
One quick comment I'll make while you're thinking about this, Mihretu.
I've always thought it's interesting that in sci-fi movies, we make robots, they learn to think, they rebel against their creator, and then we have to save ourselves from the rebellion.
It sounds a lot like Genesis 1 through 3.
Some people say we're going to have strong AI and they're going to rebel against us.
We're going to have to stop the rebellion.
I'm like, we've kind of heard this story before, which tells us there's nothing new under the sun.
But with that said, what would it mean in your field or broader just to think biblically about artificial intelligence?
So I teach computer science courses, and I start every class with
a verse of the day and an AI of the day, actually.
Really?
Yeah.
Interesting.
Because I'm trying to show students the correlates.
Like, what does scripture tell us to help frame what an AI can do or an AI is doing right now?
Because the students of today will be the engineers, the algorithm makers, the inventors, the entrepreneurs, and the leaders of tomorrow.
So for me, it's a sobering reminder that I have responsibility of training up that next generation.
So when I think biblically about these things, you know, that's an example of a tactical, practical application, right?
Here's a verse. Here's an AI. How do these mesh?
And by the way, you're not just throwing a verse on there. Sometimes integration is like, hey, you read a verse that has nothing to do with AI.
Yeah, no. I mean, like, I was fortunate since I'm somewhat new to this faculty. I get to take faculty integration with David Turner. He's amazing. It's fantastic.
Integration is not an afterthought.
Integration is the beginning.
And so if you take that approach, you –
and I think that's just something unique about Biola.
But when you take that approach, it makes you think deeply,
and then you recognize, oh my, this verse is directly applicable to this technology.
Good.
How do I now equip students to do the right thing, to be ready to do the right thing, and to out-think the machine: to out-write it, to out-code it, to develop a better algorithm?
That's something I'm always trying to teach students.
I had a student this last summer who I basically used as a guinea pig, to determine if I could teach an undergraduate, a junior rising senior, the fundamental mechanics of how to build an AI large language model system from the ground up.
And as that progressed, and this is a very bright student, she made this definitive exclamation to me last week.
And I asked, were you able to demystify the AI?
She said, oh my gosh, it's just math.
Interesting. So she had that revelation moment, that light bulb going off, because over the course of the year I taught her how to look at the definitive paper behind one of the key algorithms that drives the world, extract the mathematical equations from it, run the calculus from it by hand, then go through the code, get the actual code that's out there, it's open source, walk through it one by one, learn how to manipulate it, and have it running on her GPU machine for the last 30-some-odd days in a row.
And she said, it's just math.
And for that student to have that light bulb moment go off, that was critical for me, in my position as an academician, as a professor.
It was also critical because then the confidence switch went on.
This student now feels that any AI, she can design it, she can build it, she can defeat it, she can expand it.
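That "it's just math" moment can be made concrete. The transcript doesn't name the paper, but the description (a key algorithm behind large language models, with equations you can run by hand and open-source code) suggests the transformer's attention mechanism, which reduces to a few matrix multiplies and a normalization. A minimal, purely illustrative sketch in Python with NumPy (the shapes and random inputs are invented for the example):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating, for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    # A couple of matrix multiplies and a softmax -- "just math."
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 token positions, width-8 vectors
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))

out = attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Each row of the softmax output is a probability distribution over the other positions, which is exactly the kind of "numerical weight" discussed later in this conversation.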
And so going back to the ontological piece that Mihretu was raising: we have to take that responsibility. To whom much is given, much is expected in return.
Philippians 4:13, right, having the strength to face all conditions. We need to empower one another, and the world at large, so they can achieve far greater than their potential, far greater than where they came from, far more than they expected,
when encouraged and being fed towards what is ultimately good,
you know, the beautiful, the true.
You know, if you continually feed people,
the good, the beautiful, and the true,
one develops better taste.
When you have better taste,
you become more demanding of what things should be
versus the way they are.
That, to me, is the best way to change the world, because as Christians, as God has modeled for us, it's about being generous.
We have something the world does not have.
The more we give it freely, lovingly, openly,
with a critical mind to show people
how to reach for steak Diane instead of chicken nuggets,
we will tremendously impact people for the better.
And so that's kind of where I try to take AI.
I try to take AI in a direction that is irrefutably superior,
good, pleasing, beneficial to the world.
And when we do that, we can change the conversation.
I love that because it hints back to what you were saying earlier, Michael, about how we approach this holistically.
So one, it's like, what's the worldview behind AI? Well, it's actually really just math. I hadn't even really thought about that, but you're right, it's just an algorithm. That was helpful to me, because I don't understand how a lot of that stuff works. But you're also asking, how do we create, which is a biblical idea? Absolutely. How do we give, and how do we use this technology to love our neighbors? That's thinking biblically about this tool that we have. So I love it.
What about business? What would it mean to think biblically about the intersection of business and AI?
Boy, there are so many places to start.
I think, first of all, I know this guy pretty well. I want to argue about friction in a bit,
but I know him pretty well, and I know he isn't just picking out a verse, right? So, you know,
AI and verse of the day, it sounds like a really great starting place. And I think as Christian leaders,
You know, we need to realize, like, this isn't dystopia and this isn't utopia.
Like, there's something in between those two things.
It's not either or.
And, you know, we're image bearers.
And we're called, you know, to steward over the earth as relational, creative, moral beings.
So as I think about this as a Christian business leader, and I think about, like, what is our responsibility?
Our responsibility is to engage it, to understand it, to better know what it does and what it can't do, and to steward over the math, right? I mean,
at the end of the day, it's as simple as we've got a biblical obligation, in my opinion,
to lean in, scripturally, to lean in, to understand this and to steward over it so that
it doesn't drift astray, which it will if we don't. And, you know, we've seen this before,
you know, we call it social media, and we buried our heads in the sand.
We were asleep at the wheel, and it's generated the loneliest generation on planet Earth.
I don't want to repeat that pattern.
So as a Christian business leader, you know, I want to lean in, I want to understand it, I want to use it, I want to engage it, so that I can shape its human value at the end of the day.
And, you know, so that would be the advice I would give: don't bury your head in the sand ever again. Use it for all the positive reasons, and then shepherd over it to limit the negative consequences. Obviously not all of them, but do our best to limit the negative and unintended consequences.
Mihretu, I'm going to come to you in a second, but without opening up a big debate, just highlight for us: where is the difference between the two of you over friction, just so I understand what's going on?
Well, first of all, Yohan and I have lots of debates.
Okay.
Which we have here all the time.
That's healthy.
And it's super constructive.
And I'm the entrepreneur.
I'm ready to run.
Like I'm ready to go implement this stuff.
And Yohan reminds me that there's this thing called security.
And, you know, occasionally we need to think about data and privacy, which, of course,
I'm going to think about.
But, you know, so that's friction, right?
And there's certainly friction to think about, you know, the cognitive process.
And if we make it too easy on our students, they come out of this environment unprepared.
But I do believe there's a difference between healthy and unhealthy friction.
Healthy friction is challenging our students and ourselves to think about, you know,
doing this and doing this well and learning in the process.
So we have high judgment when we need it when we're in these corner cases, these edge cases.
But the unhealthy friction is the muck.
The stuff that just gets in our way, the stuff that is actually dehumanizing.
So I separate those two.
That makes sense.
And I'm sure we'll have another coffee conversation.
I love it.
At some future point.
And that's actually, to your point earlier, there's an instrumental good about that friction: to disagree well, to discuss, to debate.
That's a part of how you learn.
And I remember with smartphones, the moment we had Google, people would debate something.
They're like, well, let me just look it up.
And I was like, no, let's debate it.
Let's try to see if we can remember this.
That's something that's lost.
So I appreciate that dynamic.
That's awesome.
What would you say, as a philosopher-apologist, it means to think biblically about AI?
I think, first of all, to have a very clear conception of the fact that these are tools.
Any tool, by definition, could be useful or could be a very dangerous, destructive thing in your hands.
We all have tools. I mean, there are knives on our kitchen table, and we could kill someone with them, or we can cut up food and, you know, feed our families and so on.
I think we need to have that clear conception of technology in our mind first.
And then I think about using technology biblically is really kind of to connect it to our own calling.
And I really share what both of you have said; it's incredibly helpful, and I think we are in total sync.
It's an incredibly important approach.
Like, we're cautious.
We're not megaphoning some hands-off approach; we're not saying hands off.
But we're also not saying this is our Redeemer, let's go and grab it.
I think that discernment is what I understand as biblically implementing and using these technologies: to advance the cause of the gospel and help people solve problems and so on.
And there are unintended consequences.
let's say self-discovery, self-identity is being lost.
If I don't do something, like if I literally don't put my heart into something, if I am trying to cut corners and let AI think for me, make my bed, cook for me, open my refrigerator and tell me what I need to eat, then we're literally turning ourselves into the people in that Disney movie.
I don't know, but AI making my bed.
I think I would be okay.
with that one.
I know your point.
Without outsourcing everything.
The very notion of literally the meaning of life is predicated upon enjoying your daily
routines.
That's what it means.
If I hand over everything to these tools, then what am I supposed to do with the time that's on my hands?
Like, I come to my job and I strike up conversations with my colleagues and I joke with my students. That's what it means.
And we're really losing sight of how the technology is supposed to help us and how it's not supposed to help us.
That discernment is literally what I think of as biblically being sensitive as to how we are supposed to deal and engage with this technology.
I'm going to ask you a question. You can say yes, no, or it's more complicated. I want your quick take on this.
You mentioned a knife, which can be used to stab somebody or for surgery, which is good. So a knife is morally neutral; it's how we use it. Is AI morally neutral?
Absolutely not. No, not even close. It's entirely, entirely rooted in worldview.
So we might think these companies are trying to help us. Look, they are getting across a very subtle message to all of us, such as: you are not better than the tools. There are many, many people who are arguing that you literally are not better than the tools.
You're not only a biological machine, you're less than a biological machine.
The whole point of transhumanism is just to prove that point.
And digital immortality,
extracting your mind and uploading it on gadgets
and you won't die.
You won't get sick and so on.
What would that do to our conception of the body of Christ, and the centrality of the body that we have in the resurrection? We're looking forward to that.
Chatbots. We're using chatbots.
Churches are using chatbots and so on.
Why isn't a human being responding to the questions that members of congregations have? Why are you directing them there?
So there are so many subtle things that we might lose sight of.
They might really be inconsistent and incompatible with what we believe from a scriptural standpoint.
I think that careful discernment is really, really important.
This could be a whole fall of conversation.
And I'm going to have to get your quick take on this because you reacted to this.
Yes.
Like if AI is not morally neutral, then how and when can we use a tool that affects us morally and our worldview?
We can't answer that now, but even thinking about that should shape the way we approach tools.
But you reacted to my question.
Tell me what was in your mind.
The reason I believe generative AI tools are not morally neutral is because it depends on how they are trained. If it were neutral, you would have a perfect balance, mathematically, statistically, and numerically, in terms of its opinions and how the AI is trained and fed.
Training and feeding comes down to what you select to train and feed on.
And so in AI, when I talk about it being math, it's because AI is essentially massive tables of words and concepts that have a numerical weight attached to them, in terms of a probability.
And that comes down to frequency: how much, how often, and how something is tuned to respond back with something positive, negative, or in the middle.
It goes back to training.
The scripture that comes to my mind is how you raise up a child.
And so in the Garden of Eden in Genesis, we knew only good. As human beings, we knew only good.
When Satan deceived us, we gained only net bad, because if you only knew good, and then you're taught the knowledge of good and evil, then from an absolute math perspective the only net gain you got when you learned good and evil was evil.
And so, when that works out mathematically, now think about how you train an AI. For example, a usual first step is to completely absorb everything inside of Wikipedia.
Right?
That's a usual first step in terms of training a large language model.
You need to understand syntax, language, sequence, diction, semantics, et cetera, right?
When that happens, it's accumulating all of this data to then start making weights to
be able to say something is more important, less important.
Higher priority, lower priority.
That, again, is now getting to the rudimentary pieces of it. But what if you're now only feeding it poetry of violence? What if you're only feeding it the depositions from The Hague on sexual violence wrought upon women in armed conflict? Then it knows only those things you taught it.
Flip side. Now let's say you taught an AI starting with the Bible: Koine Greek, Hebrew. Now you teach it Augustine, Calvin. That AI is going to know things; it's going to be able to understand, respond, and analyze things innately from a perspective of what is good, versus poetry of violence, depositions, pornography.
What you train an AI on determines how it operates.
And so, to that question, is AI morally neutral?
No.
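The point that "training and feeding comes down to what you select" can be illustrated with a toy stand-in for a language model: bigram counts turned into next-word probabilities. This is only a sketch, not how production LLMs are built, but the principle is the same one described above: the weights are frequencies derived from whatever text was selected. The two tiny corpora are invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    # A toy "language model": count word-pair frequencies and normalize
    # them into next-word probabilities. The "weights" are nothing but
    # numbers derived from the chosen training text.
    words = text.lower().split()
    counts = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return {w: {nxt: c / sum(nbrs.values()) for nxt, c in nbrs.items()}
            for w, nbrs in counts.items()}

corpus_a = "love your neighbor love your enemy love one another"
corpus_b = "fear your neighbor fight your enemy fight one another"

model_a = train_bigram(corpus_a)
model_b = train_bigram(corpus_b)

# Same prompt word, different training data, different "opinion":
print(model_a["neighbor"])  # {'love': 1.0}
print(model_b["neighbor"])  # {'fight': 1.0}
```

Neither model is "neutral": each one only knows, and can only echo, what it was fed.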
These were built on high-volume, high-resource languages, which are predominantly English: Wikipedia, then books, then what's available on the predominantly Western, English-language internet, because the U.S. internet presence is considerably larger than that of the UK, because we invented the internet here.
And so there is a non-random, proportional difference. There's a map in terms of the higher frequency of what these tools have been fed.
And so we cannot be in a place where AI is morally neutral, because the rest of the world doesn't have the digital maturity that the U.S. has, much less other predominantly English-speaking nations.
Now, is the next version of the Internet predominantly going to be potentially more non-English?
Yes, because there's a billion more people in the PRC, a billion more people in India,
with a lot more languages than we have in the U.S., but lower probability,
lower propensity, and lower abundance, if you will.
And so the next versions of the Internet will be predominantly non-English-based.
And so when we disobey God's command to go forth and multiply, when we have fewer families, then fewer people of that cognate language are present.
So in one sense, if you were to think of it from an arms-race perspective, the people who have the greater population, who produce more in digital format, in their literacy, in their content, in their media, could dramatically shift the responses of an AI.
Could, could, could. Could is key.
But this is the stuff I think about late at night.
And it gets worse than that, actually, right?
Because not only were they trained on biased data sets from a morality standpoint, but that bias is amplified as more people with a bias that may be counter to our worldview are retraining it.
So there's something called bias amplification.
Not only is it already biased based on the training sets, but that bias is perpetuated over time.
So I think it's dangerous from the start.
And again, I keep saying that's why we as believers need to engage, because we have a counter-worldview that is more necessary than ever in the world.
Can I add something?
Sure.
I think you both have raised a very important point,
especially when you talked about computers being like number-crunching tools
and nothing else.
I think that speaks volumes because there's a huge confusion.
Let's say computers can have their own free will,
and computers can have consciousness, blah, blah.
It literally shatters that argument.
Okay.
Here's a predetermined tool.
You set the tone already from the get-go,
how the tool is supposed to operate.
And the general public doesn't really know that.
Okay, oh, yeah.
ChatGPT is conversing with me like Mike.
Oh, there must be some engine in there, and so on.
People are falling in love with these things romantically.
Well, that's shocking ignorance.
Yeah, shocking ignorance.
But when you lift the hood, it's not that surprising.
Yeah, the sophistication is surprising, but that's a reflection of your mind.
That's a reflection of how human beings can go, how far they can go in thinking,
and the sophistication that they bring into the table.
So I think what you just said is a very good response to these hallucinatory comments that people are making: oh, emotions can be uploaded, first-person data can absolutely be produced by AI, and so on.
There's literally nothing there.
There's nothing there; people should really know that.
And computers cannot generate a single thought of their own, because a computer is not an agent.
What you haven't put there is not going to come out of it.
And I think that's a very good point.
That's a mathematics-based number-crunching tool and nothing else.
Well said. I like that. Last question, and I'm curious because my son is in business and my daughter's doing nursing, which obviously intersects with science and your colleagues. I want to know: what is Biola and/or Talbot doing uniquely to lead the way when it comes to AI? What are we doing?
Yeah, I'll speak for Biola. I guess we are building our own AI. That's the first bit. And we are taking the
approach that I described. What happens when you build an AI from the ground up from a completely
biblical perspective that is ultimately God-honoring, edifying to the human? Because the Bible
and all of the commentaries, analysis, is the most sophisticated and the most infinitely sophisticated
written piece on the planet. And the more you delve into the scripture, the more of the
sophistication starts revealing itself, that is infinite because it was inspired by God.
So, you know, mathematically, when you take an image and you do a projection from something that is
infinite to then another surface, you imbue some of that.
And so there is an unbelievable richness there.
So I'm very excited in this day and age of creating AIs that are biblically informed and
can be astonishing in the limitless expansion there.
So that's one.
The other is on a practical note, because we're called to go out into the world. We're called to love one another. And so the first
bit I was talking about is kind of my expression and how I try and live out loving the Lord and
my God with all my heart and mind and strength. The second part, now on the horizontal axis, is
how do we use AI to uplift and serve and love the widow, the orphan, the sojourner, the infirm,
essentially the definition of biblical justice. And so that is why I am really focused on how to
build AIs for robotics, for serving those who are infirm.
That is a really challenging area, but I find a calling towards it because I can't think of a
better way of serving humanity.
I love it.
Great, great word.
Tell us a little about business.
We've done a lot.
The way we think about this, and it won't surprise you, is engage, educate, and elevate: thinking about how to use AI righteously.
So we've launched an AI lab across campus, so it's interdisciplinary, which is really important
because what it's done is it's embedding the thinking from across the university, and it's not
just technology or not just business driving the way.
You know, we're thinking about this from all the humanities perspective.
We're thinking about it much more broadly, and students are engaging.
They're engaging in dialogue.
Like they're building, they're developing.
They're, I learn from the students.
You know, every Monday morning I get like a tutorial.
So I don't have an AI of the day like Yohan does.
But I have a 30-minute tutorial of, you know, hey, how can you use n8n to automate your workflows?
And I get like this, you know, reverse teaching from the students.
But we're holding them accountable for thinking about this differently.
And, you know, the way I like to describe it is, our tech stack rests on a foundation of ethics. And our tech stack starts with: can AI do this? That's an easy question.
Should AI do this? That's a much harder question, and you can't answer it if you don't have a worldview, if you're not biblically grounded, and if you don't have a set of ethics that you're making that judgment through.
And then ultimately, if the answer to those two questions is yes, how should it do it?
Exactly. So that we're engaging in that every single day. Every class inside the business school is
teaching AI, but it's teaching ethics side by side with it. You know, we've got an AI studio that we're designing to partner with kingdom leaders and to help entrepreneurs, you know, locally, to engage and to be a bridge to the broader world; you know, sort of a version of Acts 1:8, how do we take this beyond Jerusalem and start to reach the uttermost parts of the world? So we partner with ministries. We've worked with Samaritan's Purse and Global Media Outreach to help them build tools.
So lots of things.
But the short of it is, we're engaging very, very actively, but we're doing it in a way that elevates students' thinking so that they don't just leverage AI and start to generate AI slop, as opposed to creating kingdom impact.
Love it.
Great stuff.
Mihretu.
Yeah.
So in apologetics and philosophy, we take philosophical challenges that come out of disciplines like artificial intelligence and neuroscience and transhumanism, and, you know, current hot-button issues.
We have to respond to those things from a biblical standpoint.
It's our responsibility to give biblically grounded responses to people and help them to have a biblically sound understanding of, you know, the discipline of neuroscience and so on.
And currently I'm teaching the philosophy of neuroscience, and I taught philosophy of AI last semester, and I'm going to teach philosophy of physics next, you know, in the spring semester.
So all of these courses are going to equip believers and they are connected to AI, to generative
AI. And there are so many claims being made that are contrary to what we think is the truth.
And we have to lovingly, respectfully, bring these issues to the table and invite those people to
have sound conversations with us, and that's our responsibility.
And I think we're doing great in that.
And as you know, Sean, we are, you know, upgrading apologetics in so many different ways, and conversations are going on.
And my hope is, in the future, to have an equivalent apologetics lab, mimicking your style.
Let's do it.
Let's do it.
And that style is going to be in sync with what you guys are doing.
It's an apologetics lab where we literally wrestle with current hot-button issues and cutting-edge research coming out of different disciplines: what kind of biblically grounded responses can we produce, just like Mike was saying.
And I really liked everything that you both have said.
I mean, this is just incredible.
You rarely find such a balanced understanding about AI
or technology in general.
There are two extremes.
Go and grab it, this is your Redeemer; or, hands off.
We are somewhere in between.
No, no, no, no.
We grab a little bit of from here.
We also push back.
And that's our uniqueness here at Biola.
And that is the most important position to be in.
And so I think I'm hoping and looking forward and praying to have Apologetics Lab.
Well, Mihretu, we're thrilled to have you here. When we were looking for new faculty, we knew who you were; we knew the excellence that you would bring.
But people ask me, what are the big questions coming up in apologetics and theology and culture? It's always AI.
So to have somebody publishing the way you are, speaking the way you are, teaching the classes you are serves our students really well. And that's true for all of you. Thanks so much for your time.
Thanks for your great work for the kingdom, what you're doing here at Biola. If you're watching this,
you can email us at thinkbiblically@biola.edu with questions from this episode. If there are other areas related to AI you'd love us to cover, let me know. Or if you're watching this on YouTube, comment below; I'm going to have my team watch this. And if there are certain areas or issues that came up today where you're like, wait a minute, I want more help to think biblically or apologetically about that, let us know and we will circle back.
Fellows, this was fun.
Thanks for the time.
Thanks for having us.
Hey, friends, if you enjoyed this show,
please hit that follow button on your podcast app.
Most of you tuning in haven't done this yet,
and it makes a huge difference
in helping us reach and equip more people
and build community.
And please consider leaving a podcast review.
Every review helps.
Thanks for listening to the Sean McDowell show,
brought to you by Talbot School of Theology
at Biola University, where we have on-campus and online programs in apologetics, spiritual
formation, marriage and family, Bible, and so much more. We would love to train you to more
effectively live, teach, and defend the Christian faith today. And we will see you when the next
episode drops.
Hi, I'm Beckett Cook, host of the Beckett Cook Show. I lived as a gay man in Hollywood for many,
many years until I had a radical encounter with Jesus 13 years ago. Since then, I've gotten my
master's degree in seminary and published a book called A Change of Affection. On my podcast, The Beckett Cook Show, I sit down with fascinating Christian scholars and thinkers to address the lies of the culture and bring biblical truth to bear on those lies. To start listening now, go to LifeAudio.com or search for the Beckett Cook Show on your favorite podcasting platform.
