Theology in the Raw - Should Christians Use AI? A Dialogical Debate w/Doug Smith and Jay Owen
Episode Date: May 5, 2025. Doug Smith offers an insider’s perspective on the power of technology, informed by over three decades of software development experience and a lifetime of Bible study. As a software developer, Doug has served Fortune 500 companies, startups, universities, government agencies, and media personalities. He’s currently an Android-focused engineer with Covenant Eyes and a proud ambassador for ScreenStrong.org. See his recent article on AI and ministry https://renew.org/should-we-use-generative-ai-chatbots-for-ministry/ and his recent book [Un]Intentional: How Screens Secretly Shape Your Desires, and How You Can Break Free. Jay Owen is the Founder and CEO of Business Builders, a marketing agency in St. Augustine, Florida, and the executive director of communication at The Church of Eleven22. With over 20 years of experience growing businesses, Jay now helps others leverage AI technology for sustainable growth. Follow Jay's work at https://aiwithjay.com
Transcript
Hey friends, welcome back to another episode of Theology in the Raw. This is going to be a
conversation or a dialogical debate about the question of should Christians, Christian leaders
in particular, use AI. And I have two guests to dialogue about this really important question.
Doug Smith is a software developer who has served Fortune 500 companies, startups, universities, government agencies, and media personalities. He's currently an Android-focused engineer with Covenant Eyes. He's also the author of the book [Un]Intentional: How Screens Secretly Shape Your Desires, and How You Can Break Free. Jay Owen is the founder and CEO of Business Builders, a marketing agency in St. Augustine, Florida, and the executive director of communication at The Church of Eleven22. With over 20 years of experience growing businesses, Jay now helps others leverage AI technology for sustainable growth.
Man, I love this conversation.
They were both very thoughtful.
I honestly feel like both raised really good points.
They clarified their perspective.
We talked about a lot of things related to AI, ministry, discipleship, the relationship
between AI in particular and technology in general. So please welcome to the show, the
one and only Doug Smith and Jay Owen. All right. Welcome, uh, Jay and Doug. Uh, welcome back Doug. You've been on, uh, once
or twice, I think. Yeah. At least once. Yes. Thank you. Okay. Yeah. Yeah. Uh, and Jay,
first time, thanks. Uh, thanks so much for joining us on Theology in the Raw. I'm really
excited about this conversation because as I said offline, um, well, I'll just say it,
say what I told you guys. Like I feel like in this area, most people's opinions
are outpacing their actual knowledge,
but that's, it's 2025.
So everybody does that, right?
Like everybody's an expert on vaccines
and foreign policy and all these things.
So I'm so glad to have two people
that have actually done some deep, deep thinking,
reading and implementation of this question. So, I've
already told the audience what we're going to do. So, why don't we just jump in? And
Jay, why don't you just spend a few minutes just unpacking your position? Like, where
are you coming from in this conversation?
Sure. I'll just kind of set the stage and maybe define a few key terms. And Doug, feel free to adjust or modify, make sure we're all talking
about the same things. We're really talking about AI, AI and ministry. Should we use it? Should we not? Where are the boundaries? What are those guardrails?
What should we pay attention to? And Preston, I think you're right. I think certainly everybody's
opinions outweigh their knowledge. Me too, especially in this world of AI, because the
pace at which technology is advancing is like nothing I've ever seen. I mean, it is, you know, software cycles, you know,
used to be in quarters or in years,
and now it feels like they're in weeks and days instead.
And it's the wild west.
So we should pay attention.
I think, you know, that's part of, you know,
where Doug has kind of drawn me into this conversation
is going, hey, we should pay attention to these things.
So just to define some terms for people out there listening, when we talk about AI, we
could be talking about a lot of different things.
So let me just lay the groundwork a little bit.
The first I would say is kind of like a specific artificial intelligence, a specific AI.
We've been using stuff like this for a very long time.
Something as basic as your GPS, you could argue, is a version of artificial intelligence.
It's taking information that it knows.
It's making a decision to help you get from point A
to point B as fast as you can based on distance,
mileage, types of roads, traffic,
all these kinds of things.
So it is artificially using intelligence
that we would otherwise have to use
to help us get from A to B.
Same thing's true for your Netflix recommendations,
for bank fraud detection.
These are very specific, tailored, highly guardrailed AIs. But what
we're really going to spend most of our time talking about today, I think, is what would
be termed generative AI. This is the ChatGPTs of the world. That's the one that kind
of made it all famous a couple of years ago. There's many other iterations of that. And
then one leap beyond that is what people would call AGI or artificial general intelligence. That's the stated goal
of a company like OpenAI. And that is really like almost sentient in its ability to think
and process. And things do get a little scary there, if we're being really honest. But let's
focus on generative AI. So generative AI specifically in the context of ministry. What is it?
At its most basic form, these large language models, not to get too nerdy for the audience out there, but that's what they're called, these generative AI models like ChatGPT, or others you may have heard of, really are just predicting the next right best word.
It's in essence a much more complicated version of that.
You know when you're typing on your keyboard on your phone and it'll kind of guess what
the next word is?
It's kind of doing that. That's in essence, you know, very, very, very, you know, kindergarten
version of what the technology is doing. And it's doing that based off of its training
models and off of a very large knowledge base of information that's being provided by, you
know, whatever company is constructing that model, which is something we should pay attention
to.
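As a rough illustration of the next-word prediction Jay is describing, here is a minimal toy sketch: a word-level bigram counter. This is an assumption-laden simplification, not how any specific product works; real LLMs use neural networks over tokens and vastly more data.

```python
# A toy "predict the next word" sketch: a word-level bigram counter.
# Real LLMs use neural networks over tokens and far larger training data;
# this only illustrates the statistical idea of picking the most likely continuation.
from collections import Counter, defaultdict

corpus = (
    "in the beginning god created the heavens and the earth "
    "and the earth was without form"
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word; no notion of truth involved."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> "earth", simply because it is the most frequent continuation here
```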
And so the main question today is, should we use it? I think this is the main question we can adjust as we go if we need to. Should we use it in the context of ministry? My position is that we should.
I see it as a five-talent situation. I see it as an opportunity for resources that have been given,
and if we don't use those resources appropriately, I actually believe that we're being a poor steward of the technology that has been gifted to us in the same way
that we would be if we were just like, we're not going to use the internet. Now, there's people
out there that believe that in the world today, that believe we should use no technology at all.
That's their position. No problem. Many churches operate many different ways. My position is we
should use all the resources that we have to the maximum
of their potential for the glory of God. I'll caveat it with this. We should pay attention.
We should pay attention to what the AIs are returning to us when we ask questions, and
we should do so in the same way that we should pay attention to any preacher that's on any
stage, any book that we read, or any other information that we let come into our eyes
or ears. We should validate that against the actual word of God
and go, is this true?
And so that's kind of like the starting ground,
if you will, from where I am.
I think that it is a Matthew 25 talent situation.
I do not believe that we should bury it in the sand.
I believe that we should use it to the maximum benefit
to reach as many people as possible.
I believe it's an issue of the Great Commission.
So that's kind of where I would kick us off.
Thanks, man.
That was super helpful.
Yeah.
Doug, you want to give us your...
Yeah, that was fantastic, Jay.
Thank you so much for introducing us.
Thank you for this opportunity to have this wonderful conversation.
So as you can imagine, I take a different view.
I've been researching our relationship with technology as a lifelong software engineer, and in the context of being a Christian and a dad and a husband and a Bible student, really trying to understand what the industry is doing and how the industry changes us and how technology specifically changes us.
As Jay said, the statistical LLM core, what I call the way that it works, is like this autocomplete
on steroids, right? That's how it works. But it is not grounded in reality. So, the next predicted
word is just that, statistically predicted. And therefore, there's no true grounding to anything
real. It sounds really amazing. It's almost magical. I call it like it's driving incantational thinking.
We get used to a prompt and a quick response,
and that's by design.
But wrapping that is the natural language UI that's around it. The reason ChatGPT and Claude and all these are so compelling is that their conversational UI is on purpose.
So they're actually designed to mimic the back-and-forth chatting that we have with people. And so if you've used them, it talks about itself with first-person pronouns.
I'm here to help you. I'm sorry I made that mistake if you call it out for a mistake.
I mean, it's always trying to... And that is by exquisite design of the user interface. They
use the top neuroscience, top behavioral psychology.
They are running the same dopamine exploitation loops that we've talked
about with social media. It's all there in that UI. Then around that, I would wrap this idea of
the stories that we're supposed to believe about generative AI. We're supposed to believe that
it's intelligent. It reasons. It knows. It thinks.
It's getting better all the time.
It's going to, you know... the world is changing, and we're all supposed to adopt it. These are stories.
This is what I would say are propaganda of big tech.
All of these wrap in what I would say is an extremely deceptive set of tools.
So our default as a culture is typically, it's a tool. I can use it however I
want to. But most people don't recognize that these tools are designed to be used. They're
designed just like social media to exploit our weaknesses in the way our brains work.
And this tool has an extra Christian ministry problem in that deception is at its core. It
pretends to be intelligent. It pretends
to know the answer. And for all the answers, it has no idea whether they're true or not. So when Jay talks about, you know, we need to have discernment, we need the Word of God, when we're using these tools over and over again, that level of discernment, that level of critical thinking, and these are my core problems, my core concerns: our critical thinking, our discernment, our wisdom are degraded, because instead of being creators and writers and thinkers, we're sitting back as consumers accepting what the tool creates, and our ability to discern is really hampered.
So there's a lot more, but I think that's probably enough to get started.
Could I just jump in with a clarifying thought as I hear you talking? Jay, you seem to see using AI as an extension of our good stewarding of technology as a whole. I'm hearing you, Doug, say this is a different category than just simply technology. Would that be a fair assessment?
Yeah. I think what I would say, Preston, is a lot of times these conversations do go to the idea of technology writ large. You don't want to resist the printing press. You don't want to resist railroads, blah, blah, blah. But I'm trying to focus on what generative AI in particular is, its unique characteristics: the fact that it's not grounded in reality but pretends to be, the fact that it's deceptive, those kinds of things.
And those are my concerns that I'm really trying to focus on specifically.
I mean, I'm an Android developer.
I'm not anti-technology.
I'm anti this technology, especially for ministry.
Yeah, Jay.
So just the way that I would challenge people to think about it, and we can add to this
discussion too, is it's a question of like, how are we using this generative AI?
Because there's many different categories by which it's such a broad thing, right?
I mean, there's the most basic level, which is I'm just chatting with it back and forth,
like I would texting a friend, right?
I always kind of describe it to some people as, you know, you're on Who Wants to Be a Millionaire back in the day with Regis Philbin, and you need to know X, Y, or Z, so you phone a friend. You know, you have Susie who really knows a lot about history or Jimmy who really knows a lot about math and you're going to call them. It's kind of like that. Now, historically, obviously it's hallucinated a lot and made stuff up out of the blue. That's getting way better. But there's
other questions of how can we use generative AI in a way that will impact ministry that
is nothing to do
with the conversational approach. For example, I get to wear a lot of hats. I've run a marketing
agency my whole life that I built since I was a kid, and also I helped lead digital
discipleship at The Church of Eleven22 here in Jacksonville, Florida. And one of the things
that we have done is when I came on staff here almost four years ago, I had a lot of
people going, hey, have
we ever thought about translating sermons to different languages? And I said, well,
yeah, but let's get some fundamentals in place first before we get there. We do want to do that.
We're in Florida. There's a lot of Spanish-speaking people here. But there's a high cost in both time
and money to pull off multilingual translations of, let's just take sermons specifically.
Historically, that has been true. But because of generative AI, as of the beginning of last year, we started translating our sermons automatically every week. It costs us about a half an hour of somebody's time, plus another
half an hour to review the actual transcript that was produced. It voice clones, transcribes,
translates the sermon, and then publishes in a different language that now reaches thousands
of people that we weren't previously reaching. That's a specific use of generative AI
that wasn't technically possible prior to this.
And so now all of a sudden,
like there are not the same risks in that specific use case
as what you're talking about
because we're not having this dialogue back and forth.
It can mistranslate
like any other translation service could for sure.
That's why we have humans go back and review them.
But the opportunity that that creates for us, if we were to go, we're just not using
generative AI, we would be missing this opportunity to translate to another language and potentially
into multiple languages.
So how we use the technology matters a ton too.
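To picture the workflow Jay describes, here is a minimal sketch of the pipeline shape. Every function body below is a stand-in stub, not any real vendor's API; the point is the order of steps and where the human review gate sits.

```python
# A minimal sketch of the weekly translation workflow described above:
# transcribe, translate, human review, voice-clone, publish.
# All service calls are stand-in stubs invented for illustration.

def speech_to_text(audio_path: str) -> str:
    return f"<transcript of {audio_path}>"            # stand-in for a transcription service

def machine_translate(text: str, lang: str) -> str:
    return f"<{lang} translation of {text}>"          # stand-in for a translation model

def human_review(text: str) -> str:
    # The guardrail mentioned in the conversation: a person approves or corrects
    # the translated transcript before anything is voiced or published.
    return text

def clone_voice(audio_path: str, text: str) -> bytes:
    return text.encode()                              # stand-in for a voice-cloning service

def publish(audio: bytes, transcript: str, lang: str) -> None:
    print(f"published {lang} sermon ({len(audio)} bytes, transcript reviewed)")

def translate_sermon(audio_path: str, target_lang: str = "es") -> None:
    transcript = speech_to_text(audio_path)
    translated = machine_translate(transcript, target_lang)
    reviewed = human_review(translated)
    audio = clone_voice(audio_path, reviewed)
    publish(audio, reviewed, lang=target_lang)

translate_sermon("sunday_sermon.wav")
```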
That's a great, yeah.
So Doug, are there certain uses of AI that are different in kind? Like the transcript from this podcast, I think it's done five minutes after the podcast is
done.
I could read this entire, it's not perfect, you know, whatever, but I think that's AI,
right?
Or just like interpreting or translating a sermon, if that's AI, is that outside the
purview, Doug, of the kind of AI you're talking about? Or are
you categorically against just anything that is AI you think is not helpful?
That's a great question. And the translation use case is probably the one that I'm probably
the most kind of on the edge. I will answer that. I just want to come back to a couple
things like the chatting, the phoning a friend. I'm definitely concerned with that. There was a recent podcast where Carey Nieuwhof interviewed Craig Groeschel. Craig Groeschel, pastor of Life.Church,
it's a great segment in this podcast. He talked about this tension he feels. He doesn't want to
use it too much. He thinks it's going to be a blessing and a curse.
But one of the quotes I have from him is he asks it dozens of questions a day.
And then he also puts his sermons in there and asks it questions about his sermons, like
what would someone think about this?
What would so-and-so... how would so-and-so object to this?
What are the weaknesses?
This is a conversational dialogue of trust and intimate
relationship. He's feeling the tension. He can kind of feel it and he's like, if I use it too much,
I got to stop. I'm very concerned about that. I think that is a disaster for ministry. I think
the trust that we put in these kinds of tools, because they can be so deceptive and so wrong. And, which we haven't gotten to yet, the worldview of the creators of these tools is also concerning from a Christian standpoint. So, the translation use case then is difficult because, A, it can lead
to using it for other things. I'm sympathetic. I'm definitely sympathetic. I'm an elder at a church.
I'm sympathetic about overworked and underpaid ministry people, and they just want to reach more people, you
know. But because they can be so confidently wrong, especially in the translation, like
even when you translate and you put up the little video, you got to say, this was translated
by AI, it may be wrong. Well, what if it's wrong about something significant? Whereas a human translator isn't going to mistranslate the plan of salvation or some essentials of the faith. Sorry.
But they do. They do, though. People have mistranslated the Word many, many times. There are many translations of the Word that we probably would argue against that human beings sat down
and translated. That's fair. That's fair. That's fair.
I think, yeah, that's fair.
That's letting a foot in the door of this technology
I'm pretty concerned about.
And so I'm pushing against it strongly, intentionally,
because most people are adopting it without thinking,
because, oh, look, it's new.
It's cool.
And wow, it's amazing.
Oh, my goodness, I can't believe how this is incredible.
And so I'm like, whoa, whoa, whoa, whoa, whoa.
Look how you're being changed by this.
And even if you decide, oh, well, translations,
okay, well, but yeah, and then, and then, and then,
and pretty soon we have an intimate relationship
with a chatbot.
What about, okay, going back to Craig Groeschel's scenario,
he's trying to see how's the sermon gonna land
on a wide variety of people, what objections they're gonna have.
I think we don't know whether it's accurate or not. What if he did that? He used AI to anticipate objections. And then what if he also took a survey of all of his people? And what if they actually had the same kind of thoughts the AI generated? What if AI was accurate there? Would you then say, well, if it was consistently pretty accurate in terms of the kinds of things people are going to think of or not think of or object to, I mean, is your problem just intrinsic to that very process, or is it that it's going to be wrong, or very well could be wrong, in interpreting actual people's thoughts and experiences?
Yeah, super cool.
That's a great question.
The problem with it is because, as I mentioned,
because it's not grounded in reality at all,
it's almost a little more dangerous
the more accurate it is because then our trust
in it becomes more clear.
Oh, okay.
To me, because you're building,
again, you're building this relationship.
So as you do, it's just like
with anything we use. It's quick consumption, quick consuming of an answer to the prompt. You
get the response back. It's amazing. Okay. With that, we've lowered even our ability for discernment and critical thinking around those things.
The more it gets accurate, if you were to do that test and you were to say,
okay, now I can trust it, the problem is that again, it doesn't know anything.
We're supposed to believe they're always getting better, but there's really some contention on the
new models of whether or not scale is really eliminating what are called hallucinations.
But at the end of the day, all of it, I mean, it's just crudely said, they're BS generators.
They have no idea what they're saying because they have no connection to truth.
So that does sound like it is the intrinsic nature of how it analyzes things, not necessarily
whether it's accurate or not.
Yeah, I mean, we can say this and make the argument, but I just don't think it's true.
And here's why.
It's the same argument that is being made in creative spaces.
It's the same argument that's made with developers that AI can't write great code, AI can't design
good things, AI can't write as well as a human being.
None of that is true.
It might not be able to write as well
as a really good developer yet.
Give it 12 to 18 months and I guarantee you that it will.
It might not be able to design as well
as a good designer yet,
but the new models just came out last week.
Here's the beauty of it in the church.
Generative image generation.
Let's just go there specifically.
Now I'm blessed to work at a very large church
with a very large staff and lots of resources.
I've got teams and people to do all kinds of stuff.
It's a real blessing.
The vast majority of churches all around the world do not have that.
They have maybe a pastor, most of the time is not even a paid person.
And so how do they communicate well, create good materials, and distribute those appropriately?
Well, designers throughout time have gotten all upset when new technology comes out.
So when the camera came out, the painter said, that's not art.
When Photoshop came out, the painter said, that's not art.
And now we're in this generative phase where yesterday I sat with my creative team
and showed them how I took an existing social media post and had ChatGPT completely redesign it,
new font, new style, new structure,
and it's using generative AI.
There's no risk there from a knowledge perspective.
It only accelerates productivity
and it creates opportunities for smaller churches
that could never otherwise produce
the same high quality design that we're able to produce
because we have a staff that can do it.
And now they're able to do these things. Same thing is true with writing. I get the concern from an intellectual perspective. Look, since the
beginning of the year, I've been teaching an AI class for a group of about 35 high school
homeschoolers, ninth to 12th grade. And it's been so fun because in the class, I have a
really broad spectrum of thought,
mainly based on their parents because that's where kids end up. Some of them are highly anti-AI and some of them are very pro-AI.
And it's interesting we get to the end of the class because now they're doing these presentations to showcase their work.
So one of my sons, for example, played a short film, a little tiny 90-second video that he put together.
It's this story that unfolds, and it looks like a really well-produced illustrated story. Well, imagine the creative unlock of that for a child like that, who, by the way, is very dyslexic. So something like AI has huge value for him, as it does for me, whose spelling and grammar are never going to be good.
When I was in elementary school, I had the highest average in every grade level except for
anything to do with spelling and grammar and I was like off the charts the worst.
I remember sitting as a seventh grader crying over my grammar books because I
hated it so much and I did not understand why I was so bad at it. But
guess what? All of that, which is arguably a disability for my 17-year-old and for me, is not anymore.
Because I can still write the thoughts.
I can dictate to the computer what I'm thinking in my brain that I want to do, and then it
can just clean it all up for me as a copy editor.
So there's so many use cases beyond just knowledge reference.
And the one place I definitely would agree we should not use it though, if we want to
go there, is like writing a sermon.
Very dangerous. Very dangerous for
somebody to just go, write me a sermon. In the same way, I have seen guys pull sermon transcripts
from other people and recite those sermons word for word, including the stories that aren't even
theirs. That's just as bad as going, write me a sermon on Matthew 18.
If you've ever wanted to dive deeper into the ancient languages that the Bible was originally
written in, now is the perfect time to start.
Whether you're seeking a deeper understanding of scripture, or you're simply curious about
the roots of the text, learning biblical Greek and Hebrew can unlock insights you have never
imagined.
I cannot tell you how much the text was opened up to me
when I started to learn the original languages.
And look, I understand not everybody's able to do this, okay?
But if you do have the time,
then I would highly, highly recommend
learning Greek and Hebrew.
Kairos Classroom has designed an entirely new curriculum
that makes learning these ancient languages accessible
and, yes, exciting for anyone,
whether you're just getting started or you already have some background in
biblical studies. I want you to hear from a Kairos Classroom student and fellow Theology in the Raw listener about her recent experience. Hey, my name is Joy and I found Kairos Classroom when I heard about it from Preston at Theology in the Raw. When I signed up for Kairos Classroom I was
looking for something affordable,
flexible, and fun
without sacrificing on quality.
I'm six weeks into
Greek 1 and it has absolutely
ticked off all the boxes.
If digging deeper into the original
languages of the Bible is something
that intrigues you, I'd really
encourage you to visit the Kairos Classroom
website and check out if any of their programs would be a good fit for you. It'll be
time well spent. So I would encourage you to take the first step in your journey
to gain a deeper understanding of Scripture by booking a free, free trial
lesson with Kairos. Just go to kairosclassroom.com, okay? kairosclassroom.com. Click on the book a free trial lesson button
and pick a 30-minute time slot that is convenient for you. It's 100% free, no excuses. And it's a
great way to meet a teacher and gain a better understanding of how easy, convenient, and fun
learning with Kairos will be. So, I use Logos Bible Software a lot. Oh, almost every day. A little shout out to Logos.
They're not paying me to say that.
They should.
So, they just introduced, they have a whole AI component now. Now, here's the thing. AI
searches all of the books that you have in your portfolio, whatever it is.
So I actually did it, it was a passage I was studying.
I said, give me a summary of whatever it was, Ruth.
In it, within seconds, it gives me a whole paragraph
with sources cited, like with the footnotes.
So I can go click on the book and where they said it
and how they're compiling it,
but they're drawing on
solid resources. It's not just out there in the middle, you know, in the AI sphere, whatever, nor is it just random people. It's actually scholars who have written these books.
Now, I didn't quote that. I still went and cross-checked it, whatever, but then I went back
and looked at the passage and studied it. I'm like, this was extremely accurate. Like, I don't know
if I would say it. I might tweak some words or whatever, but it saved me a ton of time in doing the research.
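As an aside, here is a toy version of the general "search a known library and cite the source" pattern Preston is describing. The mini library and matching rule are invented for illustration, and this is not how Logos is actually implemented; the point is that a closed, known corpus with citations is easier to verify than an open-ended chatbot answer.

```python
# Toy "search my own books and cite where each excerpt came from" pattern.
# The library contents and the word-overlap matching rule are made up for illustration.

library = {
    "Commentary on Ruth, ch. 1": "Naomi returns to Bethlehem from Moab with Ruth.",
    "Commentary on Ruth, ch. 4": "Boaz acts as kinsman-redeemer and marries Ruth.",
    "Commentary on Jonah, ch. 1": "Jonah flees toward Tarshish rather than Nineveh.",
}

def search_library(query: str) -> list[tuple[str, str]]:
    """Return (source, excerpt) pairs whose title or text shares a word with the query."""
    terms = set(query.lower().split())
    hits = []
    for source, text in library.items():
        words = set((source + " " + text).lower().replace(".", "").replace(",", "").split())
        if terms & words:
            hits.append((source, text))
    return hits

for source, excerpt in search_library("summary of Ruth"):
    print(f"{excerpt} [{source}]")  # each excerpt stays attached to a checkable source
```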
Doug, is that something a little different, I think, than what we're talking about, because we know the database from which we're drawing the knowledge? Is that different in kind?
Hopefully.
Hopefully.
It's risky.
It's risky.
Man, there's so much there.
This is great.
I just love this.
Thanks, guys.
Where do I start?
All right.
On Logos, if the database really is that narrow,
because I've got tons of documentation
around the hallucinations of citations
and the summarizing, that kind of work, it can be very,
it can just make stuff up. It has no idea.
Now, they may, I have a ton of respect for Logos. I get it. But the one thing I just want us to
remember, it's been two and a half years since ChatGPT, since any of us ever heard of an LLM.
Like two and a half years. How in the world did we survive without these tools? How did we get
anything done? And so always, always the justification is it saved
me so much time. Until two and a half years ago, we figured this out some other way.
The industries that are putting this into everything, their survival depends on it.
They bet hundreds of billions of dollars in their industries to make sure that we're all using it.
That's why they're putting it in every operating system and every code system and in all these
places, but it's replacing essential human behaviors. Writing is one of the worst, I feel
like one of the worst use cases ever. I'm sympathetic to dyslexia and the things that you're talking about, Jay. Very sympathetic. And yet, our creation of words, the formation of words, is like one of the most spiritual things. I found this book called In the Beginning Was the Word by Dr. Vern Poythress, which is just this whole theology of language, and it's this wonderful gift of God, and God spoke the world into existence.
Letting a machine pick our words for us, to me, is like this, it becomes idolatrous. It becomes this thing where we're trusting
and then truncating.
As we know, one of my big inspirations
is Marshall McLuhan, who taught us the medium is the message.
He taught us how when we adopt a new technology,
it extends some capabilities, but it amputates
those very capabilities.
So, you know, yeah, we're driving in a car, but we can no longer walk, you know, the distances
that they did in Bible times.
Okay, we're all grateful for that.
But if we're allowing a machine to write for us and think for us and save us time, but
we're not doing the hard work, what's getting amputated?
It's our very thought processes and our very ability to articulate our own thoughts.
My, it sounds though like your concern there
is more on the, well, I don't know, I could be wrong here.
Like that almost sounds like an advancement of tech,
a fear of the advancement of technology.
Like, you know, like with the printing press,
like now people are not gonna, you know,
their cognitive abilities to like recall things
and memorize things audibly is gonna go to crap,
or you're gonna put the written word in the hands
of people that don't know how to handle it and stuff.
Like it seems like with every... or GPS now, and I literally, I still use GPS to get across town on a route that I've driven a thousand times. Like, that skill is gone. You know, I used to clean pools in LA with a big, what were those, Rand McNally maps. I was super dangerous driving around LA, but I mean, I knew my 50-pool route or whatever in LA and it became second nature. That was a skill, and again, that is gone. So it sounds
almost like your concern is with the advancement of
technology, that at every advancement of technology, we're going to lose something of our natural
ability to do something. But that seems like it's kind of just a world we have always lived in.
Yeah, that's great, Preston. Thank you. That is McLuhan's argument. It is. Now, Jay mentioned
cameras and Photoshop and how people said those weren't art. Again, I'm really trying not
to make that argument. That is an argument to consider. But this particular technology,
in choosing our words for us in the way that it does statistically, where there's no grounding
in reality, where it may or may not be right. You get this long thing
that's generated and we're not reading anymore. Talking about using it in education, there are people claiming we don't need to read books anymore because AI will just do it for us. There's literally people claiming that. If you're not reading books, we'll have no discernment over whether the thing is right. Say you're generating maybe a thousand words on Harriet Tubman. Okay. Well, what if these particular details are wrong? You have no idea because
you haven't done the hard work of thinking that through. Yes, it's a technology, but it's this
particular technology. It's the non-grounding to reality and it's the deception around it in terms
of the claims to be intelligent, the design to be relational, the push to be the core of everything that Big Tech is forcing
upon us. Those are incredible risks. If I trusted Big Tech, like, hey, gosh, you guys, you really
saved the world with social media. We're so much better off because of TikTok. All of our kids
have this awesome mental health and they all feel great about themselves. If we trusted Big Tech like that, then maybe I would trust their next tool.
But I know that they're not spending hundreds of billions of dollars and putting data centers,
taking the water out of some Mexican farmers so that they can put a data center down there
so that the world can access their AI for our benefit. They're not doing this because
they're blessing us. They're doing this for domination. So, I don't trust them. I don't trust them. Their
track record has broken my trust. So, for ministry, we shouldn't trust them either.
So, I wouldn't say that I trust them either, but God can use all kinds of bad things for good. God can use anything for good. God causes all things to work together for good for those who love Him and are called according to His purpose. So the question becomes, how can I use something
for good? I'm not going to stop AI any more than the Amish are going to stop people from driving
in cars. I'm just not. And so the question for me is, I can use it or not use it, and if I do use it,
how can I use it for good? On the context of writing specifically, a couple of things. One,
there are some people who are very upset that we can't write in calligraphy anymore.
There are some people who are very upset that we do not teach cursive to children anymore, or maybe they do in some places. I'm sure in some of these homeschool
classes my kids are in, they probably do, but my kids don't know what cursive is. Exactly. But I will concede this: writing is thinking. There's no question about
that. And to lose the ability to write is exceptionally
dangerous. I think our education system has flipped completely upside down in so many
different ways. But there are fundamental things that there's no question that any of
us would agree people need to be able to do. They need to be able to read. They need to
be able to write. They need to be able to at least understand basic math. If we don't
have those three things, we're in trouble, you know, for many different reasons. And so the question becomes not can I trust the AI; the question is, what is truth?
Because I can go out and read any blog post or any book or any podcast and I would be
no better off in understanding if it's true or not by listening and consuming that content
than I am consuming content that is delivered from an AI.
Now we can argue over the quality of the output,
but that's only going to improve.
I know that it has because people still make the argument
that AI can't produce images with hands properly
and that's just not true.
It was true two years ago, it's not true anymore.
And so the question becomes, what is truth?
And as a Christian, we would all say
the truth is the word of God.
And so the one thing that matters the most,
and I think the core issue here,
really above anything else technologically, is that we, as Americans and as modern Christians,
statistically, do not know our Bibles. And if we do not know our Bibles, if we do not
know the Word of God, then how can we discern truth? And so, the problem, in my opinion,
is not the technology. It's not even what's being consumed where,
because we're already reading blog posts that have all kinds of inaccuracies and some human
being wrote that are completely wrong. We're already listening to podcasts that dispel
truths, that they say are truth, that are not true. And how do I know it's true? Because
I can listen to it and I go, hold on. That contradicts what the Bible says. And if it
contradicts what the Bible says, then it is not true. Everything else is an opinion. And so, that's kind of where my thought is. I do think that
writing matters a lot, but at the same time, many people are not very good at it. Some
people have disabilities. I mean, if you look at a lot of major books, especially in the
business world that I come out of, when you see a book by an executive, eight to nine
times out of ten, they did not write that book. They sat with a ghostwriter and
they were interviewed. They told them ideas about what they do. I mean, even our lead
pastor, Pastor Joby Martin at The Church of Eleven22, he's exceptional on stage. He may
or may not be a great writer, but he has a good buddy, Charles Martin, and they co-author
books together. So, he's not a ghostwriter. It's publicly shared. They work on them together. And they sit and write. Why? Because Charles is a better
writer than Joby is. He would say that. I mean, and so they're using their gifts, talents,
and abilities together to create the best possible resource. And I don't think that
Pastor Joby is, you know, not thinking, because he thinks very deeply about all kinds
of things. It's just he's using his gifts, which is more speaking and Charles using his gifts, which is more writing. So that's
kind of how I think about it. A quick question, Jay, what about, and I think Doug, you raised this
and it is a question in my mind, cultivating trust. Like even now, like in Google, you can Google
something and you get the little AI summary. And even though I'm with you, Doug, I'm very skeptical of big tech,
and I think that's a whole other thing,
and everything you shared about that I resonate with.
And even I, when I see that AI,
it just gives a perception of authority.
I'm like, oh, so this is the summary of the thing.
But we all know Google kind of curates the kinds of articles that they want you to read.
Like if I'm researching a controversial thing that's not popular in the dominant narrative
of America, I have to use a different search engine.
I have to go like DuckDuckGo or some other search engine.
And I get a whole slew of other articles that come up that I can't even find on Google or
it's like eight pages back or something. So even there, it's like, well, hold on a second. Like it sounds like this
is the right answer because AI has curated all the best resources, but it's like, well,
not really. I mean, and I'm just, I do fear that especially younger people, like you said,
I think Doug, you said like when they hear something on a blog or podcast, actually you
both said it, you know. And the AI is just gonna exacerbate
that false perception of trust,
that this is the authoritative thing, you know?
Is that, again, fostering undue trust,
is AI exacerbating that problem?
I would love to answer that if you don't mind, yeah.
And then I do wanna hear Jay's.
Yeah, I wanna hear Jay's. So I just wrote for this organization called Renew.org,
not Greg Boyd's, but the one that's about theology. It's spelled Renew the right way.
They just published a new article of mine on should we use AI in ministry. And the word I keep
using is discipleship and it's discipleship in the broad
way. We're supposed to build a trust relationship with the AIs. We're being discipled in our usage
of them to trust them. Because of their design, it's an intimate – the only people we've ever
chatted with before two and a half years ago have been people in your text messages, in your iMessage, in your
Slack, in your Teams, whatever you use. You're chatting with people. You get responses back.
This is amazing. It's like an artificial relationship. We've heard these terrible stories from Character.AI or whatever, where you can build relationships with artificial characters, and they have romantic relationships, and there's been suicides, and there's all kinds of obviously horrible stories. But that's just an extreme showing that it's building trust on purpose. They would not have had to build it this way. They wouldn't have had to make it so that it sounds confident always, even when it's wrong. And so, for example, I want to poke on the summary again that you were talking about.
Preston, you're a Bible scholar.
If you read the summary of Ruth from Logos, you'll know if it's wrong or not.
But a student isn't going to know that.
They're not going to know if it's right or not.
They have to trust it, right?
And so, if our kids are learning with this, again, this dopamine-loop quick response,
they're consumers, they're not creators when they're using these tools.
They're not gonna be like,
well, why do I have to learn to write, mom?
Because it'll just do the writing for me.
And it's not about learning, it's not about getting it done,
it's about becoming a thinker, like Jay said,
it is about that.
So I'm super concerned about people who are using it
especially at a non-expert level.
If you're a super expert and you want to use it,
maybe okay, but really be thinking about how it's changing you and where it's taking you.
Yeah. So one of the things that you talked about that I think matters a lot is this idea of
discipleship. And so, you know, we have liked to, you know, operate under the idea, hopefully,
and hopefully under the actions, that we are disciples making disciples. You know, we always
say we're a movement for all people to discover and deepen a relationship with
Jesus Christ.
And so, the question becomes, how do we do that?
Well, here's what I see.
What I see often inside of the church, especially a very large one, like the one that I get
to help at, is there's a lot of administrative work that needs to be done.
There's a lot of functions that need to happen that require a lot of time from people. Sitting behind a keyboard, doing research,
all kinds of things that are not discipling other people. And so, what I like to think
is what are the things that we can have robots do so that people can do the things that only
people can do? I do not believe that people should develop relationships
of any kind with robots or artificial characters.
Are they?
Yes.
Is it dangerous?
Absolutely.
I think there's high risk, especially for young kids,
especially teenagers.
And so I think we should guard against that
at an extreme level, in my opinion.
That said, there's a lot of administrative things
that happen inside of a church.
For example, I sat with our director of HR the other day, and she needed to take this spreadsheet and this other spreadsheet and cross-connect these, you know, different questions and those other questions and attach those to our code
of conduct. And it would have been some manual labor on her side. Maybe if she'd
been real fancy with knowing how to use pivot tables and everything else in
Excel, she could have just done it herself.
But I showed her how to use some of the different AI tools that I have to complete, in about 60 seconds, a task that would have taken her an hour or two that day.
And we double-checked it, and it was accurate.
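For readers who want to picture the kind of task this is, here is a toy sketch of a spreadsheet cross-connect done as a plain join. The column names and rows are invented for illustration; whether it is done by hand, with pivot tables, or via an AI tool, the underlying operation is the same, and the output still gets double-checked by a person.

```python
# Toy "cross-connect these two spreadsheets" task expressed as a left join.
# Columns and data are invented for illustration only.
import pandas as pd

staff = pd.DataFrame({
    "employee_id": [101, 102, 103],
    "name": ["Ana", "Ben", "Cara"],
})

acknowledgements = pd.DataFrame({
    "employee_id": [101, 103],
    "code_of_conduct_signed": [True, True],
})

# Left join: keep every staff member, then flag anyone who has not signed yet.
merged = staff.merge(acknowledgements, on="employee_id", how="left")
merged["code_of_conduct_signed"] = merged["code_of_conduct_signed"].fillna(False)

print(merged)  # a human still reviews this output, as in the anecdote
```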
And so the question then becomes, well, what else can she do with that hour or two?
Well, she can go sit and have coffee with somebody.
She can sit and go study her actual Bible.
She can go spend time with people because the robots are doing things that robots can
do so that people can do things that only people can do, one of which is the actual
discipleship of other human beings.
So I actually see that as a huge advantage, not a disadvantage, but I will say that the
idea of developing intimate relationships is a danger, and we should pay attention to it. We should watch it with our kids. Some of
these apps that let people create characters and then chat back and forth, I think they're
exceptionally dangerous psychologically. I don't think we know what the consequences
are, just like we didn't know what the consequences of social media were. That said, I want robots
to do what robots can do so that people can do only what people can do, which is build
in-person face-to-face shoulder-to-shoulder relationships. That sounds pretty compelling, Doug.
What are your thoughts on that?
Okay, you convinced me, Jay.
I'm done.
No, I'm just kidding.
No, that was really good.
And I love the spirit of where you're going with that because we want people to be spending
that, doing that time, spending the time with real in-person relationships, right?
That's what we want to do.
My concern is that administrator, that administrative person is being discipled by the tool. They're being changed by their very use of
the tool, by their dependency upon it. You double checked it and it was right. I mean,
I've just seen an example recently of, you know, plot me a spreadsheet of the 50 states that has, you know, these various different things, population and average income or
whatever like that. It outputted 45 states. What about the other five states? Where's Wyoming? Oh, okay,
yeah, sorry. Here's Wyoming. But then it throws in a Canadian province. You know, it's like,
yes, are they getting better? Okay. But the fact that we trust that they're getting better is
industry propaganda because they have to survive, they want us to use these things. So, will most
people go and have coffee and do that?
Most people are going to go back on social media, especially the younger.
They're going to be, okay, I get to go play my game now.
I would love to be in a world where they would, oh, now I get this hour back, but we've
been getting this hour back with efficiency, with new technologies over time.
We don't have to walk for miles and miles.
We can drive fast.
We should only be working like four hours a week with the efficiencies we currently
have.
Now, this is the one thing we can't live without, again, because they want it to be our top
priority, not because it should be.
Because of the deception, because of the potential of addiction and connection, just too much
use of it, and us being discipled by it into becoming consumers
instead of creators, that is why I would push back on that.
Yeah, I mean, my mother-in-law still does her taxes
and her payroll by paper ledger.
Oh my God.
She doesn't trust Excel.
I mean, and there are people who have used Excel
and they put a formula in wrong into a box
and it returns completely inaccurate data.
So you do have to know how to use the actual tools.
This is what I'm passionate about
is helping teach other people how to use them,
use them appropriately, have the right guardrails on them,
not let it just spit back anything to you,
not just to trust whatever it returns.
I would never mail it in.
That said, the exponential opportunity for creative, especially in our
worlds, is awesome. The things that it's able to help produce internally so that we can
reach more people, so that we can get the Word of God in front of more people in multiple
languages, in multiple variations, whether it's a 60-second clip of a sermon or a narrowed-down
answer to a question, human beings should
validate and double check all these things for sure.
And if you're using the tools wrong or using them incorrectly, it's going to be like anything
else.
You know, I'm not very good with power tools.
If I go out into the garage and start sawing and drilling stuff, it's probably not going
to go real good.
It doesn't mean the tool's wrong.
You know, if I want to dig a pool in my backyard
and I get inside of a backhoe or whatever tool it is you even dig a pool with, I don't even know,
it's not going to go very good. But it doesn't mean the tool's bad or wrong. I just don't know
how to use it. And so that's a totally different problem altogether. And you're right. I mean,
the question does become, are people going to go back and just go play their video games or doomscroll on social media?
Maybe, but that's a maturity and discipleship issue
for that person themselves.
They hopefully have mentors and other people around them
asking questions of what are you doing with your time?
And that's more, it sounds like Doug,
that's more of a concern about the pragmatic probability of using this,
not like an indictment against the intrinsic possibility that somebody could use AI to
reduce time in admin data type stuff.
So, what if, yeah, they were discipled to use their time more wisely, then could you see
validity in that?
Well, again, Jay is going to technology writ large again, digging pools and these kinds
of things.
I'm talking about generative AI.
I think that the concern is that this is an intentionally deceptive technology.
It claims to be intelligent when it's not.
It doesn't have any binding to reality. We're supposed to believe it's getting better. The images, you know, maybe it's not putting six fingers on people's hands anymore,
but where was I just, anyway, I've seen some images that are used in church. There's creepy
Jesus. There's things missing. I mean, there's some really wacky stuff.
And because it's fast and quick, and yes, it's good.
So yeah, assuming we could trust it was grounded in reality,
we could actually trust the output,
we could trust the people behind it to do things for our good.
And it wasn't breaking our ability
to think and create and discern.
Potentially, yeah.
But this tool has only been here a short time.
I've been having this conversation at work, even about software engineering, because
I do software engineering, and there's people that don't want to use it for software engineering.
And I'm very concerned about that too because of the loss of the ability to think about
your software, right?
Because you're outsourcing it.
There's this thing called vibe programming where you don't write any code.
You just say, hey, write me an app that has this and that. Okay. They would love for us to become
dependent upon that. Right. But there's especially the concern about the people who don't know. I'm an expert, so I would be able to discern whether it's good or bad. But if you're not an expert,
you won't know. And then again, why? The only reason this is a priority is because it's their
priority. It's Google's priority. It's OpenAI's priority. It's not because it's good for us.
Yeah, I mean, I just, I just, I just disagree. I mean, I do agree that the, I
don't necessarily trust the creators of the tool, but I don't necessarily trust
the creators of the internet or even Riverside FM that we're using to record
this podcast or anything else for that matter. I don't know who they are. So I
don't know if I should trust them or not. But I do know the tool is a good tool
to use for a particular outcome. So let's talk about vibe coding, for example, which is the idea that I could prompt an application
into existence.
A couple of weeks ago, I was sitting in a creative meeting as we were planning for a
series a couple of months from now at the church, and the series is called In the Arena,
and it's about, you know, being in the battle, you know, of spiritual warfare in life.
And one of the ideas was to print out this magnet that we were going to give to people
that they could kind of take a scripture and kind of input their own fears and disbeliefs
and then see the truth of God in it.
And when you have a church that on average on a weekly Sunday has 20,000 to 25,000 people
sitting in seats, and you decide to do anything that is printed or handed out, it is exponentially
expensive.
And so my brain immediately goes,
do I really, I mean, even if I got a 50 cent magnet
times, you know, whatever,
I'm gonna need to print at least 30,000 of them,
that's $15,000.
Do I really wanna spend $15,000 on it?
Like, is that really, is the juice really worth the squeeze?
And so instead, I sat in front of an application
that does let you write your own code.
I know enough to be dangerous, to be fair.
Like I can debug a programming log. I understand the difference between PHP and ASP,
and I can look into a SQL database and understand what's going on, but I can't write it from scratch.
But with the tools that are out now, I can. And so I said, hey, what if we just had a little app
that people could fill in the same thing they were going to put in the magnet, and then it would
automatically create a screensaver for their desktop computer or for their phone or for whatever else.
And so everybody was like, well, that's great, but we have a meeting tomorrow at 12,
and we need to demo whatever our idea is to the rest of the leadership team.
This was eight o'clock on the prior night. I said, no problem. I'll put together a demo.
They're like, before tomorrow? I said, yeah. And in 30 minutes, I had a working, functioning demo, written with generative code, that we're going to use for tens of thousands of people to encourage them in the truth of the Bible.
Now, if I got a senior programmer in there and they wanted to dig into the code, they'd
probably nitpick it to the wazoo.
No problem.
And they'd be completely wrong because they're missing the point, which is to have the right
outcome in this particular situation.
I saved the church $15,000 at least.
We're producing arguably the same
outcome, something that probably people pay more attention to because they're
creating an image that's going to show up on their phone, and we're helping
control what that image is. The background, the scripture that shows up,
all these other kind of things. We didn't have the resources internally to have
been able to pull that off prior to this generative AI. And so that's another
specific use case that we're using on a day-to-day basis that I believe will help encourage and disciple people to have a deeper relationship
with Jesus Christ.
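As an illustration of the kind of app Jay describes prompting into existence, here is a minimal sketch of a personalized lock-screen generator. The layout, colors, and sizes are invented for illustration, and this is not the app his team actually built; it simply renders a user's fear and a scripture onto a phone-sized image using the Pillow imaging library.

```python
# Minimal sketch of a personalized lock-screen generator: render a fear and a
# scripture onto a phone-sized image. Layout and colors are arbitrary choices.
from PIL import Image, ImageDraw, ImageFont

def make_wallpaper(fear: str, scripture: str, path: str = "wallpaper.png") -> None:
    width, height = 1080, 1920                              # a typical phone screen
    img = Image.new("RGB", (width, height), color=(20, 24, 40))
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()

    # Put the user's fear in muted gray, then the scripture beneath it in white.
    draw.text((80, 760), f"My fear: {fear}", fill=(150, 150, 150), font=font)
    draw.multiline_text((80, 880), f'"{scripture}"', fill=(255, 255, 255), font=font)
    img.save(path)

make_wallpaper("I am not enough", "I can do all things through Christ. (Philippians 4:13)")
```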
Hey guys, are you struggling with porn or other unwanted sexual behavior? A recent Barna
study reveals that 75% of Christian men are looking at porn. We believe the lie that we
need to stay isolated and keep it all a secret, but the truth of the gospel is that there is grace, healing, and community
for you when you take the brave step to receive it. Whether it's porn, lust, fantasy, or just
chronic discontentment in marriage or singleness, you're invited to join a Beyond the Battle
men's group led by pastor, author, and my friend, Noah Filipiak. Get out of isolation and into community at beyondthebattle.net.
That's beyondthebattle.net.
Can I raise a question going all the way back
to the translation?
Sure.
Translating sermons into Spanish.
That, when I first heard you say that, I was like,
oh, here's a clear example of a good use of AI.
But then, I don't know, the last few minutes I was thinking, because my brain has several
different thoughts, I'm like, but wait a minute.
What if, I mean, what if you, like, is this taking away not only a job, but a ministry
opportunity for an actual human being?
Like, what if you hired a Spanish bilingual speaker to translate the sermon? It might be a little slower, but you're giving somebody
a job. You're also inviting somebody else in a really vital ministry opportunity. You're
giving them value to use their gifts. And this is just, sorry, just like one example,
is it possible that using AI could unintentionally
and almost without even thinking about it, take a ministry or economic opportunity away
from an actual human being?
Yeah, great question. And this is the same question that was levied years ago, many years
ago, when I used to outsource, like I said, I run a marketing agency, we do all kinds
of web work, development work, and we used to outsource work, and still do,
to people all over the world.
And people would say, well, hold on,
aren't you taking from an American job
by outsourcing to somebody else?
Now, as a Christian, I actually think
that's a horribly flawed argument anyway,
because the people around the world are regular people
who deserve work and money as well.
So I don't even understand the question.
But this is kind of the same question, right?
Is it like, am I taking an opportunity for somebody else?
And I would say maybe, but probably not,
in the same way that, like, look,
people used to deliver ice to your door
because we didn't have freezers.
They don't anymore.
They used to deliver milk.
Now, actually, some places are going back to that
because people want like raw, natural, organic stuff.
That's all another conversation.
And so, we weren't doing Spanish translation.
We didn't have the bandwidth.
Maybe we could have pulled it off,
but it's like, why do something exceptionally more difficult
in that particular scenario when we weren't doing it at all?
You know, in the same way that back in the day
when I would outsource work all over the world
to different people to accomplish particular tasks,
we wouldn't have done the work otherwise because the clients we were working for couldn't have
afforded to hire an American developer.
They weren't the right clients for that.
And so there's pros and cons to both though.
Any advancement in technology, right?
This is always the argument.
They just inevitably will take away jobs from people who are doing the work prior to the
technological advancement.
So I guess that's not really intrinsic to AI. That's more of a preexisting concern about advancements in technology taking away jobs from certain people.
And I think we're going to have a hard...
Oh, go ahead, Doug.
I was going to say, it is a concern.
I mean, the thing is that, like so many things Big Tech does, they've deployed this without considering those things. We don't have the ethics or the systems in place to deal with it. They don't care if people are forming romantic relationships with characters and committing suicide. Their real, true belief is that AI is going to be so significant,
it's going to solve all the problems they're creating by deploying this worldwide without
the foundation. If millions of people lose their job over this,
AI will help figure it out, right? But in the meantime, there's going to be an awful
lot of pain if that's really where it goes. So yeah, I think that is a valid concern and
a big concern. I just want to circle back. I asked a couple of questions in my Renew article coming at this spiritually. I'm like, if there is a prince of the power of the air, would we expect there to be spiritual warfare? If the whole world lies in the power of the evil one, like 1 John 5:19, would we expect
products made by the most powerful corporations in the world to be helpful to Christian ministry?
Would we expect that?
Why do we just accept these tools? Why do we say yes to this?
Okay, Jay's example of building this app, programming it for his congregation, is admirable on one hand. I mean, that's cool.
That's a great story. And yet it's almost like a hammer in search of a nail. He had this tool.
He had this opportunity. What should I create? I should create an app because I can.
Did Jesus command efficient discipleship?
He changed the world with 12 guys powered by the Holy Spirit and that multiplied across
the world.
He didn't need this two-year-old deceptive technology to get his thing done.
So I'm just asking people to ask that question and really be concerned about there are things
behind this.
There are things that it's doing to us and it shouldn't just be accepted at face value
because it might save us a little time.
Yeah.
I mean, there are things to be worried about, all kinds of them. There will be job loss, deepfake content's a problem, election manipulation, security threats, lack of spiritual discernment.
AI pornography is crazy terrible.
Oh yeah, for sure.
Crazy terrible.
There's all kinds of things.
Although interestingly, it could arguably destroy that industry from a people perspective, you know, not to get too far off the rails. Even though you're at Covenant Eyes, you know, you're really deep in that particular world, helping to prevent that.
Yeah, yeah, that's my job.
That's my day job.
It's a wild world.
But in reality, like Jesus certainly didn't need this, but he also didn't need cars or
the internet or radio or television, but it doesn't mean we shouldn't use those things
for the glory of God.
So I think people have two options.
They can sit around and make a list of all the things they're worried, scared, and anxious
about even though the Bible says, do not be anxious about anything.
And don't worry about tomorrow because today has enough trouble of its own.
Or they could ask the question, what does this make possible?
How can I use this in a way that would be God-honoring and glorifying
and help reach one more person for the gospel of Jesus Christ? That's what I'm compelled
to do. Will we make some missteps along the way? Probably. Will there be some things that we maybe step into that we need to pull back on? Probably. And we should have wise people around us who know the Word and hold us accountable.
And so, that's why I appreciate, Doug, you in this conversation.
I appreciate you challenging my webinar that I did on this, which is how we all ended up together
here today. Like, I greatly appreciate that. And I want to be challenged in that. I want to think
deeply about it. I want other people to go, hold on, what does the Bible say about this? I need
people like that in my life who are saying those things because I'm prone to wander, you know? And so, I'm definitely paying attention to that. But I'm
also asking the question, what does this make possible?
That's great, Jay.
I got a question. Can I raise another question?
Sure. Can I just say real quick?
Sure. Yeah, yeah.
I really appreciate, Jay, your spirit about this. It's wonderful. And just that we can
have this dialogue. I just want to make clear that I'm not making a fear-based case. I'm
not saying I'm afraid. I'm a technologist. I know what these things are doing. I am concerned for
people. I'm concerned about deception. I'm concerned about the scriptures, the warnings
to know the times and to not be deceived. And I'm concerned about what I've studied about how technology changes us, and this particular technology, not technology writ large, but this particular technology, is problematic. So not because I'm afraid, but because I've studied it.
So thank you though.
Here's my question.
Is AI, or will AI be, so intertwined with all future advancements of technology that it will be nearly impossible to separate the two? Healthy technological advancements are something we've rolled with throughout human history; can you embrace technology while rejecting AI, or will it become so intertwined that it would be impossible to reject AI and not technology?
Yes.
I think that's what they're going for. I would say that's what Big Tech is hoping for.
I'm hoping this is a dot-com bubble type of thing and it's going to implode as people realize that the hallucination problem probably isn't going to go away and it's not as great as it was. I'm hoping that things will shrink and we'll return to sanity. We're at the 1999-2000 peak: the internet is going to change everything, and as long as you put dot-com after your name, your company's value goes up 10 times. It's that way for AI right now too.
So I think they're working so hard to win. When you look at our big tech companies, they are competing with world powers; they've made this an existential risk, an existential thing for them. So they hope that happens. And so it's already happening to some degree, but I hope personally that it'll
roll back some. I just would encourage people to turn it off everywhere they can. I turn it off
everywhere I can. I do think that there will be an interesting future. I think about this
in business a lot because I run a business that arguably could be totally replaced by AI in the coming years.
I think many businesses, especially high knowledge based businesses could be.
So I got to pay a lot of attention to it.
It matters a lot to people I get to employ and the people that I care about.
And I think there's going to be a strong divergence in two directions.
I think one is going to be almost absolute and total dependence on technology, which I think is a mistake. I think the other one's going to be a total rejection of
technology, which may not actually be a mistake. I mean, people do homesteading and all kinds of
stuff. I think there's going to be a lean in both directions. There's an old Irish proverb that says,
for every mile of road, there's two miles of ditch. I think we've got to pay attention to that.
The pendulum often overswings. And I think it is
swinging in the direction of probably too much technological dependence. And so I think Doug and
I probably both agree with that, even though we both would call ourselves technologists. We're into
this stuff. Like we're, I mean, I don't want to put a label on you, but like I'm a nerd at heart,
you know? Like I grew up, I love this stuff, you know? It's fun to me.
It's entertainment.
It's an enjoyable thing to learn and grow with.
I think it has a huge impact on the positive aspects of humanity.
And man, I see the negatives.
Like I mean, look at social media, you know?
But at the same time, what am I to do as somebody who gets to lead in a large local church?
Should I reject it wholeheartedly or should I use it
to the best of my ability to reach people for the glory of God? That's what I'm going to do.
But there are still potholes along the way to maneuver around, and we've got to pay attention to the pendulum overswinging. I think we would all agree that there needs to be a lot more
intentional, robust discipleship in the area of technology, especially youth groups.
Would it be going too far to say that if you're a youth leader and you really want to disciple Gen Z and Gen Alpha, you have to be consistently discipling your students in how to use technology wisely, spend less time on social media, maybe don't even get on social media? I mean, I just feel like it's the elephant in the room. It interrupts discipleship on so many levels, not intrinsically because it's there, but because people are just either not being discipled on how to use it wisely, or the addictive element is such that you're battling chemicals in people's brains. That takes
a lot of intentional work to disciple people. That's why I wrote my book. Screen time is the
discipleship issue of our age and especially understanding the intentionally addictive
practices of the industry. So, you're exactly right. We've got to be discipling our kids, and we have to be okay saying no.
You have permission. Every time I do these talks I say, you're okay to say no to generative AI too, by the way. And you're not going to be left behind just because you're not as efficient as some people might be. It's only been here two and a half years. Sit back and watch it. And it's okay to tell your kids no about social media too. In fact, you really, totally should.
Yeah. I mean, I have five kids. I care a lot about this. My youngest is 12. My oldest is 20.
We pay a lot of attention to it. My 12 and 13 year old girls do not have phones,
even though most of the people that they know have them. I do not care.
Good for you, man. Well done.
I do not care. Interestingly-
That's a hard battle to fight.
It is. But interestingly, if you look at the culture outside of our
theological, Christian-based environments, the psychological community is catching up to this stuff too.
It's funny how it all comes back to the truth.
And almost any really well-known psychologist at this point would say that children under
the age of 14 should not even have phones.
There's massive psychological research behind this.
And really anybody under 16 shouldn't have social media.
I mean, it's like you don't give kids certain things.
Alcohol is not, well, depending on your belief structure, alcohol is not inherently bad, but for many people, they should never go near it in any way whatsoever.
And social media is the same.
It is a drug, let's be clear.
And it is a drug of dopamine to our brains, but most things are.
And so we should pay attention.
But yeah, I mean, discipleship of the digital age
for our children is one of the most important things
right now.
And it's in my own home.
You know, I mean, we've got a home group tonight with about 20 high schoolers who will be at our house. And so I'm acutely aware of these things because I see the damage, and the damage is very significant. It's not just generative AI, because we've kind of broadened our scope now, but technology in general
with our kids, gosh, we've got to pay attention to it. Jonathan Haidt's book, The Anxious Generation, might have single-handedly done more to help in discipleship here than anything else. I mean, just societally, it's a number one New York Times bestseller. Schools are already changing their policies on cell phones.
So I just want the church to be able to integrate just the reality of these dangers and just
integrate these into discipleship practices.
All right, one final question. What is the future of AI? Where will we be in two or five
or ten? Can you even imagine? I mean, both of you agree how just ridiculously fast this
thing's developing. Can you imagine at this pace in 10 years, are we going to be put in
pods so that robots can use us as batteries like the Matrix?
Like, I mean, no, but it's crazy to think about the speed at which it's developing, far faster than any of us even know. Where are we going to be in two, five, ten years with AI,
if it keeps going the way it is? I'm really concerned about the impact that AI is currently
having on us and what we're
doing to the next generation, who is trying to learn.
So there are a few elite organizations
that are creating this technology for us,
and the rest of us are being harmed by it.
And so what kind of a world is that going to be?
I'm less concerned about all these predictions. I mean, when you read the big tech leaders, they're predicting eternal life, eschatology-type stuff. They're going to solve every biological problem: no more sickness, no more death, no more crying, no more pain. It's Revelation, right? That's truly what they're predicting, or others are predicting we're all going to die. I'm not willing to predict either one of those, but what I can see is that if we're losing these abilities, we're going to lose our discernment. Then where are they leading us?
We've already been harmed by social media and the mental health crisis.
There's this documentary Tristan Harris and Aza Raskin did called The AI Dilemma, kind of like The Social Dilemma, but it's The AI Dilemma. It's on YouTube. They talk about this wave two of AI; they call them Golem-class AIs. They talk about just the collapse of reality. They talked about how 2024 was the last
human election. Elections are going to be broken. Our ability to perceive reality is going to be so
diminished that that kind of world is going to be very, very difficult. So my call to Christians is to say no to it now and work on growing your discernment,
growing your wisdom so that you can help other people, you can love other people through
whatever comes, right?
So I'm less concerned about the extreme stories and more concerned about the day-to-day harms
by using the technology itself.
Yeah.
Jay, your thoughts?
Yeah, I'll just close this way.
I think there are a couple of things that we would all agree on. Number one, we should know the Word.
We should get in the Bible, just the Bible by itself. The best reference for the Bible is the Bible itself, always. It's not a commentary. It's not a website. It's not somebody else's book. I got into a bad habit for a long time of reading too many books about the Bible and not reading the Bible itself.
I think that's a risk for a lot of people, whether it's AI or other people's books. So, I think we need to know the word. Number two,
I think we need to get in community. We weren't made to be alone. The first time that God said
it wasn't good was when He said it wasn't good for man to be alone. Maybe it was in the context of
marriage, but I believe it was also in the context of the greater community. God meant for Christians
to get together in person, physically. And that's as a guy who also helps lead a large online
ministry as well. But I believe that people should gather together in homes.
They should gather together in buildings.
They should gather together wherever.
I think Christians were made to be together.
The ecclesiastical body of Christ should come together.
No question about it.
I think we all agree on those two things.
Yeah, I couldn't agree more, Jay.
That's very true.
Third, I would ask, as it relates to AI and the future, which is your question, Preston,
how can we use this for good?
I think that wholeheartedly rejecting technology or moving away from it,
all it does is leave no good people in the mix... well, there are no good people, to be fair, if you want to really get into the theology of it. But here's what I don't want.
I don't want a world where the future of AI, which will dominate business, which will dominate culture, which will dominate the entire structure, takes shape without Christians involved in it. To be really clear, this is the greatest human innovation other than maybe the splitting of the atom, which didn't go too well in some cases. The impact that it will have is bigger than the Industrial Revolution. It will be the largest, most significant cultural change we've ever experienced over the next 20 years. And that's coming from a guy who's got a 12- to a 22-year-old. I care a lot about this stuff.
I care a lot about my kids' future and their kids' future and their kids' future, and that
the gospel of Jesus Christ would be transferred generation after generation after generation.
And so that's why I believe that Christians should be involved with these things, including
generative AI, and ask the question, how can I use this for good? And we should have guys like Doug, who are sitting on the other side of us going,
hold on, pay attention to this, because I'm into that too.
Thank you guys so much for the fascinating dialogue, man. Here's my prediction: I'm going to get a bunch of emails from people that are also into this stuff saying, can I come on your show to talk about this? You didn't have this person, you didn't
have this person. So I think this is just stirring up the dust, which is good.
We need to have ongoing conversations about this. And you guys modeled a lot of charity,
wisdom and thoughtfulness in this. So thank you both for doing this conversation and thanks
for being a guest on the Converge Podcast Network.