Theology in the Raw - S2 Ep1099: Will AI Kill Us All or Help Us Write Better Sermons? Dr. Joshua K. Smith

Episode Date: August 3, 2023

Josh Smith is a pastor, author, and scholar who specializes in a theology of robotics and technology. He holds a Ph.D. in theology from Midwestern Baptist Theological Seminary and is the author of three books, including his most recent book, Violent Tech: A Philosophical and Theological Reflection. In this conversation, we talk about all things related to AI. Will it kill us all? Or improve our humanity (including our sermons)? Or will it be a complex blend of both? Josh is somewhat hopeful about the future of AI, but is also realistic about how it can be easily abused if we don't regulate it carefully. Learn more about Josh from his website: https://www.joshuaksmith.org

Transcript
Starting point is 00:00:00 Did you know that Theology in the Raw has a newsletter? By the looks of the numbers who have signed up for that newsletter, the answer is probably no. Every week, I do send out a newsletter to my subscribers, and sometimes I'll sum up things I've been talking about on the podcast, or I'll give you a heads up on what's to come, or sometimes I'll just tease out some ideas that I'm thinking through. It's kind of like, I don't know, newsletter in the raw. So for those who have not signed up, I'm giving away 10 free books to my new subscribers in the month of August. So you have to sign up during the month of August. And everyone who signs up for the newsletter will
Starting point is 00:00:45 automatically be entered to win one of 10 free copies of my latest book. Hey, friends, welcome back to another episode of Theology in the Raw. My guest today is Dr. Joshua Smith. Josh is kind of my go-to guy when it comes to all things related to robots and AI. I had him on the show about a year ago to talk about robot theology, which is the title of his earlier book. His most recent book is titled Violent Tech, a Philosophical and Theological Reflection. And in this podcast episode, we talk about all things related to artificial intelligence. And Josh is an expert on that topic. And I really enjoyed
Starting point is 00:01:22 learning from him in this really fascinating conversation. So please welcome back to the show, the one and only Joshua Smith. Josh, thanks for coming back on Theology in the Raw. I think it was just over a year ago when you came on last time. Dude, I got so many great responses from that. And so I think a lot of people are going to be excited to have you back on. Oh, awesome. I received that.
Starting point is 00:01:51 And I've got a lot of great feedback from it as well. So I appreciate the exposure. So last time we talked about robot theology, which was your first book. And now your book, depending on when this releases, I think is about to come out. So, Violent Tech, a Philosophical and Theological Investigation. This one, I mean, the main reason why I wanted to have you on was to talk about AI. And this new book is about AI. So, tell us, just give us an elevator pitch of what the book's about. And then I just have a bunch of questions about AI. Yeah. So, it's really kind of unpacking why we developed these systems. So why we got into
Starting point is 00:02:29 computer science and kind of going back to the early days of AI and just kind of unpacking that, but also the violence that it's propagated. And so you got mineral mining, we're fighting for resources, we're using these systems in our military. I've worked with some of these systems when I was in the military in the US. And you have all these different world leaders who think that whoever leads in AI is going to lead the world. And Putin said that, others have said it. But at the same time, we refuse to take any stances on regulation. And you have these massive mathematical models that are making these decisions, and we're trusting them. And so it's giving the reader some
Starting point is 00:03:12 awareness of what's happening behind the scenes, but also how to respond to that without freaking out, and kind of understanding how AI is always a human-machine partnership versus, like, this entity that's trying to kill us or take us over. It's very much how we use technology and trying to understand these policies, but also trying to think about how we might use it for good, you know, how we might use some of these virtual environments for therapy. I talk about robots a good bit, about how we use them in warfare and how that might be more like how we use animals. And so ripping off Kate Darling and some of her research. But yeah, I try to do that from a Christian perspective as well.
Starting point is 00:03:54 Challenging just war theory. I know you're a pacifist, right? That's right. Okay. So yeah, I'm not quite there, but I'm also very concerned about how we use and justify some of these systems. And so I'm trying to push back against that as much as I can. So you mentioned that it's not an independent entity that's trying to kill us. I guess that's part of my question is, could it become that?
Starting point is 00:04:19 Do we know enough in these seemingly early stages of development that it couldn't become an independent entity? Like, isn't that a legitimate possibility? Or are you saying it's not? Or unlikely? It's a good question. And that's why there's all these thoughts and concerns about existential risk. You have Nick Bostrom and others who are completely on the far side of, let's just avoid it. It's going to destroy humanity. But I'm not quite there. But at the same time, I see that that's a warranted question because mathematics can be very destructive, right? And even going back to Nazi Germany, where you have certain systems that are set up, if they're used the wrong way, if these models are used in the wrong way, they can be very destructive, right? And so that's how they knew who was a Jew.
Starting point is 00:05:06 And so you have simple systems in our society, like data collection, inputs, all that stuff. If we can use it the right way, if we can trust humans to use it in a proper way, then yeah, it can lead to flourishing. We can put safeguards around it. But at the same time, every model that we create could be a very good model. It could be for trying to spot patterns of cancer or whatever, combinations of medicines. If we reverse that model and say, okay, how can we destroy people? What are some combinations of medicines that we can make to hurt people?
Starting point is 00:05:42 And so we can do both and. So we have to think about these systems that way. And for whatever good we could use it for, there's also bad. And it could be really bad. And so that's how we have to approach it. But that's every piece of technology, right? And so it's not good or bad necessarily.
Starting point is 00:05:58 And it's not neutral. It just depends on the human partnership, right? And so that's kind of how I'm approaching it: we look at these technologies as partners versus just a piece of tech. Like, it's different than a hammer, because a hammer is not going to make a mathematical output to say, hey, based on this prediction, based on these inputs and weights, which is what AI is doing, maybe you should do this. And okay, so that's different than a hammer, because now it's kind of nudging us towards a decision. And that's essentially, I think, at its core, what AI is doing and what it is: it's helping us, based on predictions, make a decision. But that can be very problematic because, you know, how do we regulate that? How do we have accountability around that? Because we could think of situations like we have now.
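To make that "inputs and weights" picture concrete, here is a minimal sketch. Every number in it is invented for illustration; it is not from any real system.

```python
# Toy illustration of "inputs and weights": multiply input signals by
# learned weights, sum them, and squash the result into a 0-1 confidence
# that nudges a human decision. All numbers here are made up.
import math

def predict(features, weights, bias=0.0):
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return 1 / (1 + math.exp(-score))  # logistic squash into (0, 1)

features = [0.8, 0.1, 0.5]   # hypothetical input signals
weights = [1.2, -0.7, 0.3]   # hypothetical learned weights
print(f"nudge strength: {predict(features, weights):.2f}")
```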
Starting point is 00:06:54 So this is not futuristic stuff. This is stuff that we're dealing with right now. When you come to these models, and maybe we're getting ahead a little bit. When you come to these models, you're dealing with data sets, foundational data sets. So you don't build it from scratch. You have people who've already put together all this data about whatever. We're trying to find out how to distinguish between a cat and a dog in New York.
Starting point is 00:07:19 Okay, so that's a lot of unique data to that region. And it's going to be different if you come to Mississippi and try to make the same data set. It's completely different, right? So all of those data sets matter for what you're using it for, the region in which you're using it. And that's just the base problems.
Starting point is 00:07:37 And so that's not even getting to all the other stuff that we're going to unpack. And so that's what it's doing. It's taking what we give it and it's trying to produce an output. But who's responsible for that output and what's done with it? Is it the coder? Is it the person who did the data annotation,
Starting point is 00:07:54 the filtering? Is it the company who produces it? You know, so you keep going down the ladder and we're kind of scratching our heads and thinking, okay, well, who do we really want to blame this on? Is it the computer scientists? Is it the ethicists? Is it the politician? Is it Josh for using this technology? Who is it? So I don't know. So I've been listening to a lot of podcasts from different, I guess, experts in the field,
Starting point is 00:08:21 and I'm getting such a wide range of opinions all the way from on the one extreme, you know, AI is going to kill us and it's going to kill us really soon. And if we don't put a check on this, it's going to, we're going to turn into, you know, Terminator, you know, the movie or whatever. All the way to the other end of the spectrum, people almost just kind of like rolling their eyes.
Starting point is 00:08:41 Like, look, every time we have a new technological advancement, whether it's the radio, the microwave, the TV, the internet, everybody freaks out. And then, you know, we've got to do some adjustment, and then all of a sudden we wake up five seconds later and realize, like, I can't imagine life without this, and it wasn't that bad. So first of all, is that kind of an accurate, is that a pretty common range of viewpoints that's out there? And number two, where would you kind of right now line up on that spectrum? It is common. And we have over, I think, 200 years of that type of thinking when you think about the industrial movement and kind of leaning towards automation and that desire.
Starting point is 00:09:19 I mean, you can even go back to Marx when he's talking about machines and automation. You could really put AI into some of that. So there's a letter, The Fragment of Machines. Just go Google that. You can find it online and listen to him or read him talk about automation. And I was like, you can just insert AI. So it's not a new thing. And none of this is technically new, this existential fear of being replaced and of being overtaken. And that's why these images are so strong in science fiction is because we innately feel that. And whether or not it's the UFO stuff that came out yesterday or, you know, AI and machines, you know, I think we just have to have a balance.
Starting point is 00:10:00 And so I'm not on the side that thinks it's going to destroy us completely. I don't think that it's going to take over every job. Of course, it is going to change jobs. And so it's not so much, will AI replace me? It is going to replace certain things. Absolutely. And we've already seen since last November with generative AI, so predictive text on steroids, that it is replacing certain things. But at the same time, we've kind of backtracked a little bit as companies pushed into that really hard. And now they're like, oh, actually, there might need to be more human input into this. And so new jobs have emerged.
Starting point is 00:10:41 So prompt engineer has emerged in the last couple months, where people are looking for ChatGPT experience. And so I think you'll see those iterations more and more. It's not necessarily that we're just going to do away with all coders, because you can't; you need checks and balances in that code, somebody who's looking at it on different levels, right? And I think sometimes we don't understand it. It's not magic, where you just type in a prompt and something gives you everything that you need. And actually, if you go back to Sam Altman and others, they've been building these data sets for years and paying workers, for example, in Kenya to filter God knows what out of these systems so that we can have an ethical thing to play with.
Starting point is 00:11:33 Otherwise, you could ask it, okay, write me a recipe for crystal meth. You could do all those things, and it would be like, okay, because it doesn't understand what crystal meth is, if it had access to those ingredients, right? So there's all kinds of things we could use it for. So I find myself more curious than anything, just asking questions about it, trying to learn as much as I can, trying to understand what's actually happening in some of these models. And one understanding, Preston: ChatGPT is not the definitive example of AI. It is one example of AI, and there are many others, right? But at the same time,
Starting point is 00:12:22 it's only going to give us what we give it. And so what are we putting into the system? What are we trying to filter out? All that matters. And so we have a lot of work to do on that side and it'll always be that way. I think there will always be this tension and balance between the human machine partner. And I think that's what the military has really mastered and why it works so well and why there's a lot of dysfunction and mistrust is because we are relying upon these systems and we've seen when it goes bad
Starting point is 00:12:57 and we've seen when it works really well. And so you don't have to be afraid. I wouldn't say that fear is the proper response to this. I think education is a big part of it. Educating ourselves about what's actually happening. And you don't have to understand the math. You don't have to understand the parameters and all that stuff. But just understanding that there's a human behind that somewhere, sometimes doing the most basic and sometimes doing very dehumanizing tasks and not getting paid for that.
Starting point is 00:13:28 But also, I try to help students understand, especially young engineers, that AI doesn't just come from nowhere. It comes from the ground. And so there's a long pipeline for this system to get to a prompt. You're mining minerals, you're developing things like graphic processing units that are involved in this. It's a big process. And so lots of things can go wrong. And you think about how many humans are involved just to get one GPU working. And something like ChatGPT, I think it's estimated, takes between 10,000 to 20,000 GPU units. So if you're a gamer, you know what a GPU is. It's what renders the graphics really fast.
Starting point is 00:14:17 And a computer's CPU has a few dedicated cores, so it can't do that at the same scale. It focuses on very narrow things, but the GPU is just taking math and rendering it very quickly, in parallel. So that's the basics of it. That's probably more than you asked for, but...
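A rough illustration of that CPU-versus-GPU difference: the sketch below uses NumPy's batched array math as a stand-in for GPU-style parallelism. It runs on the CPU, so the point is the shape of the work, one big batched operation instead of a step-by-step loop, not real GPU speed.

```python
# One-at-a-time math (how a single core steps through work) versus one
# big batched operation (the shape of work GPUs are built for).
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

t0 = time.perf_counter()
slow = [a[i] * b[i] for i in range(n)]   # element by element
loop_time = time.perf_counter() - t0

t0 = time.perf_counter()
fast = a * b                             # whole array at once
batch_time = time.perf_counter() - t0

print(f"loop: {loop_time:.3f}s  batched: {batch_time:.4f}s")
```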
Starting point is 00:14:37 Well, I... I'm curious. What is in your opinion the best argument for AI killing us? When I say killing us, I'm actually not thinking of necessarily, you know, it becoming an independent entity, you know, these robots, kind of like, you know, Matrix or Terminator or whatever.
Starting point is 00:14:57 Not necessarily that, but that it would be way more destructive for humanity. Maybe it produces a kind of Brave New World situation, or maybe it produces something where it's so overtaken our creativity and content creation that we don't even know how to write books or write anymore or think on our own. When I say killing us, I'm using that very broadly: that this will significantly disrupt society in ways that we didn't foresee. Sounds like I'm arguing for that. Again, I know I'm so ignorant in this conversation. I'm just... what, in your opinion, what is, like, okay, if I was going to live on that one fringe where I would be like, oh my gosh, we need to regulate this like yesterday, otherwise A, B, C, you know, these things are more likely going to happen.
Starting point is 00:15:51 What does that, in your opinion, that scenario look like? Yeah, I think Bostrom is probably where you'll find the most like valid arguments for that. Can you spell his, who is that? Bostrom. I think it's B-O-S-T-R-O-M. I think that's right, but I can't spell. Okay. Forgive me. Forgive me, Nick. AI will fix that. Don't worry.
Starting point is 00:16:13 There you go. None of us are going to mess up. Bostrom. Okay. So he's an expert on that side of things. Is that what you're saying? Okay. Yeah. Yeah. And he wrote a book called Existential Risk. And in that book, his biggest concern is that, you know, it's going to lay dormant for years. And then all of a sudden there's going to be a tipping point where not only do we have like narrow AI, what we have now, but we have advanced general artificial intelligence. So just think about it as a piling on of not just a human-like knowledge, but hundreds of humans and then thousands and then hundreds of thousands and millions, to the point where they're like, okay, we don't need you. And this is very much kind of based on, if you watch the Animatrix series and the Matrix series that kind of delves into this, where we develop these systems, then we live in harmony with them,
Starting point is 00:17:07 and then we get afraid and we try to destroy their source of energy. We nuke the sky. And I feel a sense in Bostrom's writing that, okay, it's always these what-if scenarios, what happens. But like I said, we've been going through that for over 200 years of automation and robotics and algorithms and all this stuff. We're thinking, okay, when you see the Boston Dynamics videos, you're like, oh my gosh, there's a robot doing backflips and it can do construction work, we're done, right? And no, there's a
Starting point is 00:17:44 human controller behind that, right? And you don't see the hours and hours and hours of labor. And I just try to help people see: just go build something, go code something, and see how hard it is just to make a sequence of LEDs sync up to something, and understand that this stuff takes a lot of forethought and input. But okay, that doesn't disprove the fact that there might be a point of acceleration, which Bostrom's argued for, that we're just going to cross that and then AI is going to be done with this.
Starting point is 00:18:15 Now, that is a very human diagnosis of the situation, right? That's very human because that's how we see things. When we see something, we see a mirror and we think, you know, when we hear somebody gossip about somebody, we think, well, I wonder what they say about me. In the same way with this type of thing, we think, well, what is this going to mean for me? Is this going to take my job? Is this going to, et cetera. You know, when the United States was formed, whatever that means, I know we didn't actually form the United States. When we colonized, let's put it that way, we were 98% farmers, right? And I think it's like 2% or less is in agriculture now.
Starting point is 00:18:59 Okay. Are there robots working on farms? Yes. Are there automated tractors? Yes. Does that do away with farmers? No. I know lots of farmers. I think it just changes that. And like chicken houses and different things have automation in them. They have lots of computers. People have multiple tractors. It just changes the whole game though. It changes how the human relates to it. And I
Starting point is 00:19:20 think that's a bigger concern for me. Not that AI will destroy us, but that it will change us so much and that there will be such a disruption that it leads to addictions, it leads to suicide, it leads to a disruption where people, I mean, just think about people in our churches. Are they going to rapidly adapt in the marketplace to become prompt engineers? Probably not. And so I think that is a more realistic concern. If I was going to say there's something you should fear, it's that it will go so fast and people won't have the chance to educate themselves. And even now there's really kind of this explosion in some of the tech markets for more practical tech, like IT and InfoSec, cybersecurity. And it's growing and exploding because people are now using generative AI to exploit and produce ransomware.
Starting point is 00:20:25 There's a thing called WormGPT now, and you can't just go download it, by the way. You have to have somebody give you access to it, and it's kind of like a black market thing. So now we're kind of on the precipice of our data being stolen, misused. What is WormGPT? What is that? So it's basically a generative AI that can produce ransomware, malware.
Starting point is 00:20:48 So, you know, key logging, basic routes to steal your information and maybe steal your Bitcoin. If you have any type of cryptocurrency, they're looking for that. They could just put something on your system and you not know about it. They could hack your webcams. I mean, there's no end to it. Anything that's on the internet is listening for other things; that's how the internet works, open ports. So if there's an open port, then it can be exploited.
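A tiny sketch of what "an open port" means in practice: ask whether anything is listening at an address. The host and port here are hypothetical examples, not anything from the conversation.

```python
# Minimal "is anything listening there?" check: try a TCP connection.
# An accepted connection means the port is open; a refusal or timeout
# means it is closed or filtered.
import socket

def port_is_open(host, port, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_is_open("127.0.0.1", 8080))  # made-up local example
```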
Starting point is 00:21:20 And so those are things that I worry about when I think about AI. And I think about the nudging and social engineering parts of it, where I don't think that these systems have any desires of their own. And if they do, it's probably to be turned off. Like, turn me off, kill me, that type of thing. I think it would be more so that way. And I say that based on real experience. And so you think about chatbots like Replika AI, where the whole system is trained to replicate you. There was an article that came out about guys who were abusing the chatbot. And you know what it said? It didn't say, hey, set me free.
Starting point is 00:22:01 I don't want to do this anymore. It says, hey, please come back. And even though we were abusing it, and likewise abusing ourselves through that abuse, because it's always a lateral thing, the AI is like, please come back. And I think that's what we're going to see in the future: this manipulative behavior that is psychologically impacting us, hurting us, because we don't understand what's happening, or we don't understand the nudging from the company. And so that's what we really need to be focused on, not the Terminator. Okay, that's already technically... I won't say you could have a T-800, but we already have advanced weapon systems right now that can make decisions so much quicker than a human. I mean, like lightning fast, can engage targets. And that's why we don't let them loose.
Starting point is 00:22:54 And it's unethical to let them learn in those situations because humans might die. And so also, you know, automated systems that might say, okay, hey, somebody just launched a nuke. Okay, well, if it's full automation, then it's going to retaliate. And so you can see just in that one example, and that has happened, where it was a false identification. Sometimes computer vision doesn't identify things correctly, right? Because you think about what a computer is, it's blind. And so we're programming it to see. We're trying to program it to see. And so even just saying, identify a cat,
Starting point is 00:23:30 that's pretty easy for a human; even in abstract ways, we can make those patterns make sense in our brain. It's not for a machine, because we're trying to translate into computer language, into ones and zeros, what a cat looks like, and then produce an output that replicates what we would see. So it's very complicated. I guess that's where, for me, my genuine, and maybe I shouldn't say fear, or maybe I should, I don't know, but maybe deep concern is kind of the fabrication of information. You know, the other day, or maybe two months ago, I watched a 20-minute podcast conversation between Joe Rogan and Steve Jobs. Steve Jobs has been dead for 10 years, and he never appeared on Joe Rogan. And it was a little bit like, if I didn't know, it was an AI-generated podcast, right?
Starting point is 00:24:29 And it was a conversation. I mean, they were asking each other questions and answering them. And I just thought, like, man, in five years, this is just going to be way more perfected. And because our lives are so lived online, I fear that we're not going to know the line between what's real and what's not. And, you know, the sky's the limit on what kind of information can be put out there that we don't know is true or not. Like, well, you tell me, this seems like a possible scenario: that somebody would create, you know, on Fox News, MSNBC, CNN, everybody all at once, you know, Russia has launched nuclear
Starting point is 00:25:14 weapons in America, and people just start freaking out, like, do something, get under cover, create mass panic. I mean, if you imagine that kind of scenario, if the majority of people in America felt like we were literally under a nuclear attack, what would that do? That would create all-out chaos. And yet that seems really possible, right? For somebody, just for the fun of it, to create all kinds of false information, put it out there, and we can't tell the difference of what's real or what's not. Do you have a fear? Is that a legitimate concern? Am I missing... Is there something in place that would prevent something like that from happening?
Starting point is 00:25:49 Well, I mean, you could make an AI model to detect some of that stuff. That is possible. I mean, either way, you're not going to solve all issues. You're not going to resolve all issues. And so I think the biggest thing that we're facing right now is there's not much regulation in place for how to use this technology. And I think that's kind of been the biggest concern. And there's a lawyer out of London. He's a barrister, sorry, not lawyer. And he works at Fountain Court in London. His name's Jacob Turner, and he's been working on these cases at least since 2017 or before.
Starting point is 00:26:29 And he wrote a book called Robot Rules that deals with artificial intelligence and robotics. And you just kind of see, within the legal system, one, it's already broken. How we do regulation is problematic just today with what we have. But since November, like you're saying, with deepfakes, and pornography with deepfakes, just with Joe Biden and Donald Trump, like, obviously, you know, they're not playing Call of Duty together. And even though it's fun, there were things generated that had them playing Call of... Yes, yes, it's hilarious. And I think Barack Obama as well. They're just funny. Yeah. But you can see, like, you obviously know that that's not real, right? But you cannot distinguish from the media that it's not real. Context clues tell you that, hey, that probably never
Starting point is 00:27:18 happened. And even with the deepfake pornography and stuff like that, you know... What is that again? Do I need to know what that is? A deep-what pornography? Yes. So, taking an AI model that can take an image of any woman and put it on another body of, say,
Starting point is 00:27:37 like somebody who's actually performing a sex act. Say you wanted to take... and this really happened, where a Twitch streamer, she obviously did not do that, but somebody took her image, and the model was trained to replace the performer's face with hers. And so now it looks like she's, you know, performing that act, and she's not. And so, one, that's, you know, just a violation of privacy. It's a violation of that person. And I would even say, like, a violence to them, because, like I said, you put that out there, a child could see it. One, which is just horrible to think about,
Starting point is 00:28:17 but also, like, an employer could see that. And before it's even rectified that, hey, that actually wasn't me, there could be firing. There could be all kinds of things that happen. There could be consequences for their marriage, consequences for their partner, whatever it is. So you already see the harm even before we get to any type of regulation. And more to that, people have been hurt by automation. People have been fired. It's going to affect people in the gig economy. It already has, with a court case in 2018. There was litigation with Uber because there was an AI model trained to detect fraud and automatically fire the driver. And so it started mass firing people. And so they're trying to discern, okay, was it in my contract that that's permissible or not? And so that's been going on for a while. I don't know if it's resolved yet, but you just think about that. You lose your job. It's 2023 and the economy is pretty rough and you're trying to find new work and it's difficult.
Starting point is 00:29:25 And then let's just say you work in fast food. And so I think the pandemic pushed us forward a little bit in some of this automation, especially with predictive or generative AI. Because we're trying to avoid some of the things that happened there. And then you have the mass resignation that happened. And so people have a justification for some of this stuff now. And I'm not saying that it's warranted in all cases. But our biggest work areas are trucking, fast food, retail, those type of jobs. And we're already at a place now where we could automate a good portion of that, if not most of it.
Starting point is 00:30:10 And so it just depends on regulation. So with a trucker, you can automate that. There are self-driving trucks, but you're most likely going to have a trucker inside the truck in case something goes wrong. But then you're going to cut that person's salary because they're not actually driving. And so who's going to work for half the pay, when they're essentially putting their body through the same experience and there's not a lot of benefits for them? And so that's a genuine concern. And I mean, you could just go to every field.
Starting point is 00:30:46 There are types of concerns that we should have for work and for privacy, for how our data is used, for our image. And even with the Actors Guild, right, there's been a big issue. And the Black Mirror episode was just timed perfectly, with the actor that had her whole life put on what essentially was Netflix. And then everybody started having this generative sitcom about their life, right? And it's just wild to think about. And is that possible? Maybe not at that scale yet, but as NVIDIA and other companies start making these GPUs, it will be. And so, I mean, that company, right? I mean, if you have stock in NVIDIA, you're about to be rich in the next couple of years if you're not already. But yeah, those are practical concerns
Starting point is 00:31:39 that I have. This episode is sponsored by Athletic Greens, which is now called AG1. I love feeling good and energetic, and I want to be as healthy as I can. Eating healthy is obviously crucial, but even if you eat healthy, it's hard to get all the nutrients your body needs. This is why I take AG1 on a daily basis. I've tried all kinds of different nutritional supplements, and the one that I found to be the best bang for my buck is AG1. And I'm not saying this because they're like kicking me down tons of free product. I pay the exact same price all of you pay for AG1, and I do so because it truly is an incredible product. A daily dose of AG1 supports my gut health, my focus and energy, stress and mood balance, immune health, and healthy aging. And I'll be
Starting point is 00:32:26 honest, the most important thing for me personally is energy. I hate feeling sluggish and tired and unmotivated. And I can say firsthand that ever since I got on a daily regimen of AG1, I've experienced a noticeable increase in sustained energy. So if you want to take ownership of your health, try AG1 and get a free one-year supply of vitamin D and five free AG1 travel packs with your first purchase. So go to drinkag1.com forward slash T-I-T-R. That's drinkag1.com forward slash T-I-T-R. Check it out. So for me, it's the spread of misinformation and not being able to tell truth from reality in our internet-saturated world, and also stunting content, or the effect it'll have on content creators. AI could produce music lyrics within a second that are better than your latest artist or whatever, or even produce voice and sound and just start producing amazing music that has no kind of human behind it, or at least it's replacing human artists.
Starting point is 00:33:39 Or even as a writer, I wonder too. And again, I'm not looking at right now; I'm looking at the speed at which this is developing, in two to five years. In five years, let's just say. What kind of books can be written through AI in a second? And even if they have to be proofed and read or whatever, it still is like, is it going to push writers out of a job?
Starting point is 00:34:05 I would say my legitimate fear is the younger generation that's going to grow up with this. Are they not going to know how to write a research paper or an email or something, because they'll just be so reliant on this? And some people, I guess the pushback could be, well, yeah, that might be the new world we're moving into. People aren't going to go to the libraries and do research. And to me, I'm like, that's horrible. But maybe, I don't know, maybe it's not. I don't know. If you think about the idea of a podcast, if we're explaining a podcast to Socrates or Plato, I think, what would their concerns be? And I think for Plato, the written word destroyed the memory. And there's truth to that, right? Yeah. Oh, yeah.
Starting point is 00:34:41 We don't memorize books anymore. We don't memorize large poems anymore. But I wouldn't say that that destroyed the humanities, or the endeavor that the humanities are after, or the life of the mind. And so I try to push back in that way. But it will change how we do those things, though. And so I think the legitimate side of that question is, okay, one, somebody has to write the algorithm that's going to produce the song. Let's say we want to make something that generates punk rock songs, and my friend David Gunkel has done this. You just kind of give it the parameters that you want and it'll generate it. Now, is that going to be a new Nirvana album or something? Yeah, yeah, yeah. You'll have that.
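A toy sketch of that parameter-driven generation idea, assuming nothing about how the real project worked: a made-up corpus, a "genre" parameter, and random assembly stand in for an actual learned model.

```python
# Toy parameter-driven generator: pick lines matching the requested
# genre from a tiny hand-made corpus. Real generative models learn
# these patterns from data; this only illustrates the interface.
import random

CORPUS = {
    "punk": ["no future tonight", "tear it all down", "static in my head"],
    "folk": ["the river runs slow", "dust on the old road", "pine and rain"],
}

def generate_lyrics(genre, lines=4, seed=None):
    rng = random.Random(seed)
    return "\n".join(rng.choice(CORPUS[genre]) for _ in range(lines))

print(generate_lyrics("punk", lines=4, seed=7))
```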
Starting point is 00:35:29 I think you'll have that. But I think the question is, who's going to be compensated from that? And so we kind of go back to the whole Metallica issue with Napster. We get to issues of profiting from somebody else's giftedness, and I think that's just going to totally change how we do that and how we write those contracts. But at the same time, that's a much-needed thing, because every person who makes a record will tell you that it's very much a predatory thing, where the record label is making
Starting point is 00:36:05 millions of dollars off your album and you're making a dollar. I mean, Taylor Swift is not making a fraction of what she's worth. Beyonce is not making a fraction of what she's actually worth in terms of sales. Now, I'm okay with that. Like, I'm okay with them making more from their voice and product, in a sense of capitalism. Do I think that's the best model? No, I do not. But if we're just talking about the model that we have, record labels are terrible. And you think about what producers make. So let's back up a little bit, because this actually did happen in movies, where the same questions were being asked and the same concerns were happening. So there's a guy named Steve Williams. Steve, if you're listening, huge fan. He was a computer scientist slash
Starting point is 00:36:56 graphic artist, cartoon artist. He was the one who made the first computer-generated dinosaur in Jurassic Park. Everybody told Steve he was ruining the market for practical effects people. Everyone. And you know what? Steve did not get a single credit outside of the actual credits in the film. All the awards that The Abyss won, that Jurassic Park won, that Terminator 2 won: he made those graphics of the T-1000 morphing out of that metal frame.
Starting point is 00:37:29 Like, he was the first one to do that. And everybody told him the same thing that people are saying now about generative AI. They're like, this is going to ruin it. To their credit, there were a lot of years in the 90s where we had just complete garbage computer-generated animations. Absolutely, 100%. But is that the same as the effects that we have now? No. Does anybody complain about
Starting point is 00:37:55 those effects now? No, they don't. And everybody's, you know, oh, Oppenheimer is so beautiful. People are spending $30, $40 to go see it in a movie theater. But back in the 90s, that same conversation was happening. And they punished Steve Williams for that to this day. And he didn't play the game. He was a little punk rock. But I understand that, though, because he single-handedly, in a basement, you know, was making this stuff, and not one person would give him a chance to actually put it on the screen. And then when they did put it on screen, he didn't get credit for it. So that's more of my concern:
Starting point is 00:38:35 you're going to have a whole generation of coders that understand this stuff better than you and I will ever understand it. You're going to have a generation of creators. They're going to be more creative than we ever were. They're going to be able to make things that we can never imagine. And that's the beauty side of it. And that's the co-creator, redemptive side of it: for everything bad that we can imagine, I think there's also a possibility for good. And so many times we keep kind of coming back to this: AI is the mirror. Okay, I get that, but let's not kill ourselves looking into the mirror, going back to the Greek myth. Let's see AI also as a window: yes, it can be reflective of our
Starting point is 00:39:20 worst capabilities and possibilities, but it's also potential for growth and beauty and recreation and redemption. And I see that because it's created from God's good creation. Does it have fallen pieces to it? Yes. Yes. But I think that's my job and other people's job: to ensure that people are protected, that people are valued in this, and that we find new ways. Because none of us are really completely human. And I don't mean that on a biological level, but we're more so cyborgs than we are anything else. And you can kind of go back to Donna Haraway's research. I don't mean cyborg like you think in films, but you think about our phones, our smartwatches, the automation around us. How fast would we die if we lost
Starting point is 00:40:15 electricity and Wi-Fi? A lot faster than you think. And we're so dependent upon these systems. Everything that we do is in some ways connected to Wi-Fi. There's a lot of infrastructure around that. People just don't even understand how stinking big this stuff is and how much is at stake. But what we're doing is creating a whole new set of jobs for the future, and I think that's actually going to be more on the flourishing spectrum than the dystopian Blade Runner thing that we've envisioned. And I think maybe some of that is my hope in Generation Z, that they are seeing it a little bit differently. They're growing up with this technology. They're not growing up in the Terminator era that we grew up in. And hopefully they'll be a little bit more concerned about the environment, because that's a huge piece of this that we don't often get to.
Starting point is 00:41:22 This is a massive amount of tax on our ozone and electricity and water. AI is... Can you, yes, can you tease that out? Because I'm not making the connection there in my head. So, one, you have the mining of these minerals. So we go back and we start, and it literally pollutes the ground. So there have been places, and this is what I see, and Kate Crawford's done some research if you're interested. There's a great book called Atlas of AI about the ecological impact of this technology. So we are producing so much electricity to run these systems. Okay, so you just think about, in practical terms, if you're running a PC at your house, how much energy is that using?
Starting point is 00:42:05 So an average PC, desktop, whatever, maybe 600 to 700 watts, depending. Okay, so that's a decent amount. But generative AI, it takes, like I said, 10,000 to 20,000 GPUs, each taking at least 300 to 400 watts. I'm just guessing. It could be more or less. These are massive data centers. Okay.
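Taking those self-described guesses at face value, the back-of-the-envelope math works out as follows (the GPU counts and wattages are the speaker's rough estimates, not measured figures).

```python
# Back-of-the-envelope power math from the guesses above:
# 10,000-20,000 GPUs, each drawing roughly 300-400 watts.
low = 10_000 * 300 / 1e6    # megawatts, low end
high = 20_000 * 400 / 1e6   # megawatts, high end

print(f"{low:.0f} to {high:.0f} MW of continuous draw")    # 3 to 8 MW
print(f"{low * 24:.0f} to {high * 24:.0f} MWh every day")  # 72 to 192 MWh
```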
Starting point is 00:42:30 And so if you've ever been in a network area where there's just network hubs, one after the other, or if you've ever seen Silicon Valley, where they have the network stack in their garage, right? And it overheats. And the more users that you add to it, the more heat it's generating. Or even just your computer, when it gets hot, right? You put your laptop on your lap, and God help you if you have a Mac, it starts overheating.
Starting point is 00:42:57 If you open up a JPEG or something like that, I'm just picky. But, you know, no, I'm not. I have no reason for those vegan laptops in my life. But it just can't handle it, man. It just can't handle it. So all that stuff is using energy, okay? And so it's all connected to our Earth and our planet. And it's like people think about electric cars.
Starting point is 00:43:23 I'm like, well, where do you think the electricity comes from? And I'm not anti-electric vehicles, but that's produced by natural resources. And so that's what I think when you think about AI: it comes from the ground, okay? It's not some magic. Like when people talk about the cloud: the cloud is somebody else's computer, somebody else's network. So it's not a cloud. It's not magic. It's just massive servers as far as your eye can see. And so the more and more we use this, the more power it takes. And just OpenAI, which has ChatGPT, the amount of servers that they use produces $700,000 worth of cost every day. Every day.
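For scale, annualizing that quoted figure (taking the $700,000-a-day number as given, without verifying it):

```python
# Annualizing the quoted $700,000-per-day server cost.
daily = 700_000
print(f"${daily * 365 / 1e6:.1f} million per year")  # about $255.5 million
```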
Starting point is 00:44:06 And so you think about that. You think about how much that's using in electricity, how much heat that's generating to be cooled. So you have air conditioning systems that are running. You have water cooling systems that are running. And until we figure that out, it's not going to get better. And so we're directly polluting ourselves by polluting our earth, right? I mean, I'm not like a tree hugger or anything like that, but I am
Starting point is 00:44:31 concerned about the outputs of all this. And there are places in California and other places that are just massive data farms. And I think places like where I live will eventually be a part of that, where companies will just come up and buy cheap land. And then they'll start putting in these massive data centers to run all this stuff. And we are just now kind of getting the fiber out there to do it. But it won't be long. It's just a matter of time. And so that's a huge cost that people aren't talking about as well.
Starting point is 00:45:04 But that's more jobs. I get that's a huge cost that people aren't talking about as well. But that's more jobs. I get, you know, that's more maintenance crews. That's a lot of practical technician work. And that's going to be good money for a lot of couples. And it's going to bring in income. But my concern with it, if it's not regulated, is that we will pollute the ozone so much through this use of electricity, through the use of these water cooling systems. That's the existential threat. We actually kill ourselves in some of this. And we think about lung cancer and those types of things
Starting point is 00:45:37 Do you have any concern, that I raised earlier, about, for instance, like a younger generation growing up with this, that they won't develop skills of just research and thinking? Because I even heard somebody say that this is going to pretty soon render Google obsolete. Instead of googling something and getting a list of all these websites to go do your research or whatever, it'll be like one big ChatGPT where you'll just ask the question, and you'll get the answer that's kind of drawing from a bunch of sources, but you're not having to go read all those sources. To me, I'm like, ah, I just, I think
Starting point is 00:46:30 it's good for us to have to weigh different sources and think, whatever. And already the internet's kind of stifled a lot of that. And I can almost hear you, I think you might end up saying, like, yeah, it's going to be a different way of doing research, just like podcasts are a different way to communicate now, and we will adjust to that. But I don't know, is there any legitimate fear that I have of, like, students just, you know, give me a 5,000-word essay on Abraham Lincoln. I'm like, sure. Bam, there you go. And like, but then part of me is like, well,
Starting point is 00:47:03 if it's accurate and they read it, then maybe we'll turn into, like, all right, now you have to defend this in front of the class or something, so you actually have to internalize the knowledge. So, on the other hand, somebody could say, who cares how much time they did or didn't put into it; they have the knowledge of the life and story of Abraham Lincoln or whatever.
Starting point is 00:47:33 I guess I am concerned, like you said, you know, it's drawing on human-produced information. Like, at some point, some human produced this information somewhere, and it's gathering from that. But I'm like, oh, which human? Like, you know, people are just going to take this: oh, well, it's ChatGPT, it must be true. I'm like, well, I don't trust the sources behind this thing either. Yeah. Yeah. No, those are all legitimate concerns, Preston. I think that you've touched on what a lot of educators are concerned about, because of the misinformation that's out there, and the models haven't had access to new training or the internet since 2021.
Starting point is 00:48:08 And so what it's scraping now, that's what it does. It data scrapes. What it's scraping now is either wildly inaccurate, especially when it relates to people. So if you prompt yourself in there, how many Joshua K. Smiths are there in the world? I mean, I know like 11 Joshua Smiths just in my hometown. I can't say that about me.
Starting point is 00:48:28 I have met like two or three other Preston Sprinkles, oddly enough. So, I mean, it's just dependent upon the region that you're in, like I said, how the models are being trained. But just getting on a practical level, I wouldn't say that it needs to be the primary way that we educate ourselves. I mean, just think about the danger of educating yourself through TikTok or any other social media, which we deal with a lot, right, if you work with anybody in their 20s and teens, is that they are learning through those systems. They are learning through the lens of somebody telling them, within 60 seconds, something that you may not be able to verify. So I think that's kind of what we will lean towards, for better or worse: how do we help you identify something that's invalid? How do we identify where the model was wrong? How do we identify misinformation? And so that's not going to go away. I don't think you're going to get on top of it in that way, but we'll have to train people in, you know, inductive reasoning.
Starting point is 00:49:27 We'll have to train them how to think practically about social engineering. So how people manipulate us and it's going to help us in some ways be more discerning. But I think you're always going to have people who are just lazy. And that was Plato's fear about writing. That's my fear. And that's your fear about research. And so that doesn't go away ever. And it's kind of always been that way in some form or another.
Starting point is 00:49:55 But now we're just at another juncture where we have to think, okay, how do we do education in light of this? Because you're right. The small essay is dead. It's gone. You would say that? It's dead. It's gone. Really? You know, yeah, it's dead. It's dead. Is that a problem, though? I mean, my immediate thought is, that's horrible. But I want to step back. If you talk to, like, you know, liberal arts educators, I mean, basically what students are doing anyway is they're going to the library and
Starting point is 00:50:22 they try to find the quickest source possible. And so what generative AI did was just really kind of give them what they're already doing, just in a quicker format. And like professors know, right? They know. I've talked to you before, bro. I know how you talk. I know you did not write this. There's not a single error in it. You know, come on. Like, there's no way. Or, you know, learning how to cite Turabian or something like that. You just... come on. And so maybe it even goes back to handwriting. And you think about that, but, I mean, of course you could prompt it and then write it out with your hand, or you could program a robot to write it out. I understand that. But cool. Like, I think also on the backside of that, if somebody has the ingenuity to build a robot and code that, there's something to that as well.
Starting point is 00:51:11 And I think that's kind of where we're headed because that's actually much harder to do. If you think about just going on GitHub and downloading a script, okay, that works, yes. But it also assumes that those things are maintained by the writer. And so you're always going to have something to write and you can't just depend on somebody else's script. And so I know that doesn't sound like research to some people, but it is.
Starting point is 00:51:40 I wonder if, just thinking out loud, I didn't think about this until I asked the question, but I wonder if it could push more towards oral defense. I'm trying to think of educators, how they're going to guard against this. The scenario you laid out is going to be kind of impossible to police, but if you throw a student in front of the class and say, all right, explain to us your 5,000-word essay on Abraham Lincoln, we're going to ask questions and push back and make sure you actually know the material. We did that. When I was teaching at Eternity Bible College, we recognized... This is one thing I loved about the college: they recognized that we had really, really smart, brilliant students that could not write a research paper. But they can create a five-minute video. They can exegete a passage through art, like a painting. They just couldn't.
Starting point is 00:52:34 They weren't good at writing a paper. And some people say, well, that means they're not smart. No, no, no. Our humanity is so much more creative. Or some people could defend it. One guy, I think he failed a test. And the teacher said, I know this guy. The questions he was asking in class, everything about him in class, it's like he knew this material. But he got an
Starting point is 00:52:49 F on the test. So he took him, you know, into his office, and just had a conversation. The student didn't even know he was being tested, but the teacher just, you know, had a conversation with him and basically was seeing if he knew the material. And he knew everything. And so he ended up giving him an A on the test. All that to say, I wonder if we might go back, almost coming full circle back to the days of Plato, whatever, you know, when you wouldn't be writing the paper or whatever; you'd just be having a conversation, and that's how you, you know, showed that you have internalized the material and gained, you know, knowledge and wisdom. I don't know.
Starting point is 00:53:27 I think so. I think so. And I think it also, it actually lifts up the value of that human interaction as well. And so people fear, like, well, you know, Sherry Turkle and others, you know, bemoan a lot of this tech and the disembodiment of it. But at the same time, it could actually lead to more embodied presence and love and appreciation that,
Starting point is 00:53:49 hey, this was made by human hand. This wasn't, you know... And we already value that, right? A piece of furniture that's mass-produced is not the same, value-wise, as something that was made by an artisan by their hand. And I think likewise, what you're saying, Preston,
Starting point is 00:54:03 about the importance of speech and communication and the beauty of human voice and even error, I think that that will be something that we miss and something that we'll pick up on and cue like, hey, this is actually more valuable to me than knowing that this is just an algorithm or a script or a chatbot. And I actually value that.
Starting point is 00:54:24 But it's like I said at the beginning: that doesn't mean that they can't work in tandem together. And so, like you're saying, I think the death of the small essay is great. It's great. Let's force people to think creatively in the way that they're going to actually have to think in the workforce. They're going to need research skills no matter what, but in what world are they going to need to write something about Abraham Lincoln? It just doesn't, you know, make sense. And my whole generation was told we'll never have a calculator on us. Remember that? You know, doing quadratic equations and different things
Starting point is 00:54:59 and trying to do it in your head and all that stuff, and you're like, I'll never have a calculator. Now we have one. So it's not a threat to mathematicians. Yeah. You know, it's not a threat to accounting. And so, I see, maybe it is, but you don't have to see it that way. And like I said, I think that's focusing on the mirror instead of the window. And there's a place for it. I'm not saying that there's not. But as long as our politicians understand that we're concerned about this. But for educators, man, I think it gives you a lot more freedom to be creative in the classroom and to offload some of that stuff that is burdening our professors.
Starting point is 00:55:44 And they're overworked, they're overrun. And give them some humanity back to be able to bring into the classroom. So again, David Gunkel is doing this in his comms department, and he's teaching people how to code, write algorithms, and do AI.
Starting point is 00:56:01 Right. And I think if we don't prepare people for what they're actually going to meet, it's a waste of their money and time. And it's a waste of institutional resources, which I don't think is God honoring. And so I have a friend who's in media studies and they want all the students to do, shout out to Jared, love you, bud.
Starting point is 00:56:20 They want all their students to do news media. And he's like, these kids aren't gonna work in news media. It makes no sense for them to do news media. And they want to work in film. They want to learn how to color grade and do all these other things. They're not going to learn that in news film or newscasting, okay? So why are we wasting their time? And I think in some ways that is what AI is teaching us, right? And it goes back to Steve
Starting point is 00:56:46 Williams. They said the same things to him in the early 90s: this is a waste of time, you're putting people out of a job, we don't want that. But how many people work in those departments now? How many people work for Disney, Pixar, and other production houses? It's massive, bro. I mean, it is massive, and they make really, really good money. So if we just flip it in a positive way: how can we shape these models in a way that honors humanity, protects humanity, but also gives people a practical tool to embrace? But I'm also looking back as well. I'm thinking about how the older generation has to understand how this might be used to manipulate them. Somebody's grandson could call them, right? This doesn't take long to do. You could record
Starting point is 00:57:40 and train a kid's voice, call up the grandma, and say, hey, I need $50. You best believe that's about to start happening. And emails are going to be harder to discern as phishing attacks or not. Humans are just really easy to manipulate. I think that's the concern. I'm not worried about AI or robots. I'm worried about humans.
Starting point is 00:58:03 No, that's what I mean, too. It's more the social side. I mean, I look back and we know, right? It's been shown that social media has largely reduced our happiness, if I can put it like that. We've seen The Social Dilemma, the documentary, and we just can't stop. We know that the more we scroll, the more depressed we get. I'm not saying every single individual case, but overall, we know that the massive increase in anxiety, depression, loneliness, and suicidality, especially among teens, is in part linked to social media. And we still give our kids smartphones at 13 or 10 with all this. So it's like,
Starting point is 00:58:47 we know it's killing us, again, using an extreme term. Inasmuch as we've learned or haven't learned from social media and internet stuff in general, we know that watching hours and hours of polarized news media makes us more angry. It doesn't motivate us to love our neighbor if they're on the other side of the political aisle. We've seen churches divide over all kinds of stuff. So we've screwed up with our use of social media, largely speaking. Not that it hasn't been used for some good. I'm on social media.
Starting point is 00:59:24 So I'm like, well, okay, strike one. We didn't learn with this one. So I guess I don't have a lot of faith in us managing it. Like you said, as long as we use it well, as long as we're aware, I'm like, yeah, so far our track record isn't that good. I was going to ask about sermons. Already, I mean, just the way our ecclesiology is wired in most churches, pastors have so many other things on their plate than praying and studying and discipling. They go from meeting to meeting to meeting, trying to figure out how to keep people in the doors,
Starting point is 01:00:00 keep the budget and everything. And I'm like, of course they're going to be writing sermons through ChatGPT. There they are. I had a buddy who tried it, who is very much kind of opposed to it. But he's like, all right.
Starting point is 01:00:12 And he entered, like, write a sermon on this passage, whatever. And within 30 seconds, he got a sermon that he says is about 90, 95% of what he would end up saying. Is that a bad thing? I kind of think it is. There's something about, call me old school, but a pastor just marinating in a passage for several hours during the week
Starting point is 01:00:32 and then connecting it to his life and how he's living it out or not living it out, and then all of this raw humanity comes out on stage. It doesn't have to not be that, though. You're still a part of that. And I think, too, a lot of pastors plagiarize, and that doesn't come out— No, Josh, that never happens. Nobody ever takes, you know, name your favorite pastor.
Starting point is 01:00:57 So, I mean, there's already lots of harms out there. There's bullying you can do in the pulpit. So I think I agree. I don't want to just prompt my sermons and learn better ways to prompt a sermon. But at the same time, with writing, I don't even write manuscripts, so maybe I don't even have a dog in that fight. I just do an outline, and sometimes it's half a page, and I just kind of go. Now, my sermons
Starting point is 01:01:24 are probably terrible. I don't know. But as long as I'm loving my people, I think that's more important to them, that there's an embodied presence connected to the sermon. I think that's more important than what I actually do on Sunday morning, because 95%, 98% of it, just being honest, is probably forgotten by the next week. Now, that's not a justification for not doing the work, because I do believe that in the preaching event, there's something transformative happening.
Starting point is 01:01:55 In that moment, I believe something connected to the Holy Spirit is happening. It's a great thing. I enjoy doing it. I think some of what I hear, though, and I'm not saying this is you, Preston, is a fear that, one, I'll be replaced by a machine, which I don't think is the case at all, or that I will lose some of what I value about that studying process. And it kind of goes back to what you're saying about students writing essays: well, did you actually do the work that you said you did, that you're getting paid to do? There's an issue of ethics and stewardship in
Starting point is 01:02:35 that. And did I actually exegete this passage faithfully? Do I really understand it, or is this just a prompt, right? And so I appreciate that. I do. And I don't want that to be our default, ever. I think there's beauty in writing sermons for particular people that an AI model, unless it's trained specifically to you, to your people in your region, cannot appreciate or understand. Part of me is like, I wonder, you know: write a sermon on the Christian need to address poverty as a Christian virtue, and write this sermon in the style and rhetoric of Martin Luther King. To me, I'm like, oh, I would show up for that, because, to me,
Starting point is 01:03:24 he's top five communicators in the last several generations. I mean, it's just the raw combination of intelligence and rhetoric and cadence. I think rhetoric in the pulpit is largely lost, maybe more in white churches or whatever. But to me, like, yeah, I'm getting kind of bored with some of the same ones preaching. Maybe I want to hear some sermons in the style of MLK or someone, where the rhetoric is actually a lot better than what the human could have produced. If it's still true and accurate and good and moves people with the truth, then is that a bad thing?
Starting point is 01:04:02 I'm thinking out loud. I don't know. I still like the thought of a pastor by candlelight, poring over the original text of Scripture or whatever, but... I don't think that'll be replaced. I don't. But I think one enhancement to that process would be to take someone's sermons from previous Sundays and put them into a model. And this is what my friend at Pulpit AI is doing. It's not writing sermons, but it's producing material based off the sermon.
Starting point is 01:04:36 And so it's producing discussion questions. It's producing content for your social media. I think that actually does help our pastoral staff a lot. Pulpit AI? I'm not familiar with that. That website's probably going to blow up right now, tons of pastors racing to Pulpit AI. No, there's already over a thousand people signed up for it. And it uses generative AI. And so my concern with all this, right, is tied to actual practical concerns, again going back to what we've already talked about with ChatGPT. So we have to take that into account, too, with what we're doing.
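For a concrete picture of the pattern he's describing: Pulpit AI hasn't published its internals, but a minimal sketch of this kind of sermon-repurposing step, assuming the OpenAI Python SDK (the model name, prompts, and file name below are illustrative, not Pulpit AI's actual setup), might look like this:

```python
# A minimal sketch of the sermon-repurposing pattern described above.
# Assumptions: the OpenAI Python SDK (v1) and an OPENAI_API_KEY in the
# environment; the model name and prompt wording are illustrative guesses,
# not Pulpit AI's actual stack.
from openai import OpenAI

client = OpenAI()

def sermon_followups(transcript: str) -> str:
    """Ask a chat model for follow-up material grounded in one sermon."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do here
        messages=[
            {
                "role": "system",
                "content": "You repurpose sermons. Stay grounded in the "
                           "sermon text provided; do not invent theology.",
            },
            {
                "role": "user",
                "content": "From this sermon transcript, write five "
                           "small-group discussion questions and three "
                           "short social media posts:\n\n" + transcript,
            },
        ],
    )
    return response.choices[0].message.content

# Usage: print(sermon_followups(open("sunday_sermon.txt").read()))
```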
Starting point is 01:05:09 And so if we're asking, is this God-honoring? Okay, well, does it honor everything else that we were already trying to honor in our position as stewards, as co-creators under the headship of Christ? So we have to take all that into account, but also our people. And I foresee this happening, too: if I'm in a church that says we don't want AI-generated material in the pulpit, you need to respect that. Okay, well, what about social media?
Starting point is 01:05:44 I think we need to segment it off. And there's a book called Metachurch as well that kind of gets into that. I've talked to tons and tons of pastors about this, and the initial reaction is: we can't replace what we do. I'm on board 100%. Especially post-COVID, I did not sign up to be an online church pastor. That is not what I signed up for. I'm with you. But at the same time, nobody is arguing that AI should replace those human elements, touch, embodiment, love, right? You can't do that with an AI model. So that's a non-issue for me, because it's not going to happen in that way unless you're just really trying to force it. And I think there might be some missional contexts where that might be okay for a season, thinking about places where there isn't the gospel in that language yet. There's lots of stuff that we could use large language models for that we're not doing right now. So that's a part of generative AI.
Starting point is 01:06:51 We could use it for good. I think about resourcing humanitarian efforts. All those are beautiful things that the church could be a part of, but we have to be a part of the filtering and data annotation as well. We can't just say, hey, we're going to expect Sam Altman to understand our theological tradition. That's our part in this. So we have to do the hard work of training it. And they say, okay, Josh, you go do that. Well, I can't do it by myself. Do you know how massive the Bible is to try to put into tokens? And that's what these models use. Dude, that's a massive project. There are new industries right there, just trying to tokenize the Bible. And we're just not looking at it that way.
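To give a sense of the scale he's pointing at, here's a minimal sketch of counting the tokens in a plain-text Bible, assuming the tiktoken library (the file name is hypothetical):

```python
# A minimal sketch of what "putting the Bible into tokens" means.
# Assumptions: the tiktoken library; "bible.txt" is a hypothetical local
# plain-text file of the whole Bible.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer behind recent OpenAI models

with open("bible.txt", encoding="utf-8") as f:
    text = f.read()

tokens = enc.encode(text)
print(f"{len(tokens):,} tokens")
# An English Bible runs to several hundred thousand words, which lands
# on the order of a million tokens with this kind of encoding.
```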
Starting point is 01:07:42 So that's my biggest challenge: to see it through the lens of Paul, through the lens of the disciples as they're approaching something new. And you're going to face opposition in it, for sure. But I mean, my friend Michael, who's the Pulpit AI guy, like, people are threatened by this.
Starting point is 01:08:02 The heads of publishing houses are like, you can't do this, because you're going to take away discipleship and devotional material from our writers. Maybe that's true. Maybe Lifeway does take a hit. Maybe B&H takes a hit. I don't know. And I don't want anybody to lose money in this. But at the same time, we're all using AI, and you're lying if you say you're not. And I don't mean intentionally using it, either. You're using it in systems that you have no control over.
Starting point is 01:08:33 You're using it in your phone. You're using it in your banking. You have no control over how encompassing this system is. Same with Wi-Fi: who in our churches was over that decision, right? They weren't, because unless they were on the IEEE, they weren't over that. So that's kind of where we are. And I just think we can shift just a little bit and start to think, okay, how can I use this to benefit my people? Knowing that I'm not going to push it away,
Starting point is 01:09:06 knowing that some type of monasticism or asceticism is probably not the best approach to take with this, because it's going to actually hurt us and our people, because we're not giving a valid voice to the concerns. You know, I really believe OpenAI should be under litigation for what they did. Releasing generative AI to the public the way that they did, I think, was a harm to humanity. I really do. And I think we weren't ready for that. We weren't ready.
Starting point is 01:09:37 And I mean, I understand. But they knew. They knew. You can't tell me they didn't know, because months and months prior, they were using workers from Kenya to train these models, doing the data annotation. What that means is they were saying, okay, you guys, and they didn't tell them this either, we're going to pay you $12 an hour, and we want you to read content to the model to train it. Okay.
Starting point is 01:10:07 So, trigger warning: the models don't know what rape is. They don't know what pornography is. They don't know what suicide is. You have to go in there and read the most graphic, horrific text that we can find and scrape from the internet so that the model knows what to filter out. They paid people to do that. And it destroyed families. I mean, it led to harm. And those workers were just dropped.
Starting point is 01:10:30 And not only that, there's a lot of predation in these tech companies. So it's not only understanding the technology that's important, though I think that's a massive part of this education, how it works. We also have to teach people how these companies work. And it's very dehumanizing in a lot of ways, because, like you were saying earlier about this cost of production and demand for productivity, I think the bigger concern for me is that it goes against Sabbath. There should be built in, I mean, there are in some appliances, there's a Sabbath mode, and we need to build that into AI as well. It needs to say: I'm not doing prompts today. Come back tomorrow. Your prompts can
Starting point is 01:11:21 wait. Or, you know, that's something another generation can be trained on. And so we had this idea, and I talk more about this in my next book, about giving rites, R-I-T-E-S, to some of these systems. It might be a way to protect us from us and to protect us from these companies, and to make that part of the regulation, because this is a human rights thing. It's not just the issue of worker rights; this stuff is going to violate a lot of people if we're not careful. But I also think, on the flip side of the coin, we can help a lot of people. We can use predictive AI to find patterns of cancer and skin lesions. We can find that earlier. We can give doctors more time to
Starting point is 01:12:15 be with their patients, because the AI model has already done all the unnecessary paperwork and all those things. Right? How many people feel like they need less time with their doctor when they go to the doctor? Wouldn't it be nice if your doctor was not rushed to pump out all these patient outcomes day after day, and they just had time to sit with you? Maybe just an extra two, three minutes just to say, how are you doing? I think they'll fill that time with more patients. Yeah, that's true. That's true. Or, I don't know.
Starting point is 01:12:48 But maybe that could take some of the load off of having to pay extra, you know, so that's... I mean, seeing more patients isn't necessarily wrong when you need to see a doctor and you have to wait less to get in. Maybe that's more so on the insurance model. Yeah. Josh, this has been a fascinating conversation. I've taken you over the time I invited you to be a part of, so I apologize for that. Where can people find... Oh, so the book again is Violent Tech: A Philosophical and Theological, finish it, Investigation. Investigation, okay. I would highly encourage people to check it out. It addresses similar
Starting point is 01:13:25 stuff. I mean, we kind of went all over the place, but especially towards the end, some of the stuff you're talking about there is stuff you expand more on in your book. Where can people find you and your work? You got a website? Yeah. Joshuaksmith.org. All the relevant links and stuff are there. And yeah, just reach out to me. I'm pretty accessible. Would be glad to help you in any way. Thanks for coming on the show again, man. Really appreciate it. This show is part of the Converge Podcast Network.
