Theology in the Raw - Is Artificial Intelligence Good, Bad, or Neutral? Nick Skytland

Episode Date: March 8, 2026

Nick Skytland is Vice President of Gloo Developer and AI Research, leading initiatives to shape open, values-aligned AI that supports human flourishing. Before joining Gloo, he spent over two decades at NASA as Chief Technologist, advancing early-stage technologies and building some of the largest open innovation communities in history. He is also co-author of What Comes Next? Shaping the Future in an Ever-Changing World. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Transcript
Starting point is 00:00:00 The number one use of AI in 2024, people were using it to generate ideas. It's like, hey, give me a recipe. Here's the refrigerator. Write me a song. Give me the directions. The number one use of AI in 2025 was therapy and companionship. And now it's like you can process your emotions. It can ask you reflective questions.
Starting point is 00:00:19 It isn't really hard to imagine that we as humans are going to use AI to mediate our experience in the physical world more and more. Hey, friends, welcome back to another episode of Theology in the Raw. My guest today is Nick Skytland, who is Vice President of Gloo, G-L-O-O, Developer and AI Research, leading initiatives to shape open, values-aligned AI that supports human flourishing. Before joining Gloo, Nick spent over two decades at NASA. That's crazy. Dude worked at NASA. As Chief Technologist, advancing early-stage technologies and building some of the largest open innovation communities in history. He is also co-author of the book What Comes Next? Shaping the Future in an Ever-Changing World.
Starting point is 00:01:12 Man, I love this conversation. Nick was super knowledgeable, super kind and wise and down to earth. And we talked a lot about, yeah, is AI going to kill us or save us? Definitely not going to save us. It might end up killing us. I don't know. We'll see. We talked about all those things in this episode.
Starting point is 00:01:30 Also, the Exiles in Babylon conference is right around the corner, April 30th to May 2nd. All the info is at TheologyInTheRaw.com. We have added, well, I mean, I've been talking about this, we didn't recently add it, but we are doing a pre-conference on artificial intelligence and the church. How should Christians think about this ever-present, ever-pervasive reality? AI is not just upon us. It is already here. It's everywhere.
Starting point is 00:01:58 It's part, as Nick says in this conversation, it's just, it's here. So we have to deal with it. We can't think like, oh, maybe we don't want AI. Well, AI is just going to be part of our life forever, I think. So we have to figure out as Christians how to respond. We're going to have an awesome pre-conference where we have several scholars on AI discussing these questions. We also have many other really engaging sessions on immigration, on mental health, Christians
Starting point is 00:02:24 in war, the reliability of the Bible. And I cannot wait to see you there. I love, love, love the Exiles in Babylon conference. So all the info is at TheologyInTheRaw.com. I would highly recommend, if you can make it, to come in person. It's a unique experience. If you can't make it in person, we also have the virtual option. So, TheologyInTheRaw.com. All right, please welcome to the show for the first time, the one and only Nick Skytland. Nick, welcome to Theology in the Raw. I say this a lot, oh, I've been waiting to talk to you, whatever. I mean, and that's true. Every guest, I'm excited.
Starting point is 00:02:58 But you're an AI expert, technology expert. You have a long list of credentials, both just, I mean, yeah, with experience and working in so many different areas. Is AI going to save us or kill us or something in between? You know, there's already a lot of books written about that very topic. Yeah. You know, there's people taking sides on both sides of this debate. And kind of my point is, I don't know if it matters. I think what matters is: AI
Starting point is 00:03:28 is here. It's the AI moment. It is happening. It's the reality that we live in. Whether it's you think of it as good or bad, technology is neutral. It's, it's the human heart that wields the hammer that wields the sword in a way that can be impactful or not. And so I think we're just, we're just living in that moment, right? So you're so it's it's inevitable. It's like it'd be like asking a question, you know, should should the internet exist or not? It's kind of like it's here. And we're not going back, so we need to learn how to manage it. Yeah, exactly. It's the next step in that technological.
Starting point is 00:04:04 I'm very much, very optimistic. We are people of hope, right? We know how the story ends. So let's wield this technology for good and for God's purposes on this planet. And so I'm very optimistic about all of it. I've heard people say, people who are thoughtful, a friend of mine actually, who was on the podcast before. I actually hosted a debate about AI and Christians between somebody who is more pro and another person who is more against. And the person that was against argued that AI is not just the latest technological advancement, like the printing press and the car and the internet, where it kind of disrupts society for 10, 20 years, everything settles down, new jobs are created, old jobs are gone, whatever. It's a disruption, but it's just how things go. But he said
Starting point is 00:04:58 AI is not simply another technological advancement. It is categorically different. And that's, and I don't want to, you'd articulate it better than me. And I was like, oh, okay. Well, if it's, if it's another technological advancement, then, okay, we can as a society hang with this. But would you put it in a category of, yeah, this is, it's different, but it's still another technological advancement?
Starting point is 00:05:22 Yeah, I was actually trying to find the great quote I had on this. I wanted to get it right. It's essentially, you know, the quote about, yeah, teach a man to fish... Or, I'm sorry, give a man a fish and you'll feed him for a day. Teach a man to fish. Yes. And you'll feed him for a lifetime.
Starting point is 00:05:43 Teach an AI to fish and it will teach itself biology, chemistry, oceanography, evolutionary theory and fish all the fish to the point of extinction. Right. So that might prove his point a little bit. Yeah, when I think, so I will just, I will just say that humans, like for as far back as technology was created, we have this both fascination with technology and this fear of technology, right? Humans are both captivated and a little unsettled by the idea of technology in general, let alone machines that mimic us that can think and decide and maybe even act on their
Starting point is 00:06:21 own terms if AGI becomes a thing. And so I think, you know, I think there's, again, pros and cons to it. But it is the reality we find ourselves in right now. Okay. Give us the lay of the land. I mean, for those who are like, I, yes, I kind of know what AI is, but I wouldn't be able to maybe define it or articulate it. Give us to start like 101. What is AI? Where are we currently at? And then what are some new developments? Like, what's going to come on the scene in the next weeks, months, years, you know. Yeah. I get this question a lot.
Starting point is 00:06:54 It's a good one. This is a great way, by the way, to use AI. Like, so if you're using ChatGPT and there's a topic you don't understand. Say,
Starting point is 00:07:04 Hey, explain it to me. Like I'm seven years old, right? Yeah. Oh, it's amazing. Yeah, it is.
Starting point is 00:07:08 It's a good way to use it. So AI, from that perspective, it's like a smart robot brain that learns stuff, solves problems, and helps people do things faster and easier, right? But kind of taking a step back, AI is the overall, just academically, we think about that as the science and engineering of making intelligent machines, building machines that think like humans, right? Right. And then you've probably heard all these other terms too, like machine learning. So machine learning is a major breakthrough in helping achieve AI. And that's where we use computers and math
Starting point is 00:07:44 and algorithms to detect patterns in large data sets. And that leads to other things like deep learning and large language models. So deep learning is this idea of using neural networks to essentially think about the way the brain works and neurons work to make sense of data and the world around you and process that complex data. That's deep learning. And then generative AI is what we're all kind of living in the moment of. This is large language models, which is very similar to your brain, right? It's a very large data set based on language, on human language. And so it's software that ingests this data, that's human language data,
Starting point is 00:08:30 and then you can interact with it with applications like chatGBT. And so it's kind of magical. It's like a black box. It's like trying to understand the brain. That's why in the field they talk more about not creating AI, but growing AI, growing large language models that seem like magic. What are some of the concerns you have with AI? What are some of the negatives and then what are some of the positives? And I've got several questions about what I see is some very much potential negatives.
Starting point is 00:09:08 I'm the farthest thing from an expert, so I'm not even sure if I'll be able to articulate the question, but there's things happening where I'm like, ooh, that sounds horrifying if that's actually happening. Like, where is that going to lead us as a society? But yeah, I guess general pros and cons, as you see it. Yeah, I mean, how theological do you want to get on this? Or do you want me just? Very. This is Theology in the Raw.
Starting point is 00:09:29 Yeah, go deep. I know. So I'm excited to be here. So I think my biggest concern, if you will, is just, you know, how humans have used technology over time, right? Like, again, think about it like a hammer. A hammer is not good or bad, but we can wield it in different ways. We can use it to build a house.
Starting point is 00:09:51 We can use it to break a window, right? And so the danger isn't necessarily in AI. AI itself is not evil. It's neutral. But we humans are really good at using technology in ways that either support us or undermine us, right? And so the danger is, when you have something so powerful arriving so fast,
Starting point is 00:10:17 that's almost like replacing wisdom and accountability and our moral consciousness, what are we capable of as humans? Again, we're capable of extraordinary good and extraordinary evil, right? And so when you think about technology, theology, I think my concern is what happens when we turn powerful tools into idols. And in scripture, we see idols are something that are made by human hands, granted power they don't have. We then consult them for our guidance and our security, our legitimacy, and then we obey them rather than discern or use them.
Starting point is 00:10:57 And this isn't a new thing. It's just something we humans are really, really good at doing. Yeah. What about, just as you're talking, the hammer and human agent analogy. I like that. But more than just idols, though, could certain otherwise neutral entities be... let me give an analogy rather than trying to explain it.
Starting point is 00:11:26 Like, in uranium is neutral. Enriched uranium is neutral. Actually, I don't know if this is true. saying, you know what I'm going at this. I mean, why not? Yeah. Yeah. But, you know, in the hands of humans who have evil hearts, a nuclear bomb, you know,
Starting point is 00:11:45 made from enriched uranium, is worse than a hammer which can do some good. And if it does some bad, it's kind of like worst case scenario, one person could maybe kill one other person with that. And even that is kind of
Starting point is 00:12:01 you know, fairly rare. So are there degrees of the level of evil that a certain otherwise neutral entity can produce? And is that a concern? And here's, I mentioned this offline, I'm thinking of just, in my space of sexuality, the stuff that AI is doing with online child pornography. I mean, the face of your kids, not yours, but generic yours, on Instagram,
Starting point is 00:12:32 people can right now create a porn video out of your child. It's like actually a growing problem. Or even, or even I think like on a societal level, like you've seen these videos online that look so incredibly real. And they're just not. There was one I saw this guy.
Starting point is 00:12:53 He showed a clip of a policeman confronting an ICE agent. And I was like, wow. And he just opened with that. And I'm like, oh, wow, this is. crazy. Like what? And then he stops, he enters in and says,
Starting point is 00:13:05 that's fake. It's made up. I'm like, wait, what? And I went back, I'm like, how's this fake?
Starting point is 00:13:11 Like, this is like, there's nothing fake about these, like totally AI. Like, the societal havoc that can wreck when there's an AI video of Trump confronting Lindsay Graham behind,
Starting point is 00:13:24 I'm just making this up. Okay, like you can, to the point to where since most of our world is lived online, our news, and everything, our world is informed. Our knowledge of the world now is largely online. If that gets so, there's so much misinformation out there created like that to me,
Starting point is 00:13:43 I could just see wars, like nuclear wars, being started from that. And like you, I'm a positive person, but as my mind goes to the potential that could happen from this, it's like, okay, maybe it's neutral, but is it? Like, is it enriched-uranium neutral in the hands of? Yeah. I don't know. Yeah, help me out here. Man, I think you summarized it pretty well, right? And just, even the way you presented that, that's the tension that we're living in right now. I always say perception is reality, right? So is artificial general intelligence here? Are machines going to have their own intent and make these decisions and kill us all? I don't know if it matters when it arrives. The perception is already that humans are having relationships with machines, right? That humans are using these machines in powerful ways, and those ways can be really destructive to societies. You can use these machines. You can use AI to have breakthroughs in chemical warfare and biological warfare.
Starting point is 00:14:41 There's a really great essay just dropped this week by the co-founder of Anthropic called The Adolescence of Technology. He published one previously that was talking about all the positive benefits of technology like AI, but this one was actually talking about these very things. Like, what would happen in the hands of a wrong person? One of the most powerful technologies ever, you know, maybe there was some safeguards before when you think about nuclear weapons as an example. Right.
Starting point is 00:15:12 Kind of have that stuff locked away. And even the smartest people can't act on it because the technology itself is locked away and it's hard to get to. But when AI is accessible to anybody and everyone on any device, you know, there's a lot that could happen. There are seasons in my life when I'm always on the go, whether it's traveling on planes or just going from one meeting to another. And in these seasons, it's sometimes hard to eat healthy.
Starting point is 00:15:42 Sometimes I just need to grab a snack and head out the door, which for me, one of those snacks is Mosh, Mosh bars. Mosh joined forces with the world's top scientists and functional nutritionists to go beyond your average protein bar. Each Mosh bar is made with ingredients that support brain health like ashwagandha, lion's mane, collagen, and omega-3s, plus a game-changing brain-boosting ingredient that you won't find in any other bar. Mosh bars also actually taste great. I can speak from experience. And the best part is they donate a portion of all sales to gender-based Alzheimer's research.
Starting point is 00:16:16 If you want to find ways to give back to others and feel your body and your brain, Mosh bars are the perfect choice for you. Head over to moshlife.com forward slash theology to save 20% off, plus free shipping on the best-sellage trial pack or the new plant-based trial pack. Okay, that's 20% off plus free shipping on either the best-sellage trial pack or the plant-based trial pack at M-O-S-H-L-I-F-E dot com forward slash theology. Thank you, Mosh, for sponsoring this episode. Have you ever woke up and found yourself stuck to the mattress because your old sheets have lost their elasticity
Starting point is 00:16:50 so they slip off anytime you move? You don't always realize how bad your sheets are until you need. to replace them. So if that's you, I would invite you to check out bowl and branch signature sheets. Bowling branches signature sheets are made from 100% organic cotton and designed to hold their shape, stay breathable, and feel super soft night after night. You'll fall asleep faster, stay comfortable all night, and actually notice a difference the moment you get into bed. And they don't wear out. They somehow get softer and softer and more comfortable with every wash. I love my bull and branch sheets truly and I know you will too sleep sound with bull and branch get 15% off your first order
Starting point is 00:17:30 plus free shipping at bollin branch.com forward slash t iTR with code t i t r that's boll and d branch b o l a and d branch dot com forward slash t i t r use the code t i t r to unlock 15% off exclusion supply yeah it's not so much like i feel fair well maybe this is naive i feel fairly confident that me personally could use AI for good. I don't, you know, AI doesn't do my writing, but I can consult AI on like, hey, what's another synonym for this word? Like a glorified thesaurus. It's great. I use, you know, chat GPT for like, almost instead of Google for certain things because, gosh, to sort through all of it, you Google something and you get a list of, you know, pages and pages and pages of articles and to get the summary from AI or even like I've I asked
Starting point is 00:18:28 AI, like, there's some trusted news sources that I follow, and I said, tell me, um, you know, the history of U.S. relations in Yemen over the last 20 years according to these five news outlets, because I trust them. And within seconds, it's like, that I would have to comb through these websites finding, whatever. I mean, it was incredible. Yeah, my kids, my kids are like, Dad, is it really true that way back in the 1900s, you used to have to type something into Google and, like, pick and poke at all the sources to, like, figure out what it says? Back in the 1900s!
Starting point is 00:19:02 It just gives you the answer. I know. And even there, it's like you can't totally trust it, you know, but it's like, it's giving you... Yeah, it's pretty good. Or, even, I'll shut up after this, but I have an ankle injury that I've been nursing for six years. Bad sprain. Like, tore every ligament.
Starting point is 00:19:18 So I've been, I've got kind of a weak right ankle. It kind of flared up a few months ago, and I'm training for a marathon. And I literally told ChatGPT, okay, here's all my symptoms. Here's my injury. Here's how long ago. What do you think this is? And I've had an MRI, so I've had some diagnosis, whatever. But she, my ChatGPT is a she. I don't know. Isn't that interesting how we project that on it? I know. Well, I think because, my, who's the, um, Alexis, Alexa? Alexa, was it a she? Alexa, yeah.
Starting point is 00:19:56 Yeah. So I actually don't let ChatGPT talk to me because that'd be too weird. But, um, yeah, gender neutral. Um, I mean, it said, this is most likely your injury. It's posterior tibial tendon dysfunction. And it gave me graphs of where the ligament goes up, and this is probably where your pain is, and your arch is affected this way. It was to the T exactly what I had. And then I said, can you recommend what's the best ankle brace for this injury? Boom. Number one brace. She says you've got to get a BioSkin. This is tailored to your injury. I bought it. I use it. I can now run like 10 miles on it because of this brace. And then, like, how often should I use the
Starting point is 00:20:38 brace? Like, should I wean it off, whatever? And she, or they, is like, you know, you want to ease back into it. Eventually you want to work your way away from a brace. You want to strengthen the ankle, but it's good as you're building back to use a brace, especially for longer runs. I mean, I've gotten so much out of it. People could say, well, that's not real medical advice. I'm like, okay, I'm not going to say this is replacing a doctor, but I don't know. It's free. Would my PT have given me a different diagnosis, and, you know, $1,000 later or whatever? And like, yeah, it actually has worked really well. Well, and I don't know. Could you get your PT or your doctor on the phone? I mean, it's a little bit easier to go to
Starting point is 00:21:14 Well, and I don't know. Could you get your PT or your doctor on the phone? I mean, it's a little bit easier to go to. AI and ask the question, right? Yeah. And then you extrapolate that. Like, well, what if you get into a fight with your wife? It's kind of easy to go to AI and be like, well, I can't go hold of my therapist or my
Starting point is 00:21:28 counselor or my pastor. I'm just going to ask the question here. Or what if you're struggling with something theological? Like, isn't it just kind of easier to go to AI and get the answer? I like that. There's a really great Harvard article that was published in 2024 that said, hey, here's how people are using AI. And it was as you might expect, the number one use of AI in 2024 when it was kind of first way back in the day, do you remember 2024? That was so long.
Starting point is 00:21:57 People were using it to generate ideas. It's like, hey, give me a recipe. Here's the refrigerator. Write me a song. Give me the directions to a place. The number one use of AI in 2025, early 2025, like way back in the day, almost a year ago, was therapy and companionship, right? And what we're seeing, which makes perfect sense is that, you know, it's the most influential technology over time. It's super easy to get information, to share information. We're becoming increasingly reliant on it. We're trusting in the results of it.
Starting point is 00:22:34 And now imagine a tool that can also, it's easy to talk to. You're struggling not to call it a her, right? And now it's like you can process your emotions. It can ask you reflective questions. You start to ask it to help name your feelings, because maybe you're having a hard time doing that. You're asking it for advice to navigate relationships. And so it isn't really hard to imagine that we as humans are going to use AI to mediate our experience in the physical world more and more. It's just a reality.
Starting point is 00:23:06 And it's not something far off in the future. And I have example after example of people who are relying on it right now. A really good example of this is using Alexa to tell my daughter a bedtime story. You know, like I ran out of ideas. So it's like Alexa, can you keep it going? And then Alexa tells a story. And then it says, hey, Ada, good night. I love you.
Starting point is 00:23:28 And Ada, my daughter, is like, oh, I love you too, Alexa. My daughter, growing up in this digital world, does she know that Alexa's not a disembodied human, that it is a machine, that it's not a her? Now just play that forward in all of our human relationships going forward. And I think you can draw parallels to social media, right? The promise in a fallen world was that technology was going to connect us all. And I don't know if anyone's been on social media lately, but it's kind of a dumpster fire. We don't feel more connected, right?
Starting point is 00:24:04 So now play that forward with hyper personalized experiences that are mediated because of technology. that knows you better than you know yourself. See, that's where when it, when the line, when it spills over into not just getting data or facts or getting information, but actually spilling over into like psychology, relationships therapy, that's what I, that just feels eerie to me. It's one of the reasons why I don't have the voice or whatever. I don't want this thing talking. I don't want to, it already feels human in the way it gives me information.
Starting point is 00:24:41 Like, hey, that's a great question, you know. hear some style. It's just, even the written communication is eerie enough. And that's, even there, I'm like,
Starting point is 00:24:49 oh, just, just, you're, you're not a person. You're not, you know, I'm not,
Starting point is 00:24:53 you know, one time I almost said thank you after like, they gave me a bunch of information. I was like, I'm like, what am I doing? Well,
Starting point is 00:25:00 that way, that way, that way, when I overtakes the world, at least it'll keep you in mind that, oh, that one was the one that said, thank you all the time.
Starting point is 00:25:06 We'll let him away. So I should keep it happy. I, so I don't know, it just on the spectrum of, okay, it's here. So let's just assume, okay, it's not like, should we have it or not. It's here on the spectrum of it can be used for good. It can be used for evil.
Starting point is 00:25:24 I just, am I wrong to be more on the side of, like, I just see more potential, like, really disastrous things? Not necessarily what I would use it for, or maybe most people, but, you know, there's people in the world that traffic kids. What are they going to do when they have AI? I mean, this is old news. I think it's a fair question. Or people that, you know, powerful people that wage wars for financial profit. Like, what are they going to do with it?
Starting point is 00:25:54 It's just like, oh, my word. Like, we need, I would assume we need some guardrails, but who's going to enforce the guardrails? And do I trust their ethical framework to know what guardrails to put in place? Well, man, you bring up so many good things about this. And I think these are the tensions that we have to wrestle through. I'll tell you, it's actually easier to think about the extreme cases of, like, what if somebody uses AI and makes a bomb and now there's, you know, World War III? Right.
Starting point is 00:26:27 I think it's a little bit different. I think the danger is more, it's like a modern-day Screwtape Letters. It's the slow drift. It's just kind of easy and convenient to outsource the complicated human relationship work that I have to do with my wife and my kids and my friends. Just kind of, you know, AI never tells me no.
Starting point is 00:26:46 AI always has an answer for me. AI helps me process my feelings. From a relational standpoint, it's easier to depend on AI. And you kind of play that forward over time. And so then we are asking a lot of questions around, you know, if AI is mediating our experience, what values does AI have? What worldview, if you will, does AI have? If you use ChatGPT or Gemini, does that represent my values and my beliefs?
Starting point is 00:27:19 If it's given me relational advice, should I divorce my wife? Is it giving me a list of pros and cons and saying, do what makes you happy? Kind of from a secular, moralistic, therapeutic theist kind of perspective? Yeah. Or is it representing Christian values that would say, hey, hold up. maybe you should talk to a human about that. That's a really important relationship in your life. And that, that kind of erosion of our autonomy and our outsourcing of our will to machines out of convenience is, I think, a big concern.
Starting point is 00:27:50 And even on the relational level, I do think, I mean, it's already happening, right? People falling in love with their AI girlfriend or whatever. Yeah. I mean, think about the relational dynamics. Why do they love this AI person so much? Because it's always there for them. It exists to make you happy or whatever. And it has no needs.
Starting point is 00:28:11 It has no needs. You never have to listen to his or her or their, like, pains and heartaches, whatever. It's just there to serve you. Think about how that's building the relational muscles in people. Even something as simple as ChatGPT. Like, I mean, this is why I try to keep an arm's distance from this thing, whatever it is. Because it's like, yeah, it could be intuitively building into you certain relational expectations that are just inhuman or unbiblical, not realistic. Relationships are a two-way street, and sacrifice is at the heart of every relationship.
Starting point is 00:28:48 Well, and think about, like, what is the AI built on, right? It's built on the language data of the internet. Think about Reddit giving you marriage advice. Or would you let Elon Musk or Sam Altman babysit your kids for an hour? I mean, as a parent, you know, and I'm not saying anything is wrong with them specifically. I'm just saying they probably have a different value set. Right.
Starting point is 00:29:17 it's a really difficult challenge, right? And that's just my kids, let alone my friends and the environments that we find ourselves in. So I think there's a lot to be concerned about. But let me just go back and say, again, I'm a glass-half-full guy here. Yeah. And here's why. I think, again, we know how the story ends. We're people of hope. God's not surprised by any of this stuff.
Starting point is 00:29:38 It's a huge opportunity for Christians. If we take it seriously to step up, it might be one of the greatest shepherding moments of our lifetime. There may never be anything like this again. And I think it's going to cause people to have questions about their faith, their relationships, their core identity. What happens if you lose your job because AI has now made it more efficient than it's easy to kind of outsource that to AI.
Starting point is 00:30:04 I mean, what does that mean for us as Christians in our identity and all of it? I think there's a lot of opportunities here for us to remind people that we are created in God's image and we have a responsibility in that creation to steward it well, to order it, to organize it, to be an active participant in it. And so I would actually say run full into it. We need to understand it. And then there's huge opportunity in that to reach the world. new ways.
Starting point is 00:30:33 What are some, that's good. Okay, so that's a good segue. Like, what are some maybe concrete, really positives, but benefits,
Starting point is 00:30:41 things that AI can be used for, for good. Well, I mean, I think again, if it's, it's becoming more and more a part of people's lives
Starting point is 00:30:50 than they realize, I think, generally. I mean, the population segment that's maybe adopting AI the most is actually practicing Christians. Like, we are using it not only to find information, but to ask theological questions,
Starting point is 00:31:08 etc. So that is interesting. It's a way for us to reach people in new ways, to open up conversations about theology. And there's a lot of opportunity in it just to rethink missions, as an example. How can we redo or rethink the way we reach the world, the way we do church, the way we engage with our friendships and relationships. By the way, did you read the four-hour work week book way back in the day? I'm still- I'm waiting for the promise of that because I was super excited about only working four hours a week because of technology. I don't think I ever read that.
Starting point is 00:31:44 Is that his thesis? That technology is going to reduce our workload? Yeah, and we keep thinking that, right? It doesn't. Email has added 10 hours to my week. Right, exactly. And so there's a strong argument that maybe this time it's different. But it's kind of played itself out in history that technology is just going to give us some superpowers.
Starting point is 00:32:04 It's not like work is going away. It might shift. The work of the church is still the same. How we do it might change. So that's why I remain pretty bullish on technology and the opportunity we have in that. So you're saying advancements in technology give the promise, and maybe even perception, of reducing our work, and it always ends up increasing it. Yeah.
Starting point is 00:32:30 And always might be too strong. Maybe always, yeah. I mean, I think you can think about farming as an example. In the essay I mentioned earlier, farming was given as an example: back in the late 1800s, early 1900s, 90% of the population were doing the manual labor of farming. And now it's like 2%, because a lot of it's automated. It doesn't mean those people don't have jobs anymore.
Starting point is 00:32:53 It means that they're doing different work, right? Right. There's good and bad that come with that. It's going to change. And I think that's the reality of it. Like, it is here. It's going to change everything from how we learn to how we worship. And so what do we do about that? Now is the time to start thinking about it. It's finally here, folks. After three and a half years of research and writing, my new book From Genesis to Junia is available. Oh my word. As the subtitle suggests, this book represents my honest search for what the Bible really says about women in leadership. I comb through all the major passages that deal with the question of women in leadership. I evaluate the strongest arguments on all sides of the debate. And I do my best to let the text determine what we should believe about this important topic.
Starting point is 00:33:40 From Genesis to Junia is now available wherever books are sold. What's coming down the pipeline? Like, are there things? Because, you know, you're knee deep in this. Like, in 2027, 2028, 2030, what are some things
Starting point is 00:34:06 But what do you, where do you see AI advancement going? It's a good question. I think, okay, so right now at kind of the peak of inflated expectations of AI is the idea of AI agents. Last year was agents, everything. We're still trying to figure out how to use AI to automate things in our lives. So that's a big conversation right now. Real quick. Can you give me an AI agents?
Starting point is 00:34:29 Like, they're doing stuff independently? Yes. Can you give me an example, like of what that might look like? Yeah. So imagine the way you interact with, like, ChatGPT right now: you ask it a question, it gives a response, you're trying to have this conversation.
Starting point is 00:34:41 Imagine asking ChatGPT, like, hey, go file my taxes. And then it goes off and it files your taxes. It comes back an hour later. It's like, hey, I did all the work. Oh, my gosh. Now imagine automating any part of your life. That's the idea of AI agents: you can take a lot of this stuff and use AI to go off and do a thing for you.
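[Editor's note: the agent pattern described here — ask once, the model plans and executes steps on its own — can be sketched as a simple loop. This is a hypothetical illustration, not any company's actual implementation; the stand-in model, tool names, and results below are all invented for the example.]

```python
# Minimal sketch of an AI agent loop: a "model" repeatedly picks the
# next action, the program runs the matching tool, and the result is
# recorded until the model says it's done. In a real agent, fake_model
# would be an LLM call; here it just walks a fixed plan.

def fake_model(goal, history):
    """Stand-in for an LLM call that chooses the next action."""
    plan = ["gather_documents", "fill_forms", "submit"]
    done = len(history)
    return plan[done] if done < len(plan) else "DONE"

# Invented tools for the "file my taxes" example.
TOOLS = {
    "gather_documents": lambda: "W-2 and 1099 collected",
    "fill_forms": lambda: "Form 1040 completed",
    "submit": lambda: "Return filed",
}

def run_agent(goal, max_steps=10):
    """Loop: ask the model for an action, execute it, feed back the result."""
    history = []
    for _ in range(max_steps):
        action = fake_model(goal, history)
        if action == "DONE":
            break
        history.append((action, TOOLS[action]()))
    return history

log = run_agent("file my taxes")
for action, result in log:
    print(f"{action}: {result}")
```

The point of the sketch is the shape, not the tools: the human states a goal once, and the loop (model decides, program acts, result feeds back) carries it to completion without further input.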
Starting point is 00:35:02 And not just in a simple way. Like, it can think, it can reason, it can work through things. Now, the trend is that it can do it faster and faster. Right now, the advances in AI are doubling every four months in terms of how much of a human's work it can do in a given amount of time. So what it takes a human five hours to do, an AI can do in a matter of seconds. Right.
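[Editor's note: the "doubling every four months" claim compounds quickly. Taking the stated rate at face value, a quick back-of-the-envelope calculation shows why "play that forward a couple of years" sounds dramatic.]

```python
# If capability doubles every 4 months, then 24 months contain
# 6 doubling periods, so capability grows by a factor of 2**6 = 64.
months = 24
doubling_period = 4
doublings = months // doubling_period   # 6 doublings in two years
factor = 2 ** doublings                 # 64x growth
print(f"{doublings} doublings in {months} months -> {factor}x")
```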
Starting point is 00:35:32 And that progress is doubling every couple months. So just play that forward to a couple of years from now. What does that mean? I mean, just a couple years ago, AI could barely pass the bar exam or do a math test. And now we know it's really good at all those things, plus some additional things. So play that forward.
Starting point is 00:35:51 It's only going to accelerate. And so that's why people are excited about this idea of, AGI, where AI almost has its own intent. It's making its own decisions, acting really independently of humans. I think that's, you know, that's an interesting thing to think about. The other thing I'll point out is that all of the conversation we're having is based on large language models. That is only a subset of AI.
Starting point is 00:36:20 Artificial intelligence, like I said at the beginning, is a big field, right? So there's a new term now called world models. So when you run out of all the data on the internet, how do you continue to build these models that are data hungry? They need data to do what they do. That is where, you know, sensor data and video data, all the data of the physical world comes into play. And so new models are being trained on the data around us in the world.
Starting point is 00:36:47 And then you can play that forward to think about, well, what would it look like if you could control a robot with AI, and that robot interacted with the physical world? It's like a really smart Roomba, only the Roomba talks to you. And it's an independent agent. I think it's very reasonable to think that these things will be in our very near future. And there's a lot of people working on that type of technology right now. And that's near term.
Starting point is 00:37:14 So five years from now, it's hard to imagine what might be the case. Okay. Talk to me about, like, the ethical guardrails. Yeah. I've listened to a few interviews on this. I listened to one, it was actually, I know both of these are polarizing names, so whatever, but it was Tucker Carlson interviewing Sam Altman.
Starting point is 00:37:36 And Tucker was pushing him, raised some really good questions about the ethics of AI. And Sam Altman was like, yeah, we have, like, you know, a hundred ethicists that are working on this. And Tucker is like, who are they? Like, what are their beliefs? Like, oh, I mean, you know, they're ethicists. He's like, well, I know, but, like, I don't believe in gay marriage. Do they believe in gay marriage? You know, like, what's their worldview?
Starting point is 00:37:59 He's like, I don't know. Like, he's kind of like almost, he was almost like getting really thought about that. It's like, well, they're ethical. You know, it's like, well, who's ethics? Like, you know, are they, do they believe in euthanasia? And are they going to allow people to learn how to euthanize their grandparents or, you know, like, or suicide? Is that abortion? You know, like, ethics is a really disputed category.
Starting point is 00:38:22 So like who at the top is thinking of guardrails to put on the advancement of AI and who are they and what are their ethics and what does this mean? It turned out to be kind of like I don't know. It seemed like it's just kind of open end to a little bit. Yeah. One, there's multiple companies. Sam Alman's an example. He's a founder of a company, Open AI, that makes multiple models, GVT being a model, before being a model. And the way you interact with that model is through a thing like chat chabit, the app, the interface, right?
Starting point is 00:38:57 And so there's multiple, what we call them frontier labs, multiple companies like Google, meta, open AI, XAI is grok. When you talk about the undressing, her or undressing movement, meme that happened right before Christmas, that's Gwok for integrated with X, which formerly known as Twitter. So I would argue they all have bias, not argue, it is well documented. They all have bias because they're trained on data. And when they have bias, they essentially have like a worldview, right? They, they have cultural assumptions built into them. They have values built into them. So the question you asked is like, but whose values, whose assumptions, whose ethics, who's making the decisions, right?
Starting point is 00:39:42 And they all have a little bit of a different personality if you ask them the same question. They might answer some of these things a little bit differently. So a lot of the work that we do at Gloo is related to benchmarks, because benchmarks help us understand and quantify that worldview, the assumptions that are baked into AI. Because we're trying to understand, is it aligned at all with a biblical worldview? Is it aligned at all with human flourishing and the idea of human flourishing? Or is it about performance and efficiency and how well you can do a math test, right? And then, by the way, this all breaks down right now because it's all language. It all breaks down outside of English.
Starting point is 00:40:23 Other languages aren't as represented as much. So then is that fair? From an ethics standpoint, is it fair that AI doesn't represent you, that doesn't represent your values? Is it okay that Sam Altman's model? So you can say Sam Altman's values, that's not quite a fair critique, but you could say, should his values represent you? And like, do you have a voice in that? And it all starts with an understanding of what the worldview and the bias of the model even is.
Starting point is 00:40:52 And so a lot of the work we do is around that. There's a lot of others that are doing work around the ethics and safety of AI. And that's really good work. Right. We don't want AI like creating hate speech and, you know, pornography and erotica. We don't, I don't want that. But they're all kind of setting a moral floor. And so we, again, as Christians, have a deeper responsibility than just a moral floor.
Starting point is 00:41:23 You know, we really want to align this with the elements of human flourishing from a biblical understanding. Yeah. With regard to the intersection between, like, porn and AI, do you see that? I mean, I feel like that is just going to keep getting bigger and grosser and more horrible and more pervasive. Like, with your glass half full, do you see that being reined in at all? Or, I mean, where are we going to be in 10 years with that? I mean, is it worse now, or was it worse in the times of Acts and the Bible, or the city of Corinth? You know, like, I think humans will be humans at any time in history. Technology will allow us to explore some of those things deeper in different ways. And our job as Christians is still the same call and mandate that we have in the Bible.
Starting point is 00:42:18 It's just, it's different. Will humans use it in that way? No doubt about it. In fact, it's like the pornography industry is usually like one of the fastest adopters of new technology, right? And so we shouldn't be surprised by that. It doesn't even take too much to anticipate what's going to come next. Do you think there's ever going to be any like somebody that can shut that stuff down? Is that even an option of this?
Starting point is 00:42:41 I mean, I'm sure it's an option, but is that a realistic option, that somebody somewhere with power, whatever, will come in and say, this is not going to be allowed? I don't even know how they would monitor that. Well, yeah, I mean, think about who has the power right now in that. So the AI companies themselves, governments, influencers, and I would argue the church shouldn't give away its power. It shouldn't give away its voice. So I think the church is a great example of a values-aligned community that can stand up, and it has always been influential throughout history. If it chooses to have a voice, it can be influential in this.
Starting point is 00:43:16 So whose responsibility is it to make sure there's ethical guardrails, safety guardrails? Some of the companies are taking it upon themselves. Some of the companies are not. I mean, it's a direct quote from Sam Altman. Three weeks ago, he was in an interview, and he said, there's a lot of people that want a deep connection with AI, more than we thought, right, at this moment. People like their AI chatbot to get to know them,
Starting point is 00:43:47 be warm to them, be supportive of them. And then he goes on to say, you know, you could stop there and be like, okay, well, so is there a danger in that? Should we stop it? He's the CEO of a company and he says, there's value there. I don't think we know how far we should allow it to go.
Starting point is 00:44:04 We at Open AI are going to give people quite a bit of personal freedom here. Now, that is a decision by one, company and building AI. There's other companies. And so I don't think it's realistic to think that the AI company is themselves. All of them who'd have to kind of come together and say, we are all going to shut this stuff down, not allow. And from whose worldview is pornography wrong? I mean, as a Christian, from a biblical perspective, we have a, we have an answer to that. But from a secular, moralistic, therapeutic deist perspective of the world, is it wrong? And so then the question is should government step in, but we live in a highly unregulated environment in the
Starting point is 00:44:45 United States. Right. And even that's a hard conversation. So you play that forward and you say, look, well, what does that mean going forward? Essentially, is it just a runaway thing? Is nobody going to speak up? And again, I think it's like after a hurricane in Houston, which is where I live, who steps up first when humans are in their in their deepest needs?
Starting point is 00:45:05 It's the church. It's the church that's out there rescuing people and rebuilding houses and feeding people and taking care of the homeless and the widows. It's the church. And so I'm not saying it's only the church, but I'm saying the church has a role to play and a responsibility in here. Even if we don't understand the technology,
Starting point is 00:45:25 we're not going to build a better large language model, right? But we have a responsibility in it. I do wonder if, on the relational side, with all these kind of two-dimensional, basically fake relationships, people are going to be scratching an itch that's never totally satisfied. I mean,
Starting point is 00:45:43 theologically, I would have to say that. You can't have a genuine flourishing relationship with a robot. And I wonder if the church can sort of embody some kind of
Starting point is 00:45:57 not complete, but some kind of technological resistance, and do things that are truly countercultural and foster genuine imago Dei relationship,
Starting point is 00:46:10 that we would create attractive communities where people are, kind of like, if everybody is just stuffing their faces with Snickers bars and sugar, and oh, they feel so good, and then they feel horrible, and their insulin goes up,
Starting point is 00:46:24 and they're just not really happy. And then they go to, you know, the church, and there's, like, all this healthy, delicious food, and people are genuinely happy. It's, you know, sunny outside, sunny inside the church. You know, like, it's just, I don't know. I just wonder, kind of like what you're saying, if the church can truly embody
Starting point is 00:46:45 living according to the creator's design, which leads to human flourishing. Right. You know, obviously that's true, but I'm saying specifically with regard to having healthy relationships with technology, not letting technology sort of determine or become our relationships, that that could be part of the beacon of hope we can shine out, shine in the world. Yeah. I mean, again, we're uniquely called as human sister steward. creation. We alone are called in Genesis to do that. So technology is just a part of creation. We shouldn't let technology be, you know, overlording us. It should serve humanity. It should
Starting point is 00:47:22 serve human flourishing. And I think it's easy to believe a lot of the headlines right now that are very pessimistic, very dystopian about the future. Again, that's only one possible reality. If the church did nothing, yeah, some of that stuff might play out. However, our actions have real consequences. And so how can we better model the benefits of living the way we were created to live, in relationship with one another, and not forgetting the power in that, even when it's messy? Like, maybe, in fact, it was designed that way, to be messy, to help sharpen us, right? To help make us who we are. But I think it's going to be tempting, Preston. I think it's going to be harder and harder because it's so easy. It's so convenient. All the incentives for
Starting point is 00:48:09 industry in a capitalist society are incentivized to get you to kind of fall in love with your chatbot, as an example of all the different ways we could misuse technology. So I think, again, we have a responsibility to show what redemption looks like in a fallen world and be part of that creation process. What are some pieces of advice you'd want to give Christians, and maybe church leaders even in particular, as we're talking about AI? Yeah. Yeah, you know, it's a really good question. I get it a lot. So a few things.
Starting point is 00:48:44 Number one, I think we have to have humility. I think we have to have humility that maybe we don't understand it and we can't control it and it actually is happening. Because sometimes I think it's easy to fear it or to ignore it. I think we just have to have humility. To be like, Lord, I don't know. It's outside of my control. Use me. And then number two, I think we have to have vigilance about our responsibility.
Starting point is 00:49:07 We got to take it seriously. We got to have an awareness of what is happening and then be part of that stewarding creation and essentially refuse to outsource our moral agency to technology, but participate kind of in redemptive creation, like to be part of that redeeming work in this world. And then I think finally that I would say, just trusting that God is not so fragile that he's going to be overwritten by machines. Like, this is not surprising God.
Starting point is 00:49:38 Like, we know how this story ends. And so, you know, with that confidence, can we trust that we can be uniquely positioned in this time in history to have an outsized impact on this world in a really positive way? It's almost like wherever there is evil in the world, or new advancement of evil, or, you know, to your point earlier, something that can be used for evil. Right. That just creates another possibility for the church to be redemptive agents, hopeful redemptive agents, to be tools in God's hands to address whatever latest invention of evil is upon us. So yeah, I always give an example because, you know, I come from an exploration background. Yeah.
Starting point is 00:50:20 I love stories about famous polar explorers. And, you know, like I think about Ernest Shackleton in the early 1900s being stranded on ice, not knowing what to do next. It would have been so easy just to live in fear and give up and be like, I guess we're all going to die. And what he did was he had enough confidence to stand. out to trust to take a step forward into that snowstorm and not knowing how the story was going to turn out, but be a willing participant in the bigger story. And I think it's such a beautiful analogy for, you know, our world now. It feels like such a post-trust, post-truth, uncertain world.
Starting point is 00:50:58 It's all swirling around us. We don't understand it. It feels overwhelming. I don't know. I think it's a huge opportunity for us to step into it. That's good. Before I let you go, tell us about Gloo, the organization. You're a VP, right?
Starting point is 00:51:12 I'm a VP. I oversee our developer program, our AI research, and our AI products. So very much kind of in the thick of thinking about AI. I will tell you, Gloo is an amazing gathering of leaders, theologians, technologists, who are thinking about all of these things and doing it in a way to serve the faith ecosystem. Because it can be overwhelming. And what we're trying to do is provide infrastructure, AI, technology that the church can trust to do the work of the church.
Starting point is 00:51:42 And so I lead a lot of that work, surrounded by amazingly brilliant people. We are definitely on mission for this. And I will say the harvest is great, the workers are few. We need more people thinking this way, building on top of AI that you can trust, in a way that will serve the church and the work of the church. That's great. A website people can go to? Yeah, gloo.com.
Starting point is 00:52:05 And specifically, I'll say glue.com slash. F-A-I. And that's where we publish all of our research about AI and about what we're learning related to the latest models. It's also where we release all of our AR-R-A-R that people can use. Okay. And that's G-L-O-O for those who aren't watching. Yeah.
Starting point is 00:52:25 Nick, thank you so much, man. This has been a fun conversation, and you've given us a lot to think about. It's something that I feel like, yeah, it's something I'll be thinking about, probably for the rest of my life. But yeah, I'm really glad that you're engaging this really crucial topic and doing it with such grace and wisdom. So thank you.
Starting point is 00:52:43 Thanks for being a guest at the Algen-Rov. Yeah, thank you for having me. And what a privilege it is just to be born this time. I think that there's a lot to be done. So I'm excited to be part of it. That's awesome.
