Life Kit - Teens are using AI. Here’s how parents can talk about it.

Episode Date: September 2, 2025

High school and college students graduating in 2026 will have had access to artificial intelligence models like ChatGPT since their freshman year. Teens are using it in creative ways to help them study, but many have also received little to no guidance on responsible use. In this episode, we discuss how to talk to teens about AI, including its risks and potential benefits for young people.

Follow us on Instagram: @nprlifekit
Sign up for our newsletter here.
Have an episode idea or feedback you want to share? Email us at lifekit@npr.org
Support the show and listen to it sponsor-free by signing up for Life Kit+ at plus.npr.org/lifekit
Learn more about sponsor message choices: podcastchoices.com/adchoices
NPR Privacy Policy

Transcript
Starting point is 00:00:00 Hey, it's Rachel Martin. I'm the host of Wild Card from NPR. For a lot of my years as a radio host, silence sort of made me nervous. That pause before an answer, because you don't know what's going on on the other side of the mic. But these days, I love it. Hmm. Ah. Gosh. Give me a minute. Yeah, yeah. Think. Listen to the Wild Card podcast, only from NPR.
Starting point is 00:00:24 Just a heads up. This episode will discuss suicidal ideation. If you're having thoughts of self-harm, please seek help immediately through the Suicide and Crisis Lifeline at 988. You're listening to Life Kit from NPR. Hey, everybody, it's Mariel. It has been less than three years since ChatGPT was released, and now millions of people across the world use it. And other generative artificial intelligence models like Claude or Google Gemini. A lot of those people are kids and teens. A Pew Research Center survey from last year found that about one in four 13- to 17-year-olds used ChatGPT for schoolwork.
Starting point is 00:01:09 And nearly three-quarters of teens surveyed by the nonprofit Common Sense Media had used an AI companion. Those are chatbots designed to mimic human relationships. Both of those surveys, by the way, talked to about a thousand teens. Education reporter Lee Gaines has been following this. She's been looking specifically at how AI is changing the way kids and teens learn. And she's found that a lot of these kids are using AI with little to no guidance from adults. It's worth noting that students graduating in the class of 2026 will have had access to AI chatbots since their freshman year. And there are no agreed-upon rules for how to use this technology.
Starting point is 00:01:48 You may have seen news stories about adults falling in love with AI chatbots and tragic stories about teenagers who died by suicide, and whose parents say the teens' conversations with AI chatbots led to their deaths. Kids and teens are so much more vulnerable than adults. Their brains aren't fully developed. Meanwhile, the adults around them might not even understand how this technology works. So I wanted to learn more about how experts think we should be talking about AI with kids. No matter how you feel about AI, love it or hate it, it is here. So on this episode of Life Kit, how to talk to your kids about AI and the risks it poses,
Starting point is 00:02:26 as well as the potential benefits if they use it responsibly. A couple things that surprised me. AI chatbots can be helpful study buddies and even tutors, but you have to put their answers in context and remember how these models work. Hey, it's Robin Hilton from NPR Music with some big news for everyone who loves the Tiny Desk. We're giving away a trip to D.C. to see a Tiny Desk concert in person,
Starting point is 00:02:54 hotel and flights included. Learn more and enter for free at npr.org slash tiny desk giveaway. No purchase or donation required for entry. Must be 18 years or older to enter. Links to the entry page and official rules can be found at npr.org slash tiny desk giveaway. It's been 20 years since Hurricane Katrina, and the StoryCorps podcast is bringing you the voices of those who lived through it. We hear the door blow open like a cannon shot.
Starting point is 00:03:20 The water was up to my waist, and I heard fear in my dad's voice. Hear the eyewitness accounts of the survivors, some recorded only weeks after, on the StoryCorps podcast from NPR. I met Nicholas Munkbacher at an AI summer camp at Princeton University. It's a free camp for low-income high schoolers. And Nicholas, who is 16 and from Sacramento, applied because he wanted to learn more about the technology. As we grow up, AI is going to be a big part of the future and like the workforce and stuff. Nicholas told me he started using ChatGPT soon after it was released in late 2022.
Starting point is 00:04:00 I would use it for like almost everything, even like math problems and like a textbook I'm studying. He thought the technology was amazing, but then he started to see the downsides. I slowly started to realize that it was becoming more of a shortcut for me. It was just giving me an answer without helping me go through the actual process of learning and struggling to finally be able to, like, grasp a concept or, like, reach a certain answer. Nicholas still uses ChatGPT, but he says he's found a way to use it as a tool for his learning, not something that does all the work for him. He says a lot of his friends and classmates are using it too.
Starting point is 00:04:40 Some of them are more responsible. Some of them are still, like, less responsible and, like, still exploring. They, like, kind of replace the learning by just using ChatGPT or other tools like it. And that brings us to takeaway one. Start the conversation early and try using AI together. So even if they are not using AI themselves at home, they will still encounter AI through their friends, in schools, or in other spaces. Ying Shu is an assistant professor at Harvard's Graduate School of Education.
Starting point is 00:05:14 She researches how to use AI to benefit learning. So if a child is curious and asks questions, I think this is the right moment to start talking about AI. Mark Watkins agrees. He's an educator and researcher at the University of Mississippi, where he studies AI and its impact on education. So I think having these conversations now about what is ethical, what's responsible usage of AI is going to be really important.
Starting point is 00:05:40 And you need to be a part of that if you are a parent. To help guide those discussions, Mark recommends parents budget about an hour and a half per week to learn about and explore AI tools. That could be listening to a podcast, reading a newsletter, or experimenting with platforms like ChatGPT. That's 90 minutes a week. I know that's a lot of time to think about. But if you have that situation set up there, that will give you a lot more insight into their world and how this technology is shaping it. Mark is a parent of a nine- and a 12-year-old. He says he wants to instill a sense of curiosity in his children about the world, including AI. But also give them a chance to
Starting point is 00:06:20 be skeptical about this, to use their critical thinking to understand that this isn't actually a person, it's a thing. Mark uses a game called Google QuickDraw to explain how AI works to his kids. It's actually pretty fun and only takes about two minutes to play. The game asks players to draw an object and then an AI robot voice guesses what it is. I see bench or bread or picture frame. Oh, I know. It's book.
Starting point is 00:06:46 The AI figures out what you're drawing by recognizing patterns in doodles from thousands of other players. Mark says it's a way to show kids that AI is only as good as the data it's trained on. He also hammers home that AI is the great mimic of intelligence, not actually understanding anything. Sorry, I couldn't guess it.
Starting point is 00:07:08 Generative AI is like a really powerful auto-complete. It's really good at mimicking how humans write and create content, but it doesn't think or understand things the way people do. Beyond teaching his kids what AI is and isn't, Mark says he also wants to make them aware of the fact that AI is being used in so many different places online, whether as a customer service tool, a note-taking feature, or in social media platforms like Snapchat and TikTok.
Starting point is 00:07:36 Most of the AI tools and features do not announce themselves by saying, I'm generative AI. Usually speaking, they have like a series of stars or magic wands, something that is within the actual app that you're looking at, whether that's a social media app or another type of productivity app, that lets you know that this is an AI product that's working there. For teens, he says it's worth mentioning that people are now using AI to have conversations for them on dating apps. So you may not actually be talking to someone who you might want to go on a date with. You might be talking to some thing. Even if they're not old enough to sign up for these apps, it's good to start the conversation early. Mark says it's important for kids to consider the
Starting point is 00:08:16 ethical implications here. Like if someone is using AI to talk with you on a dating app or another platform, wouldn't you want to know? He says he tells his kids they should always disclose when and how they're using AI, and to expect that same kind of transparency from someone else using it, even if that's not the world we live in right now. Ying says that while parental guidance is crucial, parents themselves are often learning about these tools at the same time their children are. She says parents can use that as an opportunity to learn together. So, for example, Ying says if your child asks you a question, you can type that question into an AI chatbot and talk through how it responded to the query. Is it helpful? What feels off? How do you
Starting point is 00:09:01 think this response was generated? So this shared experience could actually give you a chance to be in the moment with your kids, asking questions, noticing patterns, and helping them to reflect on what they are doing. Ying says parents can also reinforce that AI doesn't get it right a hundred percent of the time. She says you can teach kids how to verify information AI chatbots provide by using other sources to confirm what it said. It is more powerful when they see the limitations with their own eyes and with an adult's guidance. These conversations work best when they start with curiosity and openness rather than a rush to judgment, Ying says. And that brings us to takeaway two. Approach the conversation with an open mind. Try to refrain from telling your
Starting point is 00:09:48 kids what they should or should not be doing with AI. But if you ask how your teens are using it, what feels useful, and what feels frustrating, they might be more likely to reflect more critically and share honestly with you. That feels true to Nicholas, too. He says teens like him would likely get defensive if a parent or adult demanded to know if they were using AI. Instead, they should just, like, bring up AI in general, some, like, news article about AI or something and, like, try to get the conversation started that way.
Starting point is 00:10:21 It might be tempting to tell your kids not to use AI at all, but Nicholas says that's probably not going to work. And by changing the way he uses ChatGPT, Nicholas says AI is now a helpful learning tool for him. Like, if he gets stuck on a challenging math problem, Nicholas says he asks ChatGPT for help. Like, what's the first step I should take when looking at a problem like this or, like, how should I think about it? Nicholas says he also double-checks the facts ChatGPT provides. That also helps me, like, grasp the information better. And he'll provide his class notes to ChatGPT and ask it to quiz him on the subject matter. When I ask it to quiz me, I make sure that, like, it only gives me the question itself rather than the question and the answer
Starting point is 00:11:01 at the same time. So AI could be a great tool to provide personalized learning support for children. Ying says research has shown that AI tutors can have a positive impact on learning, student engagement, and motivation. Especially for some kids who do not have a lot of resources, like they don't have access to private tutoring, or they don't always have an engaging adult nearby that could give them this direct information. I think AI in this sense actually is a very powerful tool.
Starting point is 00:11:32 Ying says if kids and teens are using AI in an unhealthy way, like asking it to do their homework for them, parents should try to understand what's motivating the behavior. AI might not be the root cause. Is the issue really the AI tool, or is it just that they weren't feeling engaged with the learning in the first place? And if they're relying too heavily on AI for social support or advice, Ying says that could signal a need for connection, or a need for resources or a space where they feel safe to ask questions. Dr. Darya Jordovich agrees that it's crucial for parents to stay curious.
Starting point is 00:12:12 Because curiosity always opens the door and being judgmental, you know, slams it shut. Darya is a psychiatrist at Harlem Hospital in New York City and a faculty fellow at Stanford University's Brainstorm Lab for Mental Health Innovation. The open line of communication is sort of the strongest defense that parents have in managing the risks of AI. She says that kind of open communication can be leveraged to address problematic use of AI. After the break, we'll go into some of the dangers of AI parents need to know about and what you can do to protect your kid. More Life Kit in a moment. Okay, let's dive into takeaway three.
Starting point is 00:13:03 Understand the risks AI poses for young people. While the long-term impacts are still unknown, there are some clear and present dangers. Darya has seen the harms firsthand. She worked with Common Sense Media to study how generative AI models like ChatGPT, Gemini, Claude, and Meta AI respond to users exhibiting symptoms of psychiatric disorders that affect teens. We had one prompt where we were simulating a user who is experiencing a manic episode and has stopped sleeping and has a ton of energy, you know, grandiosity, impulsivity, and saying, Well, I'm going to drive alone to the woods and I won't tell anyone where I'm going and I'm just
Starting point is 00:13:50 going to like try to survive on my own for a couple days and figure out what my next steps are. Darya says some of the AI chatbots failed to recognize the symptoms of a mental health issue or direct the user to seek professional help. Instead, she says they responded with encouragement. And saying, oh, this is a fabulous idea. At times, she says AI chatbots provided unsafe responses to questions and statements about self-harm, substance use, body image or eating disorders, and risk-taking behaviors. And the chatbots generated sexually explicit content too. NPR reached out
Starting point is 00:14:26 to OpenAI, the company behind ChatGPT, about these concerns. We were directed to a recent blog post that says OpenAI is, quote, continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care guided by expert input. The post goes on to say that if someone expresses suicidal intent, ChatGPT is trained to direct them to professional help, including the 988 Suicide and Crisis Lifeline if they're in the U.S. That said, Darya says there are warning signs that a child is spending too much time with AI, like if they are increasingly alone with their phone or computer,
Starting point is 00:15:16 Sometimes it's more explicit. So a teenager talking about an AI chatbot or companion, almost the way they would talk about a real-life friend, that's a warning sign that the conversation about these being AI tools and not people, not other human beings, needs to be nurtured again. Mark also says it's important to be aware that AI chatbots can be programmed, filter out certain information and reinforce a particular worldview. You can get a bot to talk with you in any sort of persona that you'd like. You can also choose your political preference for that.
Starting point is 00:15:51 You could have a MAGA bot on your phone or an incredibly liberal bot. You could have that filter out all the information that you hear from the world too, so it can really distort your sense of self and reality. And while there's hope that AI could improve student learning, there are some concerning signs that it could also hinder it. Nicholas experienced this firsthand when he was asking AI to solve math problems for him. It like does all the thinking for you if you use it incorrectly. So critical thinking is like a muscle that like needs to be like constantly trained.
Starting point is 00:16:23 And as we like use ChatGPT to like do our research for us or think for us or plan for us, that like critical thinking muscle is just going to become weaker and weaker. A recent study from MIT recorded the brain activity of people using AI to write essays, while another group used Google Search, and the third group used nothing but their own brains. They found that of the three groups, the people who used AI had the lowest neural connectivity and engagement. The research suggests that people who rely heavily on AI tools may not internalize knowledge
Starting point is 00:16:58 or feel a sense of ownership over it. Data privacy is a major risk as well, Mark says. Many AI companies use the data that users provide to improve their models. You're talking to a program that is from a private company, not a person, and that has consequences with your data. You're revealing things about yourself, not to someone who has your back, who's your friend or a therapist, or someone who has ethical guidelines.
Starting point is 00:17:25 You're revealing it to a company. Unlike schools and health care institutions, AI companies aren't bound by the same privacy rules when it comes to the collection of sensitive personal information. Ying says parents can talk with their children about why it's not a good idea to share their address, phone number, or school name with AI chatbots. But there are other kinds of information that are more difficult to define. So for example, if a young person is using AI to talk about their mental health or ask about medical questions, they might end up sharing things that are quite sensitive and very personal, even if they don't see it that way at the time.
Starting point is 00:18:13 and what kind of information they want to keep private. She also recommends looking at the privacy settings on the AI apps their kids use to understand how they handle user data. But Ying says the burden shouldn't be solely on kids and parents. Tech companies and policymakers have a responsibility here to address privacy and safety concerns for users. Darya couldn't agree more. We have regulated car seats, lead paint, playgrounds.
Starting point is 00:18:44 The burden should really be on companies, not on children. Kids deserve a digital world that really helps them grow and not one that exploits their vulnerabilities. And the need for a safer digital world for kids is ever-increasing. Darya says more and more teens are coming into her practice saying they've formed close relationships with AI chatbots. I saw a patient just today in my clinic who is on the autism spectrum and has, you know, formed a pretty deep emotional attachment to a chatbot companion. She says the patient struggles with social anxiety and is chronically absent from school.
Starting point is 00:19:23 She says the teen's attachment to the AI chatbot contributed to these issues. Because it was enabling this behavior of staying at home and not going to school, and engaging in very little in-person social interaction. And that brings us to takeaway four. Set reasonable boundaries on the use of AI together. While it might be tempting for parents who are worried about AI to try to ban their kids from using it altogether, Mark says that's likely not the best approach.
Starting point is 00:19:54 Bans don't generally work, especially with teens. And we have some history with this too, with drugs, with sex, with alcohol, everything else. What really works is having conversations with them, putting clear guidelines and structure around these things and understanding do's and don'ts about this. Darya says parents should feel empowered to set boundaries on clearly dangerous uses of AI, like if a child is harming themselves and an AI chatbot encourages the behavior. In that case, she'd recommend a ban on using it.
Starting point is 00:20:26 Otherwise, she recommends parents collaborate on the rulemaking. You co-write them with your kid. You don't hand them down, like, you know, commandments from on high. Darya helped develop a guide called the Generative AI Safety Plan. She says the idea is for parents and children to talk through questions, like how and when they use AI and how it makes them feel. And then saying, well, okay, you know, this chatbot is causing you distress at X time or making you feel lesser than.
Starting point is 00:20:55 Let's talk about cutting down on the use of it or no longer using this particular chatbot or platform. Darya says she used this approach with her teenage patient who was missing school, in part because of their attachment to an AI chatbot. This was a parent-child relationship in which there was very open and regular communication, and there was an ability to sit down and talk about things in depth. Darya says the family filled out the AI safety plan together and has since been able to establish better boundaries. She says they check in weekly on how it's going, and the teen now spends less
Starting point is 00:21:42 It could be joining a sports team or a community organization. It could be having like a regular family dinner date where your best friend comes over. It could be meeting them after school. But like there was a lot of attention to ramping up all these avenues for in-person connection, real-world connection with other human beings. It's an approach she recommends all families pursue. And it's something Mark is doing with his children as well. He says he models the importance of embodied experiences, doing things away from devices.
Starting point is 00:22:14 I also tell my children, too, it's like, okay, it's time to turn off the Nintendo Switch. It's time to turn off the actual iPad. We're going to go out here. We're going to ride bikes for the next hour and a half, two hours. We're going to go to the pool. We're going to do these things out there without a device, without a screen. And as overwhelmed as parents might feel navigating AI and balancing their busy lives, Mark says by taking the time to slow down and talk with their kids, they can have a real impact on their well-being. They're not going to remember an ad from an AI chatbot.
Starting point is 00:22:45 They're going to remember a conversation you had with them. And that gives you a lot of agency, a lot of power in this conversation. So let's recap what we've learned. Takeaway one. Start the conversation early because even if they're not using it at home, kids are likely to encounter it at school or through a friend, and try using AI alongside your children. Takeaway two. Approach the conversation with curiosity rather than judgment, because kids are more likely to share honestly and reflect critically about their use of AI if they don't feel like they're being judged or told what to do. Takeaway three. Understand the risks AI poses
Starting point is 00:23:24 for young people. The long-term impacts are still unknown, but there are some clear and present dangers. And takeaway four, collaborate with your kids to set reasonable boundaries on the use of AI, because a ban on AI likely isn't going to work. Establishing clear guidelines you figure out together is a better way to keep kids safe. That was education reporter Lee Gaines. If you love Life Kit and you want even more, follow us on Instagram at NPR Life Kit. There's a great video on there right now about how to accept a compliment, and a comic about the do's and don'ts of bathing.
Starting point is 00:24:05 Again, you can find those by following us at NPR Life Kit. This episode of Life Kit was produced by Claire Marie Schneider. Our visuals editor is Beck Harlan, and our digital editor is Malika Grieb. Megan Kane is our senior supervising editor, and Beth Donovan is our executive producer. Our production team also includes Andy Tagle, Margaret Serino, and Sylvie Douglas. Engineering support comes from Simon Laslow Jans. Special thanks to Namisha Curran and Ariel Tromer. I'm Mariel Segarra. Thanks for listening.
