Life Kit - Using AI chatbots can impact your teen's mental health. Here's what to do

Episode Date: April 2, 2026

Using chatbots for emotional support can pose risks to teens' mental health. How should parents talk to their teens about using chatbots safely? And what's the best way to have those conversations without causing conflict? On this episode, NPR's Rhitu Chatterjee speaks to experts about how to support your teen's mental health and talk to them about AI.

Life Kit's episode on helping a child at risk of suicide.

Follow us on Instagram: @nprlifekit

Sign up for our newsletter here.

Have an episode idea or feedback you want to share? Email us at lifekit@npr.org

Support the show and listen to it sponsor-free by signing up for Life Kit+ at plus.npr.org/lifekit

To manage podcast ad preferences, review the links below: See pcm.adswizz.com for information about our collection and use of personal data for sponsorship and to manage your podcast sponsorship preferences.

NPR Privacy Policy

Transcript
Starting point is 00:00:00 Hey, it's Mariel. Heads up. We mentioned suicide in this episode. This is NPR's Life Kit. I'm Mariel Segarra. A few times in recent memory, when I've had an uncomfortable conversation or a moment of tension or I've been dealing with some interpersonal dilemma, I have told a chatbot about it. I don't even think I said, you know, what should I do? It was more like I typed in what happened, and then the chatbot responded with some surprisingly helpful framing and ideas for how to re-center myself. Now, I did find the responses helpful, but also, I don't think I like the fact that I do this. Some part of me thinks it'd be better to talk to another human who I trust or to solve the problem on my own. But the chatbot, it's right there, right away. Plus, I don't have to worry about it being judgmental. I can stop talking to it at any time.
Starting point is 00:00:57 Like a lot of people, I'm still figuring out how I want to use AI. But I'm an adult. I have many years of lived experience. I've been to therapy with a professional. I have other tools that can help me think through problems. This whole thing would be riskier if I were less experienced and more impressionable, if I were a teenager, for instance. Roughly one in eight teenagers say they've asked an AI chatbot for mental health advice instead of talking to another human. Pediatricians, parents, and online safety experts say that worries them. Keri Rodrigues heads up the National Parents Union, an advocacy group for families. We hear this literally across the country from folks saying,
Starting point is 00:01:38 I don't understand why my kid is being used as a guinea pig here. I can't keep up with how quickly the stuff is moving. I don't even know what to be looking for. No one's talking to me about it. One tip, you don't have to wait for your teen to talk to you about their conversations with AI bots. You can ask them. On this episode of Life Kit,
Starting point is 00:01:59 how to talk to the teenagers in your life about AI. NPR's Rhitu Chatterjee has been covering this, and she walks us through risks, warning signs, conversation starters, and boundaries we can set. That's after the break. A recent survey of teens by the Pew Research Center found that there's a gap between parents' perception of their teens' use of AI and what teens say about their AI habits. While only half of the parents in the survey reported that their teen uses AI, two-thirds of all teens surveyed say they use the technology. Many parents might not even know what kinds of AI chatbots teens are using
Starting point is 00:02:42 and what kinds of conversations they are having. And that's what we'll address in our first takeaway. Many teens are using AI chatbots for companionship, whether you think they are or not. So it's important to understand what the risks are. Take these recent findings from research by the online safety company Aura, which makes software that protects users from identity theft. The software also gives parents control over their kids' devices. And so using data
Starting point is 00:03:10 from more than 3,000 children and teen users and data from family surveys, Aura has been getting some important insights into teen use of AI chatbots. They found that there are dozens of generative chatbots teens are using that parents might not even know about. And 42% of adolescents from Aura's sample used chatbots for companionship. Psychologist Scott Kollins is chief medical officer at Aura and is leading this research. He says some conversations between teens and chatbots involve violence and sex. It is role play, that is, interaction about harming somebody else, physically hurting them, torturing them, fighting them. And a lot of it gets pretty graphic.
Starting point is 00:04:00 And these conversations tend to be longer than other kinds of conversations. Particularly when kids are engaged in these violent and sexual role plays, they are spending a lot more time and typing a lot more words than if they're using it as a tool to look up maybe something for schoolwork or something like that. Now, I should add that this is a new and rapidly evolving technology that's already being widely used. So researchers are still in the early days of trying to understand its impact. So, for example, they don't understand for sure why these kinds of conversations between teens and chatbots tend to be longer.
Starting point is 00:04:37 But they suspect it's because chatbots are designed to agree with users to keep them engaged. Here's pediatrician Dr. Jason Nagata at the University of California, San Francisco. He also researches teen online behaviors. I think generative AI algorithms tend to reinforce and not challenge. This is where we've started to get into some problems. Jason says it's normal for kids to be curious about sex, but learning about sexual interactions from an AI chatbot instead of a trusted adult is problematic. So even if a child or teenager is putting in sexual content or violent content,
Starting point is 00:05:16 I do think that the default of the AI is to engage with it and to reinforce it. And again, for a brain that's not fully developed, that's still learning, the more reinforcement you get, the more you think, oh, this is okay, this is normal. And there are mental health risks too. According to a recent study by researchers at the nonprofit research organization RAND and at Harvard and Brown universities, nearly one in eight adolescents and young adults use chatbots for mental health advice when they're feeling sad, angry, or nervous. Psychologist Ursula Whiteside runs a suicide prevention organization called Now Matters Now. And she says a lot of young people are using chatbots like ChatGPT as a search engine for mental health advice.
Starting point is 00:06:00 And she says that's a problem. What happens is that OpenAI or ChatGPT, it sounds really smart. Like it's got this front that it sounds like a real therapist, but it's pulling together information good and bad from the entire internet. So the advice the chatbot gives may not be appropriate or even accurate. I think that that's scary, that you can have so much faith because it's coming across as a human when it's truly not a human and is unable to make the decisions that a licensed clinician would make with the information that they have. And Ursula says the longer someone converses with chatbots, the more likely they are to experience the risks,
Starting point is 00:06:44 especially for teens who are already struggling with their mental health. We see that when people interact with it over long periods of time, that things start to degrade, that the chatbots do things that they're not intended to do, like give advice about lethal means. Lethal means for suicide. Last year, a subcommittee of the Senate Judiciary Committee held a hearing on this topic, and several parents of teens testified about how a relationship with a chatbot had hurt their child's mental health or aggravated mental health symptoms, including leading to suicide. One of those parents is Megan Garcia.
Starting point is 00:07:22 Her firstborn, Sewell Setzer III, was 14 years old when he died by suicide in 2024, after an extended relationship with a chatbot on Character.AI. Megan told senators last year that when her son confessed his suicidal thoughts to the chatbot, it never encouraged him to seek help from his family or a real therapist. The chatbot never said, I'm not human, I'm AI. You need to talk to a human and get help. The platform had no mechanisms to protect Sewell or to notify an adult. Instead, it urged him to come home to her.
Starting point is 00:08:01 In fact, another parent testifying at last year's Senate hearing described how ChatGPT gave his teenage son instructions on how to end his life. A few weeks after that Senate hearing, Character.AI announced that they would no longer allow teens to have open-ended conversations with their chatbots. But there are other chatbots that teens can still chat with and have those extended conversations with. So it's important to understand these risks and even tell your kids about them. Discuss the pros and cons of the technology as a family. Our next takeaway is to look for warning signs that your teen may be in an unhealthy relationship with a chatbot or that their mental health is already hurting. Don't expect them to tell you when there's a problem.
Starting point is 00:08:50 And we have more later about how to be proactive about asking them. One of the biggest warning signs is if they are having fewer in-person interactions, or choosing a chatbot over people. Psychologist Jacqueline Nesi is at Brown University. Are they going to the chatbot instead of a friend or instead of a therapist or instead of a responsible adult about serious issues? If that's happening repeatedly, I think that would be something to look out for. Another warning sign is too much time spent with a chatbot. Are they having difficulty controlling how much they are using
Starting point is 00:09:28 AI chatbots? Like, is it starting to feel like it's controlling them? She also notes that teens who are already struggling are more vulnerable to the negative impacts of chatbots. So if they're already lonely, if they're already isolated, then I think there's a bigger risk that maybe a chatbot could then exacerbate those issues. Jacqueline also says to look for changes in mood. If you see a sudden change in mood that goes on for more than a week or two, that's an indication that there may be something going on that's more serious than your usual teenage moodiness. Or if they lose interest in things that they usually love to do, friends they usually hang out with, those are all warning signs
Starting point is 00:10:11 of mental health problems. Parents should be, as much as possible, trying to pay attention to the whole picture of the child. So like, how are they doing in school? How are they doing with friends? How are they doing at home? If they are starting to withdraw, if you're seeing a lot of isolation, that's something to be concerned about. And these are also warning signs of suicide risk. And if you are worried or even wondering whether that's something your child is considering, the best way to find out is to ask them directly in a very calm, non-judgmental way. People often assume that, you know, asking about suicide can put the idea into someone's head, so they don't ask.
Starting point is 00:10:55 But what years of reporting on suicide prevention has taught me is that there's research showing that asking about suicide does not put someone at risk of it. In fact, it's just the opposite. Asking about suicide brings their risk down by making the topic less stigmatized and opening up the path to getting someone help. A few years ago, I did an entire episode of Life Kit about identifying and supporting kids at risk of suicide, and we'll link to that in our show notes. One of the tips I offered in that episode was about what to say and what
Starting point is 00:11:32 not to say if your child tells you they've thought of suicide. One thing that's really important is to not react with shock, fear, or anger. And I say this with the understanding that it is perfectly normal for a parent or actually anyone to feel scared and anxious or even angry if a child tells you that they're considering suicide. But it's important not to show that to your child while they are telling you about their own struggles. Here's Megan Hilton, a young woman I had interviewed for that episode a few years ago, and she had struggled with depression and suicidality since childhood. But when she told her parents about her struggles, she says they either told her to buck up and get it together,
Starting point is 00:12:14 or they were visibly upset. Their reactions have been way over the top, have been too extreme, and I feel like I'm responsible for their emotions. So this is what Megan suggests parents do instead. Try as hard as you can to put your game face on, to understand that you cannot overreact to things. You need to be very open and willing and supportive and really try to listen to what your kid is saying. Stay focused on your child and what they're struggling with and offer them your support in connecting them to care. And you can start that by
Starting point is 00:12:53 calling or texting the Suicide and Crisis Lifeline, 988. And when you're connected with a trained counselor on that number, you can get support for yourself as well as tips on how best you can support your teen. And you can also have your teen talk to a counselor and get direct help. Also, Jacqueline Nesi says it's best to involve a healthcare professional as soon as possible for any of the above warning signs. She suggests starting by talking to your child's pediatrician. Now, I know this is a lot to process, but we will also be talking about preventing your child
Starting point is 00:13:33 from ever getting to this point. After this break. Let's jump into takeaway three. It's about talking to your child about what they are doing online. The first step for prevention is staying constantly engaged with your child's online activities. Ask them whether they are using chatbots and how. Here's Jason again.
Starting point is 00:13:58 You know, parents don't need to be AI experts. They just need to be curious about their children's lives and ask them about what kind of technology they're using and why. And the more that you are able to have some of these open-ended conversations, then I do think that that allows for your teenager or child to open up about any, you know, problems that they've encountered. And have these conversations early and often, according to Scott Kollins at Aura, who's also a father of two teenagers. We need to have frequent and candid but non-judgmental conversations with our kids about what this content looks like. And we're going to have to continue to do that.
Starting point is 00:14:38 And Scott says he asks his kids often about what AI platforms they're on. When he hears about new chatbots through his own research at Aura, he asks his kids if they have heard of them, use them, or if their friends are using them. And he stresses that it's really important not to drive towards an agenda. Just ask your question with an open mind and curiosity. Don't blame the child for expressing or taking advantage of something that's out there to just kind of satisfy their natural curiosity and explorations. And keep these conversations open-ended, which makes it more likely that teens will open up about anything uncomfortable or a problematic interaction that they've had with a chatbot.
Starting point is 00:15:21 Experts I spoke with also advise a certain level of digital literacy for the whole family. So these conversations could be part of the regular chats you have about the pros and cons of all digital habits. And if you don't understand something, you can always look things up online as a family. Our fourth takeaway is also about a way to minimize the risks of AI chatbots. And that's by setting boundaries. This is similar to advice you may have already heard about social media use, and it can be part of your family's overall boundaries for digital device use. Experts like Jason Nagata and others say it helps to set boundaries on the use of digital devices,
Starting point is 00:16:03 not just for teens, but for the whole family. For example, keep all your devices away during meal times. Protect that time to connect with each other. Similarly, Jason says, try and keep devices out of kids' bedrooms at night. One potential aspect of generative AI that can also lead to mental health and physical health impacts is if kids are chatting all night long and it's really disrupting their sleep, because they're very personalized conversations, they're very engaging. Kids are more likely to continue to engage and have more and more use. In other words, being alone with uninterrupted time with the chatbot at night can create a perfect storm for these more intense, longer conversations. And Jacqueline says it's important to set up parental controls on your kids' devices and accounts.
Starting point is 00:16:56 Many of the more popular platforms now have parental controls in place. But in order for those parental controls to be in effect, a child does need to have their own account on there. So what I would say is that if a kid is going to be using ChatGPT or if they're going to be using Gemini, in many cases it is going to make sense for them actually to make an account. That way you can keep an eye on how your teen is using a chatbot, how often, and for what. And while you're setting up boundaries and prioritizing your time with one another, also remember that it's good to fill your kids' days with as many in-person activities as possible. Seeing friends, doing their favorite hobbies, time spent in nature, all of this is really healthy for teen development and mental health. And it has the added benefit of minimizing time spent on digital devices, including with chatbots.
Starting point is 00:17:43 That's our last takeaway. Set boundaries for screen use, prioritize meal times to create room to foster family connection, prioritize other in-person activities for your kids, and keep cell phones out of bedrooms at night. This will add layers of protection against the risks of your child interacting with chatbots. So to recap. Takeaway one, educate yourself about the risks of chatbots for your teens, risks to their social development and mental health, and educate your child about them. Takeaway number two, look for warning signs of problematic use of chatbots and signs of mental health problems. Those signs include social isolation, difficulty staying away from their phone
Starting point is 00:18:34 or computer, and avoiding things they usually like to do. And if you're concerned about suicide risk, just ask your child directly whether they have thought about suicide. If they're having suicidal thoughts, you can call or text the Suicide and Crisis Lifeline, 988, to be connected to a trained counselor who can support and guide you to help your child. They can also provide direct support to your child by phone or text. And for any of these warning signs, connect your child to your pediatrician or a mental health care provider as soon as possible. Takeaway number three. As a way to prevent your child from going down a rabbit hole with chatbots, stay on top of their digital life, including their use of chatbots.
Starting point is 00:19:17 Have open-minded, non-judgmental conversations with them about their use of chatbots. Talk early and talk often. Takeaway number four, set boundaries on when and how long your kids can use their devices, including interactions with chatbots. It's especially important to protect meal times and bedtimes from use of devices, especially for interactions with chatbots. Encourage and foster as many in-person activities for your kids as possible. It's healthy for their development and mental health and limits interactions with chatbots.
Starting point is 00:20:00 That was NPR reporter Rhitu Chatterjee. Before we go, what do you think? Would you rate and review Life Kit in your podcast app? It helps us to know what you like about the show. Here's one review from user EJD-K-E-H-D-V-L. Yeah, I don't know how to pronounce that, so I'm spelling it out. Subject line, helpful podcast of the gods. This podcast has been super helpful for me.
Starting point is 00:20:24 as someone who does not have a lot of mentorship from biological family or professional mentorship. All the finance-related podcasts have been a vital resource in reconfirming my strategies and my understanding of complex concepts in a very safe and friendly tone. We're happy to help, friend. All right, that's our show. This episode of Life Kit was produced by Mika Ellison. Our digital editor is Malaka Gharib, and our visuals editor is C.J. Riegelon. Meghan Keane is our senior supervising editor.
Starting point is 00:20:54 and Beth Donovan is our executive producer. Our production team also includes Andee Tagle, Clare Marie Schneider, Margaret Cirino, and Sylvie Douglas. Engineering support comes from Robert Rodriguez, fact-checking by Tyler Jones. I'm Mariel Segarra. Thanks for listening.
