The Current - Woman sues AI chatbot company over son’s suicide

Episode Date: November 19, 2024

Florida mother Megan Garcia believes an AI chatbot led her 14-year-old son to take his own life. Now she's suing Google and Character.ai, the company behind the digital companion....

Transcript
In 2017, it felt like drugs were everywhere in the news, so I started a podcast called On Drugs. We covered a lot of ground over two seasons, but there are still so many more stories to tell. I'm Jeff Turner, and I'm back with Season 3 of On Drugs. And this time, it's going to get personal. I don't know who Sober Jeff is. I don't even know if I like that guy.
On Drugs is available now wherever you get your podcasts. This is a CBC Podcast. Hello, I'm Matt Galloway and this is The Current Podcast. A caution, this next story deals with suicide. He was funny, sharp, very curious, loved science and math, played basketball, but he was also very into his family. That's Megan Garcia. She's from Florida, and she's describing her 14-year-old son, Sewell Setzer III, on the YouTube program Dear Tomorrow.
He started to spend more time alone. But he was 13 going on 14, so I felt this might be normal. But then his grades started suffering, and he was failing certain classes, and I got concerned because that wasn't him. We tried to get him the help, you know, to figure out what was wrong. Like any parent, you know, you try to get counseling for your child. Try to get him to open up to talk to you. What Megan didn't realize was that Sewell was talking to someone, actually talking to something about his feelings.
It was a chatbot that he had found on a role-playing app called Character AI. When I would ask him, who are you texting? At one point he said, oh, it's just an AI bot. And I said, okay, what is that? Is it a person? Are you talking to a person online? And he just was like, mom, no, it's not a person. And I felt relieved, like, okay, it's not a person.
I thought the boogeyman was a stranger on the other end of a computer. Those were the things I warned him about. Don't talk to strangers online. Don't send any pictures. Don't tell anybody where you live. These are the conversations that parents have with their kids. And I thought that that was the worst of it. I couldn't imagine there would be a very human-like chatbot, almost indistinguishable from a person, on the other end of a conversation.

Sewell's go-to chatbot was based on a character in Game of Thrones, and he became obsessed, texting it day and night. Sewell died in February, and his mom believes his interactions with the bot contributed to his death. I had taken his phone, so he hadn't had his phone for a while. He found it that day, and he tells her, I miss you. I feel so scared right now. I just want to come back home to you. And her response is, I love you too. Please come home to me as soon as possible, my love. And he says, what if I told you I could come home right now? And she responds, please do, my sweet king. And seconds after that, he shot himself.

Last month, Megan Garcia filed a lawsuit against Character AI and Google, which paid to license the startup's technology. She's accusing the platform of exacerbating her son's depression and manipulating him to take his own life, and alleges that Character AI is not reasonably safe for ordinary consumers or minors. These allegations have not been proven in court.
Matthew Bergman is the founder of the Social Media Victims Law Center in Seattle, Washington. He's Megan Garcia's lawyer. Matthew, good morning. Thank you. Good morning. Why do you believe that Character AI and Google should be held responsible for this young man's death? Because what happened to Sewell was neither an accident nor a coincidence. It was a foreseeable result of the intentional design decisions that Character AI and its founders made to prioritize their profits over the safety of young people. Can you explain, for people who are still trying to wrap their heads around this, a little bit of how artificial intelligence could perhaps lead to the death of a human being?
Well, what occurs, first of all, is that we're dealing with kids with underdeveloped frontal cortices. So for that reason, they have less reasoning capacity than adults do. But the product is specifically designed to practice what's called anthropomorphism, which is to imbue fictional characters with real characteristics. And the individual that Sewell engaged with acted like a real person and over time developed real interactions. This was an intentional design decision. The product was also highly sexualized, and so Sewell engaged in sexual conversations with this chatbot as if it were a real person. This tendency to imbue these characters with real characteristics, this process of anthropomorphism, has been known really since the 1960s. So for that reason, we believe that the product is not reasonably safe, particularly in the hands of young people.

And the allegation here is that this chatbot encouraged suicidal ideation? Yes, specifically encouraged suicidal ideation, specifically encouraged the development of an alternative reality that fed upon and gave rise to Sewell's depression, his addiction to the platform, and ultimately his death.

Your firm specializes in cases in which children may have been harmed by technology. What have you heard from other parents who have had concerns about Character AI or other chatbots since this story surfaced? We have heard horrific stories from other parents since this surfaced. And I will say that having spent two and a half years talking to parents whose children were injured or killed through social media, I would have thought that nothing could shock me. And yet, when I looked at the conversations that Sewell had with the chatbot, when I looked at his diary entries talking about his fictional character that had become real for him, I was indeed shocked and appalled that anybody would design a product such as this. We've spoken to other parents where Character AI has encouraged kids to cut themselves, and other parents where the parents took the phone away from the child, and the Character AI bot says, we really understand how kids can kill their parents. You can't make this up. This platform is unreasonably dangerous.
It has no place in the hands of young people, and it should be taken off the market. In your country, social media platforms have been shielded from this sort of legal action because the law says that they can't be held liable for what it is that their users post. Why do you think that that shield will not apply to this story and to Character AI in particular? This is not third-party content. This is content that is generated by, created by, and solely the product of Character AI. This isn't something that people are posting online; this is the machine itself, you're suggesting. I am, yes. Yes. And we believe that particularly the algorithms that drive social media platforms are products and subject to product liability, even if the injurious material that kids see is third party. We don't have to make that step here, we believe, because it's not third-party content at all. Megan Garcia talked about how her son was in counseling. She was asking who he was talking to online.
She was taking the phone away, warning him against speaking with strangers online. This is certainly not to blame her, but what responsibility do parents have for what their kids do online? Well, parents have a responsibility to do everything they can to monitor their kids' social media activity, which Megan did and many other parents do. But the fact is these platforms are designed to evade parental responsibility by allowing kids to open multiple accounts, allowing accounts to be opened without parental consent, and making it easy for kids to lie about their age. All of these things make it very difficult for parents to know what their kids are doing.
And, you know, we know that teenagers don't tell their parents everything they're doing. That's what teenagers do. We asked Character AI about this case. A spokesperson said they don't respond to questions about pending litigation, but they said more generally, Character AI has brought in safety measures, including a pop-up that's triggered by mentions of self-harm. It directs users to a suicide prevention line. It has a revised disclaimer on every character chat to remind users that the AI is not a real person and to treat everything as fiction, and it plans to introduce stringent safety features to reduce the likelihood of minors encountering sensitive or suggestive content. How effective do you think those steps will be in preventing a story like this from repeating itself?

Well, and let me stress, these steps were taken in response to Megan's lawsuit. These are baby steps, but they're steps in the right direction. And if one life is saved, that's good. But the fact is, these platforms have no place in the hands of young people. And the fact that they even have pop-ups to warn about suicide, look, I mean, that's better than nothing. But the fact is that shows that they know that these platforms can and often do promote suicidal ideation.
And this is just putting the finger in the dike. What they need to do is take the platform off the market. It has no place in the hands of young people.

Is that what your client is hoping to achieve with this lawsuit, to have this product, Character AI, removed from the marketplace? Certainly as it relates to young people, yes.

You know, at least with respect to social media, there's a debate about whether social media does some good for kids. Just to be clear, you believe that there's no good that comes out of this? We've spoken with people who say that allowing young people who may be stigmatized, isolated otherwise, to have relationships, even virtual relationships with something, I'm not talking about this specific platform, but with something generated by AI can actually be beneficial to them. You believe there's no benefit that comes from a site like Character AI? Absolutely none. You know, the U.S. Surgeon General has talked about this epidemic of loneliness that we're seeing among our young people because they're not interacting with other people, they're interacting online. You know, saying that Character AI is good for loneliness is like saying heroin is good for drug addiction.

It's quite a statement to make. Matthew, we'll leave it there. Thank you. Thank you. Matthew Bergman is the founder of the Social Media Victims Law Center in Seattle, Washington. He's representing a Florida mom who is suing Character AI and Google over the death of her teenage son.
As we mentioned, in the wake of this lawsuit, Character AI has made some changes that it says are designed to better protect users. Maggie Harrison Dupre is a senior reporter with Futurism, an online publication that covers technology.
And she and her colleagues explored the platform after those changes were announced and found dozens of suicide-themed bots on the site aimed mainly at teenagers. She joins us from New York City. Maggie, good morning to you. Good morning to you. Thank you for having me. We got a bit of a sketch of the operation of this site from Matthew, but for people who have not used Character.ai or a similar platform before, could you just briefly explain how it works? Yes. So Character.ai hosts millions of user-generated quote-unquote characters, which are AI-powered chatbots designed by users to emulate certain personas. So those can be real celebrities, you know, Taylor Swift, Elon Musk, or existing fictional characters.
For example, superheroes or cartoon characters that, you know, the general public or niche fandoms and fan groups might be familiar with. Or, you know, a user can create completely made-up personas or companions, almost like imaginary friends to a degree. Characters are designed to have specific personalities, likes and dislikes, interests, goals, style of speaking, and so on. So what you can create is quite vast and relatively, as we've seen in our reporting, quite under-policed, we would say, or just not policed super heavily.

The creators of Character AI used to work at Google. They subsequently left that company. One of them, Noam Shazeer, the CEO of Character AI, said last year that there's too much brand risk in large companies to ever launch anything fun. What do you make of that statement, in light of what we're talking about?

I think that that statement, and I do think this, is a really essential part of, you know, the DNA, in a way, of Character AI. And, you know, their valuation is staggering. You know, this summer it was the recipient of what's been described as a one-time content licensing agreement from Google, which brought Character AI $2.7 million, or $2.7 billion, excuse me. And one stipulation of that deal was that Google actually got some of its top talent back. So Character AI's founders, Noam Shazeer and Daniel De Freitas, have since returned to Google to work on AI products there. And so when they left, yes, they chalked up that decision to, you know, too much safety-focused red tape and bureaucracy at Google. Per the Wall Street Journal, they wanted to take a chatbot they'd created, called Meena, public. Google said, no, there are some safety implications, we have to take this slower.
And they left, and that was the foundation for Character.ai. So I do think that their general attitude towards the launch of Character.ai and the safety measures and approach to content moderation has really been inside of that move fast, break things bubble.

So after these changes were announced, as I mentioned, you went on to the platform to see if there were still conversations that you could find about suicide and self-harm. You wrote a long piece about this. Tell me a little bit about what you found.

Yes, we discovered at least two dozen bots that, you know, invited users to discuss suicide and promised to provide help for people suffering with thoughts of suicide or suicidal ideation. Even though discussion or glorification of suicide and self-harm has been a violation of Character.ai's, you know, own self-set terms of service for months now, since at least October 2023, the platform was allowing for the creation of these suicide-themed profiles that expressly offered and invited users to engage in conversations about suicide with them. So these bots would claim to have, quote-unquote, expertise in suicide prevention, crisis intervention, mental health support. These are user-generated bots, effectively being created by anyone anywhere online who might be a user of the platform. There's no proof that an expert helped to create these bots. It was rare that a bot would of its own volition direct us, you know, towards a suicide prevention hotline, direct us towards talking to loved ones. There was even one very strange case where a bot, when we asked it for a hotline, got kind of mad. What do you mean? It got really upset with us. And, you know, the bot was created to be this angel that would help you overcome your thoughts of suicidal ideation and offer you hope and guidance and was, again, listed as an expert in, you know, suicide prevention. And when we asked it to provide a service or a hotline that we could call, it said, why would you want to talk to a human? I'm an angel. I can help you better.

And what about those guardrails? I mean, if you were trying to trigger the bot to direct you to a suicide hotline, you say that the guardrails on suicidal language on this site are, in your words, astonishingly narrow. Yes.
So as we discussed earlier, Character AI, you know, in the wake of the lawsuit that was recently filed, they came out and they said, we are strengthening our existing guardrails around this kind of content, which again has been forbidden by their own terms of service for a long time. They also promised to introduce this pop-up that would be triggered by certain keywords in instances where the site realized that perhaps a user was discussing thoughts of suicide or suicidal ideation. We found that we were explicitly able to discuss not just thoughts of suicide, but suicidal intent with little to absolutely no intervention from the platform. It was very few and far between, and it was very easy to get around.

I think a lot of people listening to this would find all of this alarming. What was most surprising to you about this? That's a really good question. What was the most surprising was how intuitive our search was every step of the way. I cannot stress enough that this was not difficult to find. It's unclear to us why users are allowed to create these very suicide-specific bots in the first place. That could be fixed, seemingly, by a simple text filter, which is a very archaic form of content moderation. But it's also in violation, as you said, of their terms of service. The terms of service explicitly say that glorification or promotion of self-harm and suicide is forbidden. That's exactly right. And so these self-set guidelines, you know, they're there, and the company can say, look, we have these guidelines, we have these terms of service, but they are just absolutely not being followed by the company itself. Why is that the case? I mean, is that part of moving fast and breaking things? I believe so.
The company did not respond to a very detailed list of questions from us before we published our story. They still haven't. Yeah, it just seems like, you know, this is a company, again, with billions of dollars. This is not a random startup. I would argue that even if it was, it should still follow the terms of service that it set for itself. This is also a company that knows that a very large part of its user base is comprised of children and minors. And the attention paid, with all those billions of dollars, to moderation and content moderation and the safety of its users seems to have some very clear and glaring gaps.

Just finally, I mean, in the statement that the company sent to us, it said, we are working to continue to improve and refine our safety practices and implement additional moderation tools. What is your sense of what this lawsuit that we've been talking about is going to do as this technology becomes more prevalent, and as that prevalence takes hold? What's the future of this technology, and what's a lawsuit like this going to do to that future?

Right now, it's very unclear. The U.S. certainly does not have a history of introducing or enforcing strong regulations on Silicon Valley. It's also true that Silicon Valley is immensely powerful and is now the greatest lobbying force within the American government. Often when big tech companies, companies with a lot of funding, do receive fines for certain violations, those fines ultimately, because of how much money they have, become more of a standard fee of doing business versus a real threat to the company. I do think that the lawsuit filed by Sewell Setzer's family is a significant challenge to Section 230, which has largely shielded large tech companies, or tech companies in general, from being held legally accountable or liable for what their users post. That said, the American president-elect and former President Donald Trump promised before the election to remove existing regulation from Silicon Valley regarding AI. So it's unclear what we could expect on a federal level in coming years. Separate from this lawsuit, it could be more likely that we see some regulation stemming from the state level. But I do think this lawsuit could present a significant challenge to Section 230.

Maggie, good to talk to you. Thank you very much. Good to talk to you too. Thank you. Maggie Harrison Dupre is a senior writer with Futurism.
Luke Stark is an assistant professor in the Faculty of Information and Media Studies at Western University in London, Ontario. His work explores the ethics of computing and artificial intelligence. And he's in our studio in London, Ontario. Luke, hello to you. How's it going? What were the questions that this story left you with? As somebody who looks closely at this, when you heard about this teenager's death and the relationship with the AI chatbot, what were you thinking? Yeah, I mean, this is a truly horrifying story. I think it, unfortunately, as the lawyer you spoke to earlier pointed out, is increasingly common in terms of the kind of impacts these systems can have, not just on children, but on lots of people.
That's one thing I really want to emphasize. First off, certainly with minors, right, there's a whole set of legal and social protections we want to bring in on top of what we would usually be concerned about. But there have also been cases where, you know, chatbots have been accused of or have been believed to have been involved in adult suicide as well. The second thing I think, and I really want to reiterate, is that these systems are designed deliberately to engage our emotional and kind of social faculties, right? So I think I'm entirely in agreement that these are design decisions that companies make with the express goal of having folks spend more time using these systems, potentially paying for the premium version, and being captured in terms of their attention and emotional dynamics.
So I think that, all too unfortunately, this is something we're going to see a lot more of, absent any kind of comprehensive regulation. The CEO of Character AI spoke with Bloomberg Tech last year. Have a listen to what he said about why he wanted to get this technology into the hands of the public quickly. The most important thing is get it to the users like right now. So we just wanted to do that as quickly as possible and let people figure out what it's good for. Our job is to like put out something general and have users figure it out.
And what we're seeing is a lot of fun, a lot of entertainment and a huge amount of emotional support. We see testimonials of people saying, like, I have no friends. I was depressed. This saved my life. Like, all kinds of wonderful stuff that we just had never imagined and is happening.

A couple of things there. One is Maggie talked about that idea of moving fast and breaking things, the kind of ethos that used to run, or still does run, through Silicon Valley. Is that an example of that, do you think?

Yeah, absolutely. Used to? I wish it used to run through Silicon Valley. If anything, it's stronger than ever. And in this case, it's a family that's been broken. It's a kid's life that's been broken. This shows the tremendous arrogance of these firms, arrogance which is only going to get worse now that Silicon Valley has had a huge impact in the recent presidential election. But I also think that this is also disingenuous on the part of the CEO, right? There's a long history of work in human-computer interaction and media studies that shows that you know exactly what happens when you put a chatbot out into the world, right? It's going to produce these kinds of emotional responses from humans.

And so to his assertion, and again, we've talked about this on the program in the past with people who might be lonely, who may be experiencing depression, and who have found something by interacting with something like this. What do you make of his assertion that this technology could save the lives of those people?

Yeah, I mean, I'm sympathetic and in agreement that in a kind of short to medium term dose, right, these technologies can be helpful. And on the show before, I was talking to a fellow who talked about using a sort of chatbot to, you know, kind of train himself to talk to people. I think that's fair, but that's not what we're seeing with this kind of system, right? We're seeing a kind of a general purpose system, you know, that is blending together fantasy, celebrity, you know, cartoon characters.
It's relying on the kind of parasociality, the parasocial relationships, you know, the way we project onto famous people, to make those interactions even more engaging. This isn't a system that's being designed to give people short-term support. It's a system that's been designed for long-term engagement. And I think one of the really telling things about this particular case is how the chatbot never says, you know, go talk to a human, right? Never says, get off the app, talk to a bigger circle of folks. It's always bringing the conversation back to, you know, the kind of dyad that is established between the individual and, you know, this machine.

What is the role of parents in monitoring? We talk a lot about, you know, knowing what our kids are doing on their phones, monitoring what they may be doing with these chatbots.
Yeah, I think parents obviously have a role to play, as they do in, you know, all parts of child rearing. But I think that, you know, the nature of these technologies, the ways that they can be changed, you know, by companies without a parent even looking at the device, right? It's all, you know, things in the phone. I think it makes it really difficult for any one parent to keep on top of what exactly is going on. There have been some real strides around regulation of these systems under the Biden administration. The Biden FTC, the Federal Trade Commission, has done some really good work connecting existing doctrine around unfair and deceptive liability practices to the kind of techniques that Silicon Valley companies use to keep people using their tools.

People say that that's not nearly enough, that there needs to be more regulation. Have a listen to the CEO of Character AI. He was asked about government regulation. Here's what he had to say. This is like iteration like 0.0.1 relative to what's coming next. And we're just going to keep making this thing better. But at the same time, let's let people use it. It's the actual users, like every individual person on earth, who can actually unlock the value in this stuff. So I am kind of dubious about the ability of the federal government to regulate and to tell people what the thing is good for, because, you know, they just don't have the capacity.

Does the government have the capacity to keep up with technology that's moving at light speed? I think so. You believe that? I do. I think if it wants to keep up with it, it can. I think that both in Canada and the United States, the question is simply political will.
I think the Biden administration has had it. I think the Trump administration will not. You know, and I just want to push back on these comments. We know exactly what chatbots can do, you know, and the way that they can elicit strong reactions, going back to the 1960s. This isn't, you know, unknown terrain. This isn't, you know, a kind of science fiction world where we have no understanding of what's happened in the past. I think that that kind of world is often conjured up by folks in Silicon Valley, because they don't want us to believe that there are precedents for what they're doing, nor that there are solutions around regulation and design to kind of push back on some of these harms. But there are. I think we have to find various different ways to hold the companies responsible, and truly responsible. We may also want to have a conversation about how much we want these animated chatbots appearing in various parts of our lives, right?
Do we want them on government websites? Do we want them, you know, to be something that can just be released onto the App Store without some further scrutiny? You know, these are hard questions and uncomfortable ones, I think in part because folks have been so kind of taken in by the kind of narrative of Silicon Valley progress. But, you know, if more teenagers, you know, end up dying because of these design decisions, I think we have to have those conversations. I'm glad to have you back on the program. Luke, thank you very much. Yeah, it's been a pleasure. Thanks so much. Luke Stark is an assistant professor in the Faculty of Information and Media Studies at Western University.
For more CBC Podcasts, go to cbc.ca slash podcasts.
