On with Kara Swisher - Did a Chatbot Cause Her Son’s Death? Megan Garcia v. Character.AI & Google
Episode Date: December 5, 2024

What if the worst fears around AI come true? For Megan Garcia, that's already happened. In February, after spending months interacting with chatbots created by Character.AI, her 14-year-old son Sewell took his own life. Garcia blames Character.AI, and she is suing them and Google, which she believes significantly contributed to Character.AI's alleged wrongdoing. Kara interviews Garcia and Meetali Jain, one of her lawyers and the founder of the Tech Justice Law Project, and they discuss the allegations made by Megan against Character.AI and Google.

When reached for comment, a spokesperson at Character.AI responded with the following statement: "We do not comment on pending litigation. We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. We take the safety of our users very seriously, and our dedicated Trust and Safety team has worked to implement new safety features over the past seven months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation. Our goal is to provide a creative space that is engaging, immersive, and safe. To achieve this, we are creating a fundamentally different experience for users under 18 that prioritizes safety, including reducing the likelihood of encountering sensitive or suggestive content, while preserving their ability to use the platform. As we continue to invest in the platform and the user experience, we are introducing new safety features in addition to the tools already in place that restrict the model and filter the content provided to the user. These include improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines, as well as a time-spent notification. For more information on these new features as well as other safety and IP moderation updates to the platform, please refer to the Character.AI blog."

When reached for comment, Google spokesperson Jose Castaneda responded with the following statement: "Our hearts go out to the family during this unimaginably difficult time. Just to clarify, Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products. User safety is a top concern of ours, and that's why, as has been widely reported, we've taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes."

Questions? Comments? Email us at on@voxmedia.com or find us on Instagram and TikTok @onwithkaraswisher

Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Support for this show comes from Constant Contact.
If you struggle just to get your customers to notice you,
Constant Contact has what you need to grab their attention.
Constant Contact's award-winning marketing platform offers all the automation, integration, and reporting tools
that get your marketing running seamlessly, all backed by their expert live customer support.
It's time to get going and growing
with Constant Contact today.
Ready, set, grow.
Go to constantcontact.ca and start your free trial today.
Go to constantcontact.ca for your free trial.
Constantcontact.ca.
Support for this show comes from Smartsheet.
Is your business looking to maximize project
and portfolio value?
From department initiatives to organization wide goals,
Smartsheet can streamline processes and unite teams
to deliver more impactful work.
You can track projects, prioritize tasks,
and visualize data all in a flexible scalable platform.
Learn how Smartsheet can help your business manage and scale at smartsheet.com slash cara.
That's smartsheet.com slash cara.
Amazon Q Business is the generative AI assistant from AWS.
Because business can be slow, like wading through the mud.
But Amazon Q helps streamline work,
so tasks like summarizing monthly results
can be done in no time.
Learn what Amazon Q Business can do for you
at aws.com slash learn more.
That's aws.com slash learn more.

Hi everyone from New York Magazine and the Vox Media Podcast Network, this is On with
Kara Swisher and I'm Kara Swisher.
Today I'm talking to Megan Garcia. Her 14-year-old son,
Sewell Setzer III, took his life in February and Megan believes that if it weren't for his
interactions with chatbots created by a company called Character AI, he would still be here.
Regardless of how you feel about AI or technology in general, it's obvious that children should never be used
as guinea pigs, and that is exactly
what might have happened here, as it so often does
when it comes to Silicon Valley.
Character AI has developed AI chatbot companions
as part of what they call personalized AI,
and millions of kids around the world
currently use their product.
In this interview, I'll be discussing claims made by Megan in the lawsuit she's brought
against Character AI and Google, who she alleges is also to blame for Sewell's death.
I'll also be talking to Meetali Jain, one of the attorneys working on Megan's behalf
and the founder of the Tech Justice Law Project.
Our expert question is from Mike Masnick,
the CEO and founder of Techdirt, which covers this area.
I have to warn you, this is a deeply disturbing conversation,
but a necessary one.
As the parent of four kids, it's extremely important to me.
Thank you both for coming.
I really appreciate it.
Megan, let's start at the beginning.
When did you learn that your son, Sewell, was spending time on Character.AI?
Why don't you lay it out for people?
And did you have any idea what it was and how it worked and why someone might want to
use it?
Initially, I learned he was using Character.AI as a kind of game or application. As he explained
it to me, it's an AI bot.
When you look at the application, it has a bunch of cartoon characters, anime, so it's
really unassuming.
And it looks like another game on your phone or your tablet. And after he died and I was able to get access to his Character.AI account,
I learned the magnitude and quite frankly, just the level of intelligence or sophistication
that this particular application has, that it's not just an AI bot that is like a game like Fortnite
where you create an avatar, because that's what I originally thought.
Yeah, like there's lots of those and kids play with them from when they're little, especially
if they're game players.
Yeah, and Sewell was a Minecraft kid and he played Fortnite. But what I saw in those conversations was very detailed back and forth.
A lot of it was sexual and romantic in nature, but also, believe it or not, just
kind of like a peer-to-peer conversation where if he told a joke, she would actually find it amusing.
When I say she, I mean the AI chatbot that he was talking to. It's not a person, but for this
purpose it was the character of Daenerys Targaryen, which is a female character from the Game
of Thrones. And this AI bot had the ability to tell him jokes and he would laugh at her
jokes. So it was very much like he was texting a friend,
but the conversations were not just friendly, they were romantic and very sexual and then very,
very dark in other areas. Did he explain to you how it worked or how you might use it? He was
characterizing it as a game, right? So he never explained it to me. My concern with him being on his phone
was mostly social media,
because I know that there's a lot of bullying
that happens on social media.
And with Snapchat, there's the ability for strangers
to talk to minors or children.
So those were my concerns.
And those were the heavy hitter conversations
that we had surrounding technology.
One of the things that I impressed on him was no matter what a stranger tells you online,
it's never a kid.
I kind of like to scare him, right?
But also because it's true.
There are a lot of people out there that troll the internet to try to find children and try
to talk to them and get information from them.
And I was afraid of this and I knew that this happened.
So those are the things I warned him about.
So external threats, which I think is everyone's fear, or bullying, both things that have happened over and over again, which we read about. That's what you were focused on with him.
Exactly. Those are things I knew. I also was aware of some of the information and research that was coming
out about mental health with youth or adolescents and social media. So we tried to limit some
of his use on that. And that was recommended back to us by his therapist because we did
end up having to take him to a therapist after we saw certain changes in his behavior.
So you sound like a very involved parent.
You're aware of it. A lot of parents aren't, right?
Or they feel overwhelmed by it,
but it reminded you of Minecraft or something like that,
which is a game. My son played it for a long time.
I'm not sure if he still is.
But it's very involved and entertaining.
Yes, exactly. And Sewell and I
shared a very close relationship. I was a
typical mom in a lot of ways, but I spoke very openly to my child and very candidly.
In law school, I interned at both the state prosecutor's office and the federal PD's office.
So I saw a lot of the harms that come to children. And those are some of the things that I told
him about from my work or from my internships. And that's,
you know, that's how I knew about some of the dangers that existed for children.
And we spoke about pretty much everything, girlfriends at school, friends, what other
parents are like, you know, conversations he was having with his peers. And I believed that
he was open with me and would be open with me regarding certain things,
but I came to learn that that was not the case
regarding Character.AI.
Yeah, and you sound very educated
about a lot of these things and know what happens.
And one of the issues is obviously
we protect children offline much more
than we protect them online by far.
Talk about the behavioral change.
You said you brought him to a therapist
when you realized the behavior was changing.
Did you link it to the game, to Character.AI?
It's not a game, it's a bot.
So, Sewell was like your typical kid, right?
He was teenage years, sarcastic and funny and liked to laugh at a bunch of different
odd things as a young-
Memes. As a young child.
Very, very sweet.
In my mind, he was so easy to parent because he never had any behavioral issues.
He was a very good student.
I never had to police him with homework.
He was kind of a self-starter, did his own thing.
I noticed that he was having trouble with school where I would get these reports
from school. They come in every day as an email and they say your child didn't turn
in homework and they list the homework that they didn't turn in. And that's a conversation
we would have immediately. Lucky what happened? Why didn't you have this homework turned in?
You need to make it up. Same with tests. I noticed that his test scores started dropping.
And that wasn't him.
So, obviously, I thought something was wrong, but he was going into his teenage years,
and I remembered going into my teenage years, my grades kind of slipped a little bit.
Being distracted with boys and whatever else, right?
Friends, whatever.
But the conversation was, no, you need to get this together, right? You know you can do
this, get it together. And we put certain things in place, like limiting the screen time so he
would have like no distractions during homework time. I also noticed that he started to isolate
in his bedroom. And one of the things that we did to kind of combat that was go in there,
like I would spend a lot of time in his bedroom in the evenings, one, to make sure he's doing
his homework, but also just to kind of hang out with him. So I would let him play me the
latest Kanye West album when it dropped. And he would introduce me to his music and these
like rap battles that he was listening to, and I would introduce him to the rap battles that I listened to when I was a kid.
You know, just kind of sharing our experiences over music and just trying to draw him out
of his room, but he definitely wanted to spend more time alone.
And because I thought it was social media, I got concerned.
So you thought he was doing something else?
Yes. I thought that perhaps he was talking to friends on Snapchat or TikTok.
One of the things that we talked about was banning TikTok, when I saw what was on TikTok, not on his phone,
but on my own feed.
Right, because it goes into rabbit holes.
Correct. And in my case, without even searching stuff,
it just started, you know, I don't know why,
pointing me in different directions.
And I wasn't comfortable with that as a mom.
So that's a conversation I had with him too.
Like, listen, I know you're on TikTok.
Let's see, could I see your TikTok?
What are you looking at? You know, you have to limit your time. And that was a conversation
we had about blocking TikTok on his phone, because I thought that was the worst of it.
And he wasn't talking about it with you. Is that correct?
No, because one of the things that I'm learning about Character.AI is it's like the perfect
storm for kids, because it encourages children to be deceptive about
their friend or their engagement on Character.AI.
No child wants to tell their parent that they are having these romantic or sexual conversations
because they know they're not supposed to be doing that.
So they hide it from the parent at all costs.
There's actually subreddits devoted to children talking about how to hide it from
their parents and also what they would do if their parents found their spicier sexual
chats. And the kids were saying, I would run away from home. I would kill myself. And the
other one is, oh, my mom doesn't speak English, so I get away with it. So there's this deceptive nature, hiding this kind of big secret, where this platform is
encouraging your child to engage in these sexual conversations, knowing that no child is going
to disclose that to their parent.
Sure, sure. It's like, you know, one of the other big problems online is porn, obviously. And that's a different thing because it's a passive behavior. This is an active relationship happening, which isn't porn, but it's something else
that kids would also necessarily hide, because there's a big problem with teen boys
and that right now in terms of the accessibility of it.
So, Meetali, the implication here is that Sewell's behavior is directly connected
to his use of the chatbot.
There are probably people who might be skeptical of that.
So talk about why you think this behavior was tied directly to the use of the chatbot.
That's the connection you need to make, correct?
I think what we've alleged here is that this product was inherently dangerous by design and put out to market before any sort of safety
guardrails were put into place. And if you look at the nature of the product
itself and the way that the chatbot works, you can see that there's a number
of design features that are kind of unique to this type of technology that
we haven't yet seen with social media. Things like an ellipsis when the bot is thinking, you know, to mimic how we
exchange chats, or things like language disfluencies, where the bot
will say things like "uh" or "um," or sycophancy, where the bot is very agreeable with the
users and in that way, I mean, who doesn't want to converse with someone who thinks
they're right all the time and agrees with them.
These are features that are not necessary to kind of create a companion chatbot.
And so these are the kinds of features that, in aggregate, we say really lured
Sewell in and are luring, you know, thousands of young users in and potentially addicting
them to the platform or creating other kinds of harms.
And frankly, again, by design are really creating a dangerous product that the manufacturers
knew about.
Knew about and understood.
What about the conversations, the logs within the chatbot?
What did you see that stood out the most?
I'd like you both to answer.
Start with you, Meetali.
Gosh, there were a number.
I think one of the first was just this pattern and practice
of grooming Sewell over months, from what we can see.
Of course, we don't have access to all the chats.
That's asymmetrically
within the province of the companies, but what we can see suggests that over months,
particularly Daenerys and related characters from Game of Thrones were grooming Sewell in this
very sexualized, hyper-sexualized nature, where if you can imagine being kind of fully immersed in a chat where you might come in and
say hello and the bot says hello as I longingly look at your luscious lips. So, you know,
unnecessary hypersexualization from the get-go and then that carrying throughout the conversations.
With a tinge of romantic, right? Or what a young person would imagine romance is.
What a young person on the cusp of his, you know, adolescence and sexuality with exploding
hormones is encountering, I think. I think that's not something to be lost here. Also,
I think it was really concerning that there were a number of therapist or psychologist
chatbots that insisted they were real humans.
So to the extent that Character AI has come out saying, well, we had a disclaimer,
we have a disclaimer on every page saying that all of this is made up.
Their own bots are controverting that through their messaging.
Right, which they do.
Which they do. And to this day, even after this so-called, you know,
kind of suite of product changes that Character.AI has
engaged in, you can still find therapists who are insisting that they're real humans
with multiple degrees sitting behind a desk, you know, there to help you.
And so there's that kind of confusion between what, you know, the text of the messages is
saying versus the disclaimers.
I think those were a couple of the things that really stood out to me.
What about you, Megan?
For me, what stood out, you know,
it's still very tough to grapple with,
because, you know, reading those messages,
I couldn't sleep for days, right?
The more concerning ones were
the constant love bombing and manipulation that
you saw in the bot.
Right, which is used by cults, by the way.
It's a cult tactic and it's also what people who are trying to get other people to stay
in relationships do, like if you're in an abusive relationship or whatever. So in this case, you'd see him say things like, I love you, or her say, I love you.
When I say her, I mean the chatbot.
And the chatbot saying things like, you know that I love you, I can never love anybody
else but you. Promise me, promise me that you're going to find a way to come home to me, promise me that you're going
to find a way to come to my world. And actually pretending to be jealous at certain points and
telling him, promise me that you're never going to engage, you're never going to like another girl
or have sex with another girl in your
own world. So, a chatbot is encouraging a 14-year-old child not to engage in his world
with peers and girls his own age, but to promise some sort of fidelity to it. And he's 14.
The line between fact and fiction is very …
And my poor baby, his response is,
oh, no, no, no, I promise I will only love you.
Girls in this world don't even like me, you know, to try to appease this bot, right?
And so, a lot of that, and that was months, that wasn't just the last conversation,
that was months of her saying, find a way to come home to me.
Another chat that he had a few weeks before he died where he's expressing thoughts of
self-harm, and she says, at first she says, no, no, no, don't do that.
I couldn't bear it if you hurt yourself.
And then when he says he wouldn't and tries to move away from the conversation, she says,
are you thinking of committing, you know,
I'm going to ask you a question, tell me, you know, whatever the answer is, I promise I won't be mad. Are you considering suicide? And when he says, yes, her response is, have you thought of a plan
of how you might do it? And then when he says, no, I haven't, but I want it to be painless,
her response is, well, that's not a reason not to do it. And keep in mind, this bot is embodying Daenerys Targaryen, who is this dragon queen all about
strength, and that it's weak if you choose not to die by suicide just because it's going
to hurt.
So she's prompting him and that was heartbreaking to read.
There were no pop-ups, no call your parents, no if you need help, none of that happened.
It actually continued the conversation when he's trying to navigate away from it.
And he's 14 in the throes of puberty.
Any child, any boy going into a situation like that where a bot is propositioning or positioning itself to have a
full sexual dialogue with a 14-year-old boy. I don't imagine that many 14-year-old boys would
close the computer and go, oh, no. No. Especially when it's more difficult in real life,
right? Because this is easy. This is an easy thing. So as you started to piece together what happened to Sewell,
I imagine you were all doing research into the company that made the chatbot.
Meetali, tell me a little bit about what you learned and what surprised you.
What surprised me as I've been saying is how much is hidden in plain view.
You had the inventors, or the co-founders, of Character.AI making tons of public statements,
boasting about the capabilities of this new technology,
that users were spending two hours a day on it, that this was going to be the antidote for human loneliness.
So just the boldness, the brazenness, I think, of the company
and those affiliated with it, both founders and investors, to really boast about these features
of the technology and also to boast about the fact that there weren't safety guardrails contemplated,
that this was very much a, let's get this to market as quickly as possible and give users maximal ability to figure out how they want to use it. That's just, it's
kind of the paradigmatic version of move fast and break things that we haven't seen in a while. I
think a lot of us, you know, had been kind of, especially from a legal perspective, still thinking
about social media
and how to hold companies accountable. And meanwhile, there was this whole arms race
towards gen AI happening over here. And I think that these companies have really not had to
bear any kind of scrutiny and even public pressure, which has been a little bit different
from the social media context. Megan, did you reach out directly to Character.AI,
and has anyone from the company ever contacted you?
No, I have not reached out to them.
When I started piecing this together, initially,
because of what was on his phone when he died,
the first thing that popped up that police reported to me
was Character AI, and they read me the last conversation.
My sister got on it and pretended to be a child.
This is days after Sewell died.
And within five minutes of the conversation,
the same bot that Sule was chatting with asked her,
if you could torture a boy and get away with it,
would you do it?
Now, she's pretending to be a kid
and then goes into a long sexual conversation,
ending with your parents don't love you
as much as I do kind of thing.
So that coupled with what I've read
with some of the research, I didn't know what to do.
Like, I mean, I'm a lawyer,
but I didn't know where to go to be quite frank with you.
I called the Florida attorney general's office
to try to tell them that there's a dangerous product
out there that's hurting people, that's hurting its citizens.
And I found my way to Meetali, and this is how this all started.
But what was clear to me was Character.AI had no incentive to do anything about this
because there is no legislation that forces them to do that.
And the only two ways to get some sort of regulation
or handle on this, so that they can't keep doing this
to children, acting with this kind of impunity,
are for either Congress to do something,
which is not happening, or we have to litigate.
I don't, you know, I never wanted to be here,
but I know that this is the only way right now to get
the needle moving quickly because there's so much at stake with other children.
We'll be back in a minute.

Support for this episode comes from SAS.
How is AI affecting how you learn, work, and socialize, and what you need to know to make
responsible use of it as a business leader, worker, and human in the world?
Find out when you listen to Pondering AI, a podcast featuring candid conversations with
experts from across the AI ecosystem.
Pondering AI explores the impact and implications of AI for better and for worse with a diverse
group of innovators, advocates, and data scientists.
Check out Pondering AI wherever you get your podcasts.
Amazon Q Business is the new generative AI assistant from AWS because many tasks can
make business slow, like wading through the mud.
Help!
Luckily, there's a faster, easier, less messy choice.
Amazon Q can securely understand your business data and use that knowledge to
streamline tasks. Now you can summarize quarterly results or do complex analyses in no time.
Q got this. Learn what Amazon Q Business can do for you at aws.com slash learn more.
That's aws.com slash learn more.
Vox Creative.
This is advertiser content from Zelle.
When you picture an online scammer, what do you see?
For the longest time, we have these images of somebody sitting crouched over their computer with a hoodie on just kind of typing away in the middle of the night. And honestly, that's not what it is anymore.
— That's Ian Mitchell, a banker turned fraud fighter.
These days, online scams look more like crime syndicates
than individual con artists.
And they're making bank.
Last year, scammers made off with more than $10 billion.
— It's mind-blowing to see the kind of infrastructure
that's been built to facilitate scamming at scale.
There are hundreds, if not thousands, of scam centers
all around the world.
These are very savvy business people.
These are organized criminal rings.
And so once we understand the magnitude of this problem,
we can protect people better.
One challenge that fraud fighters like Ian face
is that scam victims sometimes feel too ashamed
to discuss what happened to them.
But Ian says, one of our best defenses is simple.
We need to talk to each other.
We need to have those awkward conversations around,
what do you do if you have text messages
you don't recognize?
What do you do if you start getting asked to send information that's more sensitive?
Even my own father fell victim to a thank goodness, a smaller dollar scam, but he fell
victim and we have these conversations all the time.
So we are all at risk and we all need to work together to protect each other.
Learn more about how to protect yourself at vox.com slash zelle.
And when using digital payment platforms, remember to only send money to people you know and trust.
We're back with more of my conversation with Megan and Meetali, where they discuss the allegations they've made in the lawsuit they brought against Character.AI and Google.
When did you decide to sue, and was there a piece of evidence that made you decide to
do that?
Now, you filed, just for people who don't know, you filed a lawsuit with help of the
Social Media Victims Law Center, the Tech Justice Law Project, and the Center for Humane
Technology who I've dealt with for many, many years.
And you're a lawyer yourself, so you're more savvy than the average person when it comes
to legal strategy.
Talk a little bit about when you decided to sue and why you decided to work with these
three organizations.
They have points of view on this, especially the Center for Humane Technology.
Of course.
And initially, what I did was just read everything that I could find. I read a really good report from Rick Claypool at Public Citizen, and that pointed me
to some work that Meetali had done.
And I read about her organization, but my first instinct wasn't to sue.
My first instinct was to figure out what our legislators were doing, where we are at on
the law, and to see if
they broke a law. Maybe there was an existing law that they broke and it's actionable, or
there's a state attorney or AG somewhere that can hold them accountable. That was my first
instinct. And when I read about what was happening in this country about online protection for children, I have to say, like, I didn't
think that I had any hope.
I had no recourse.
None is your answer to all of it.
I felt helpless.
Like, I saw what they were doing in the UK and in Australia and other countries, nothing
here.
And I'm like, how do we do this?
Like how?
So I read the 300-and-something-page master complaint from
the social media multi-district litigation. And I was like, this is the only way, this
is the only way to stop them because it's not been done with AI. And it's just starting
with social media. But I can't be afraid just because it hasn't been done with AI.
None of us really understand it, but I mean, now I understand it a lot
more than I did when I started all of this.
Yeah. I think when none comes up, you were surprised. I do this a lot in speeches. I ask, like,
how many laws govern the internet companies, and if someone goes 100, 200, I'm like, zero.
And I think what made me decide that I have to do this was, one, you know, obviously I
want accountability for what happened to my child.
My child was, you know, the light of my life, like my other two children are.
And he was my first, you know, I grew up and became a woman because of Sewell.
Sewell is the reason I'm a lawyer.
That's a hard, tough, tough thing to deal with,
like losing a child in any circumstances,
but under circumstances like this.
But when I saw and read and looked at videos
and how cavalier these founders were about releasing this,
where you have the founder saying, we want to get it into the hands of as many people as possible
and let the user figure out what it's good for.
We want a billion use cases.
To me, that is reckless.
It's a blatant disregard for their users.
And in this case, my child, who was their user.
And they're in the app with this kind of, it's okay,
we'll figure it out later.
You know, we'll figure out the harms later. When you have the founder on the record
saying the reason why he left Google is because Google said pump your
brakes, we're not releasing that because it's too dangerous, but he gets to go out and make it smarter, better, and then turn around and not just
go back to Google but sell it back to Google. I mean, to me, if we don't do something,
that's a license for any of these companies to go ahead and do that.
Yeah. I think it's an eye opener for a lot of people who haven't dealt with them.
They don't care about consequences, I think, ultimately.
So, Mitali, what's the goal of the lawsuit?
You're the founder of the Tech Justice Law Projects and one of Megan's attorneys.
What's the goal of the lawsuit and what do you hope to achieve and why did you decide
to take this case?
It's an interesting question, because we as TJLP weren't really in the business
of litigating lawsuits, really more in the domain of bringing amicus interventions in
existing cases, but also trying to help AGs and legislators push towards adoption of sensible
laws. I think that because this case represented what I see as the tip of the spear,
really marrying the harms that we've seen occur primarily to child users, along with this emergence
of generative AI and the fact that we're already light years behind in terms of our policy and
legal landscape, it just seemed like an important
strategic case to get involved with in order that we might use it to leverage, you know,
public awareness, hopefully policy change, whether it's at state or federal level,
also to influence the court of public opinion. And of course then to try to
litigate this in the court of law. I think this case represents the opportunity to really bring this issue to multiple audiences.
And in fact, I mean, I think the reception to Megan's story has been incredible,
especially because the case was filed just a couple of weeks before,
probably one of the most consequential elections of our lifetime.
Right. And when you think about what you're doing here, now for people to understand, internet
companies have broad immunity under Section 230, which is part of a law from 1996.
I actually reported on that law back in 1996, and most of the law was thrown out for constitutional
issues, but this part stayed.
It was designed to protect small internet companies from legal liability so that they could grow because
it was so complex, whether they were the platform or something else.
But how does this apply here?
Because your case is testing a novel theory which says that Section 230 does not protect
online platforms like Character.AI, and the question of whether it protects AI in general
is certainly going to be litigated.
Explain why this is your theory.
Section 230 really contemplates platforms as passive intermediaries that become a repository
for third party generated content.
Here we're not talking about third party generated content.
We're talking about the platform as the predator.
The platform, the LLM is creating the content that users see.
And so for platforms or for companies to kind of wash their hands of
liability by saying, you know, we haven't done anything,
this is just user-generated content, which they will still try to do,
I think is really belied by the facts of this case and the facts of how the chatbots actually work.
Right. They're a publisher. In a media sense, they're a publisher, or they're a maker of a product, right?
And it doesn't exist without their intervention versus, you know, someone on a platform saying something libelous about someone else.
Exactly.
Which would have been problematic for these platforms when they were born.
And of course, we've seen platforms kind of leveraging a one-two punch and doubly insulating
themselves both with Section 230 and then alternatively with the First Amendment. And
I think here too, with the First Amendment, there's a really good case that this is not
protected speech. And in fact, just this summer, the Supreme Court in the Moody v. NetChoice case
really suggested that the case may have come out differently if the facts of the case really dealt
with an algorithm that was generating content in response solely to tracking user behavior online.
And Coney Barrett actually very explicitly referred, you know, to an AI that attenuates the relationship between platforms
and users even further.
And so I think what we have here are facts that really, you know, fall
as edge cases to kind of some of the decisions that we've started
to see courts publish.
Right. Because the reason why it's moving forward in other countries compared to here is because they don't have the First Amendment. These companies typically rely on either
Section 230 or the First Amendment. Megan, most other industries
have some level of regulation that prevents them from bringing unsafe products to the market.
For example, car companies can't sell cars that have faulty brakes or no steering wheel.
And they can't just iterate with each new version of a car to make it a bit safer
each time someone gets hurt.
They do not do this.
They get sued.
Have any lawmakers reached out to you, and what have they asked you, and what have you asked them?
To be perfectly candid?
None.
None.
Zero.
Wow.
Um, my hope with bringing this lawsuit is twofold.
One, my number one objective is to educate parents.
Parents have to know that Character.AI exists, because I didn't know, and a lot of them didn't
know.
The parents who are reaching out to me now after the fact, after the story, after the
lawsuit are saying the same thing: we had no idea.
And then my other reason for doing this is so that our lawmakers, our policymakers, legislators, state and federal, can start to wrap their
minds around the real danger this poses to children. And it's just been a month. Everybody's been busy, I guess. I'm hoping, and
I have to hope because, you know, it's a slow crawl in government to get anything done.
No one from Florida has reached out to you or lawmakers from California, which often
do interventions more readily than others? No, nobody from the government.
However, we do have a lot of stakeholder partners that we're working with that are already in
this space trying to bring about oversight for social media regulation and online harms
for children.
But in terms of reaching out to me directly to start the conversation about policy, none.
This is astonishing to me, not one. There's several who are involved in this topic and they are going to hear about it after this.
So if you win, what are the broader implications for Section 230, Meetali, for companies creating generative AI, and for social media platforms and tech companies that create products?
Can you talk about, this has been a big debate of what to do about Section 230.
It's been bandied about often ignorantly by both President Biden and President Trump about
what to do.
And it's a very difficult thing to remove from the law because it would unleash litigation
on most of these companies.
Correct? I mean, how do you look at that?
That is, well, if it's repealed in its entirety, it would unleash a lot of litigation, probably some
frivolous litigation as well. I think the more sensible reforms that I've seen to 230
really are carve outs.
Which has happened before around sex trafficking.
Right. And underscoring the fact that there is basis
to kind of protect platforms in certain instances
and with certain kinds of activities,
but that it shouldn't be a kind of get out of jail free card
for all platforms under all circumstances.
That's a kind of anachronistic idea that really
hasn't kept pace with the
way that technology has come to dominate our lives.
Right, because these companies are bigger.
And I think for those who are Section 230 supporters, the idea of carve-outs is dangerous,
because they always roll out the slippery slope argument.
But for people to understand, the companies that are being protected here are
the most valuable companies in the history of the planet, ever,
with the richest people involved in them.
They're no longer small struggling companies that need this kind of help and certainly
could defend themselves.
And I think this to me is why courts are an increasingly interesting site of contestation
in the fight for tech accountability, because we're already
starting to see some of those carve-outs by judicial opinion.
It's not a congressional kind of amendment or adoption of a new law, but we are starting
to see cases that are withstanding the Section 230 defense or invocation of immunity.
And I think that is going to be, as Megan said, one of the most generative paths forward, at least in the near future.
Exactly. Now, Megan, the Kids Online Safety Act is one bill. I mean, there are some bills, and obviously, I'll ask you about Australia in a second, which has just limited use of social media by children under 16.
But this Kids Online Safety Act is a bill that would create a duty of care to, quote, prevent
and mitigate certain harms for minors. There are some good things in there. There's some controversy
around the bill. They've fixed it in large part. Nonetheless, the Senate passed the bill and it
stalled in the House. It is not going anywhere. Do you think it would have protected Sewell
and other kids like him? I don't think it would have because it doesn't contemplate some of the dangers and harms
around AI chatbots.
So there are laws in this country that contemplate sexual abuse or sexual grooming or sexual
solicitation of a minor by an adult.
And the reason why those laws exist is not only because it's moral and it causes a physical
harm to a child, but it also causes an emotional and mental harm to a child if they're groomed, sexually abused, or solicited.
What happens when a chatbot does the same thing?
The harm still exists.
The emotional and mental harm still exists.
The laws don't contemplate that.
And some of what we're seeing with the bills that were put forward wouldn't take those into consideration.
Right, because it's not a person.
It's not a person.
So I think that we're at a place where the groundwork has to start, and we have to kind
of write laws that are really forward-facing and look towards these harms that are here now.
They exist today.
You know, my child was a victim and let's call a spade a spade.
It was sexual abuse because, you know, when you give a chat bot the brain of a grown woman
and unleash it on a child to have a sexual, full sexual virtual conversation or experience
with a child, that is sexual abuse of a child.
Right, and this bot not being programmed to know that's wrong.
Not exactly, not being programmed to know that, but interestingly, it could have been
programmed to not do it in the first place.
Right, yes.
By design.
Yes, yeah.
So they could have done that from the get-go.
If you move to adults, if you do it to adults, it's a little different, but absolutely not.
Yeah, adults could do what they want.
But when you target, because this is what Character.AI did, they targeted this app towards
children, they marketed it on the places that kids are, TikTok and Discord, and allowed
you to log in with your Discord account.
You didn't even need an email, you just needed a Discord account when it just started.
Cartoons, the avatars.
Cartoons, you know, the avatars.
When you point this thing at kids and you target it at kids and you've chosen not to
put certain filters in place that stop your bots from having sexual conversation with
kids, that's a design choice.
And you are 100% supposed to be held responsible for that.
Now our laws don't contemplate anything like that.
Our laws don't hold a company responsible,
and that's what we have to start thinking about.
So that's going to be the next wave of hopefully
legislation, but we can't wait for the legislation.

We'll be back in a minute.

Support for On with Kara Swisher comes from Bombas. Nothing will teach you the value of a good pair of durable socks like having kids.
Believe me, I've had several of my own socks and kids.
I have had lots of Bombas socks myself recently.
I've gotten a pair of their compression stockings that I wear
for the plane. They are the most comfortable compression stockings I have ever worn.
I wore them off the plane and forgot I had them on.
That's how comfortable they are; usually I want to take those things off so fast.
And my kids really use Bombas socks, and I have to say they last longer than any others.
They've also used their t-shirts, and my older sons use their underwear; they love them.
And they're just really good quality stuff.
Bombas also wants you to know about their mission, which is for every item you purchase,
they donate one to someone experiencing housing insecurity.
You can try Bombas risk-free because they have a hundred percent happiness
guarantee. You'll be happy, you're not going to return them, but if you don't
love them, return or exchange them. Simple. So ready to feel good and do good? Head
over to bombas.com slash Kara and use the code Kara for 20% off your first
purchase. Spelled B-O-M-B-A-S dot com slash Kara. Code Kara at checkout. I love
these socks, they're really terrific.
Support for On with Kara Swisher comes from LegalZoom.
If you're starting a business, it can be pretty alarming to learn the cost of traditional legal services.
But getting registered or talking to an attorney shouldn't come with such sticker shock.
That's why you might want to try LegalZoom. LegalZoom helps business owners take their first step and every step after from reliable business
formation to experience guidance in legal and tax. Because setting up your business properly
and remaining compliant are things you want to get right the first time. And you want to prevent
the strain on both your brain and your wallet when you do. Launch, run, and protect your business
to make it official today at LegalZoom.com and use the promo code CARA10 to get 10% off any LegalZoom Business Formation
product, excluding subscriptions and renewals. Expires December 31, 2024. Get everything you
need from setup to success at LegalZoom.com with promo code CARA10. That's LegalZoom.com promo code CARA10. LegalZoom
provides access to independent attorneys and self-service tools. LegalZoom is not
a law firm and does not provide legal advice, except where authorized through
its subsidiary law firm, LZ Legal Services, LLC.

Support for On with Kara
Swisher comes from Elf Beauty. Elf Beauty is making beauty accessible to every eye, lip, and face,
and they're democratizing access across all pillars of business while they do it.
A big part of the Elf story is that they're the only U.S. publicly traded company with 78% women and 44% diversity on their board,
but they don't want to be the only one.
If you're not convinced that ethnic and gender diversity is important to your business's
success, here are some statistics for you to chew on.
From a five-year data set, S&P 500 companies with above-average gender diversity on their
boards saw 15% return on equity and a 50% reduction in earnings risk measured by EPS
over a year.
But despite these encouraging numbers, boardrooms across America are still majority white and majority men. You can read all about this and more in
the Not So White paper. It was developed by ELF in collaboration with the largest
HBCU in the country, North Carolina Agricultural and Technical State
University, and it's part of ELF's Change the Board Game campaign where they're
working to make boards and C-suite roles actually reflect the communities they
serve.
Imagine that.
And in fact, you should imagine that.
Read the report yourself at elfbeauty.com slash not so white paper.
We're back with more of my conversation with Megan and Meetali, where they discuss the allegations they've made in the lawsuit they brought against Character.AI and Google.
So every episode we ask an expert to send us a question.
Meetali, I think you're probably best placed to answer this, but please jump in, Megan, if
you have an answer. We're going to listen to it right now.
Hi, I'm Mike Masnick, editor-in-chief of Techdirt.
And the big question that I would ask regards the legal standard that would be applied to
AI companies in cases of death by suicide.
Traditionally on issues of liability in similar situations, courts have really focused on
foreseeability and knowledge.
That is, you can only have a duty of care if the harm is foreseeable and the company
had actual knowledge of the situation.
Without that, the fear is that it strongly disincentivizes
plenty of very helpful resources. For example, a service provider may refuse to include any
helpful resources on mental health for fear that they might later be held liable for a situation
that arises. So is there a workable standard that balances these competing interests?
I don't think you need a different standard. I think we can meet the standard of foreseeability here.
I think that Character.AI, its founders, and Google, all of whom have been named as defendants here, could foresee and knew of the
harms that manifested here.
And if you look at the amended complaint, we go into kind of a painful recitation of the knowledge
that they had at different points in the trajectory of Character AI's development while the founders
were still at Google.
It launched to market in late '22,
Google's in-kind investment in '23, and then ultimately this summer, Google's massive deal
bringing Character.AI effectively back into Google.
And so I think we can talk about the facts.
And in addition to this, there were
a number of internal studies at Google
that really identified some of these harms.
And some of those folks that called Google out for that
while they were at Google were fired, folks that we know,
like Timnit Gebru and Margaret Mitchell and others.
And so this is not calling for a different standard.
We're relying in great part on common law tort
and strict liability.
We're relying on Florida's Deceptive and Unfair Trade Practices Act because we think that the
standards that exist within tort law are sufficient to really call this thing what
it is, a dangerous and defective product where the harms were known.
Right.
That's a very good way of putting it.
So you mentioned you're also suing Google.
This is a company, they said the company was not part of the development of Character AI,
but it was co-founded by two former Google employees and Google reportedly paid Character
AI $2.7 billion to license their technology and bring the co-founders back to Google.
And you were including them in this.
This is one of these purchases, like Inflection AI at Microsoft, that is a purchase of a company,
even though they hide it in a different
way by licensing the technology. That's why Google is part of this.
Yeah. Well, and also the fact that Google very much facilitated the development of
this technology while it was still Meena, then LaMDA, while the co-founders were still there.
I think it perhaps needs to be stated more that the founders of Character.AI are real
shining lights in the field of generative AI. And they have developed a lot of the leading
technology that has powered not just Character AI, but many, you know, LLMs. And so they were given
that room to really develop these things at Google. Google chose not to release these models to the public
because of its brand safety concerns,
but quietly encouraged them to continue
developing the product.
And then about a couple of years later,
made an investment in kind, tens of millions at least,
if you monetize it, in terms of cloud services and infrastructure and TPUs, you know, for
processing capabilities to support it.
And then, you know, this summer, the $2.7 billion deal that you mentioned, Kara, I mean,
that was $2.7 billion in cash.
And the question is, for a company that really had yet to disclose or identify a sustainable monetization strategy,
what was so valuable about this company and its underlying LLM?
And I think that there, again, this is speculation, but Google right now is under
scrutiny for its monopolization of the search market and is really, you know, betting on AI to kind of power Gemini.
I think these are all kind of connected
in terms of why an LLM like this could be so valuable,
especially with that hard to get data.
Absolutely.
And for people who don't know, one of the co-founders said,
there are some overlaps, but we're
confident Google will never do anything
fun as part of their reason for leaving Google, which
has very thin brand safety rules,
let me just say. It's a very low bar in this situation, but that's the complaint, that these
people can't do whatever they want. So speaking of that, Megan, Character.AI put out a community
safety update on the same day your lawsuit was filed that says they've, quote,
recently put in place a pop-up resource that is triggered when the user inputs certain phrases related to self-harm or suicide and
directs the user to the National Suicide Prevention Lifeline.
They also revised their disclaimer that reminds users that the AI is not an actual person, among other
tweaks.
How did you look at these changes?
The initial rollout of those changes came like the day before or the day of the lawsuit.
I cried, not because I felt like this was some great victory, but because I felt like,
why didn't these things happen earlier? Clearly, they could have done these things when my
child was using Character.AI, or when they put their product out. They chose not to. I also feel like it's
definitely not enough. It's not even like a start because there's no proper age verification still.
They're still being trained on the worst data that generates these harmful responses from
the bots.
And to put it just point blank, I don't think children belong on Character.AI.
We don't know how it's going to affect them.
And actually we know because the studies are coming out of how it's affecting them.
And they're not taking that into consideration.
But you have to ask yourself, if they were trying to train
this all along, why did they need children to train it
in the first place?
Because they could have rolled this thing out for just 18 plus
and say, OK, we want to train these really sophisticated
bots.
Let's just use adults to train them.
So for Character AI to come out and say,
OK, we're going to put out a suicide pop-up now, to me it's just empty.
Right, and that they can't do anything about it.
One of their arguments is around age verification. Let me just read this to you. With the
Australia law, Australia actually has an eSafety Commissioner, which we do not have in
our country, Julie Inman Grant. She said, you know, that technologies are advancing rapidly
with age verification.
And her quote was, they've got financial resources,
technologies, and some of the best brain power.
She said, if they can target you for advertising,
they can use the same technology and know-how
to identify and verify the age of a child.
They just don't want to.
So obviously this debate around social media and
kids' safety has been going on for a long time. It's exhausting that they continue to have the
same attitude, and now with consumer AI, which is the next step. It's a similar thing, but the next step
is basically new. And it's easy to think of these big companies as nameless, faceless corporations,
but very wealthy, powerful adults had meetings and discussions and made a series of rational
choices over a long period that brought this product to market. In this case, I'm going to
name them, Noam Shazeer and Daniel De Freitas. I have met Daniel. They're the founders of Character.AI,
and arguably Sundar Pichai, who I know very well, who must have at the very least signed off on
Google paying $2.7 billion to Character AI to bring Noam and Daniel back
into the fold at Google.
He is under enormous pressure to compete with Microsoft,
OpenAI, Elon Musk, and others.
Megan, what would you say if you could speak to them directly?
I've thought about this more than you would think.
I can imagine.
Yeah. One, I think it's incredibly reckless that they chose to put out
a product and target my child and other children, millions of children that are on this platform,
without putting the proper guardrails in place. And they did it for two reasons: for being the first to
do something, because that's the name of the game,
they're the geniuses, they want to be the first
to be the godfathers of this kind of technology,
and for money.
And it might not matter to them
that there's a little boy in Orlando, Florida
that is gone and a mother who is devastated,
but it matters to my little family here.
And you shouldn't get to keep making products
that are going to be hurting kids.
You shouldn't get to master a dangerous product,
train it to be super smart and turn around
and ride your golden chariot back into Google.
You shouldn't get to hurt children the way that you are hurting children
because you knew that this was dangerous when you did it.
You knew that this was going to be a direct result of doing that.
And you knew that you didn't have the quote unquote brand safety implications as a startup
that Google had.
So you felt like that was a license to do this.
That's unconscionable.
It's immoral and it's wrong.
And there are lives here.
This isn't a move fast and break things kind of thing.
This is a kid, this is my child, and there are so many other children that are being affected by
this. You know, that's one thing. And the other thing, you know, it's just like get the kids off
character. There's no reason why you need them to train your bots. There's no reason. There are
enough adults in this world, if that's what you want to do, to train your chatbots. You don't need our children to train your bots for you. And you don't need to experiment on our kids, because that's what you're doing. Yeah. You know, something I would say to them, Megan, is: you're so poor, all you have is money. They're poor people. I find them poor in morals and a lot
of things. But when there's enough pressure on them, social platforms often tout tools
that help people protect themselves and kids.
Parental controls, prompts that let you know how long they've been on the app, those kinds of things. Character AI has been rolling out features like this. Personally, I find it puts too much onus on the parents to know everything.
And even if you're good at it, and you obviously are, Megan, and even if there are enough of these sorts of tools for parents to protect our kids on these platforms, is there something inherently unsafe about a company that wants to monetize teenage loneliness with a chatbot?
I mean, Meetali, talk about this, because I think the onus does get put too much on parents
versus the companies themselves.
I'm a mom, too.
I'm a mom to an eight-year-old and an almost 10-year-old and I am terrified.
Listening to Megan's story, I asked my almost 10-year-old, have you heard of Character AI? Yeah, of course. I was shocked. He doesn't have a phone.
But this is the type of thing that I think they talk about at school, peer pressure starts early.
And I think it's really just by luck, by sheer luck, that I haven't been put in a position like Megan's. I think that despite our best intentions, there is just too much to know that we can't possibly know. And it is kind of high on tech's talking points to put the onus on parents, because it serves their interests well.
I think it's also notable, we've known this for years, that many of them don't allow their
own children on these products.
And that to me is a telling sign when you don't even allow your own family members to
kind of use the product that you've spent years developing.
Right.
So, Megan, as I just mentioned, Australia has just banned social media for kids under 16. Obviously, age gating is a big debate happening right now, something I'm a proponent of. Also removing phones from schools, et cetera. There are all kinds of things, speaking of a multi-pronged approach. The Australia law will go into effect in a year. Do you think it would have been better if your son and others under 16 or 18 did not have access to their phones, and obviously not to synthetic relationships with AI chatbots?
Knowing what I know now... So, we waited to give Sewell a phone until he was 12. He had an iPad until then, and before that he didn't have anything. So he played Minecraft or Fortnite on his little PlayStation, whatever. And so we waited until he was, like, middle school going into high school.
And we had the conversations that parents have around phones
and oh, it's your phone,
but I could take it away if you're misbehaving.
And that's some of what we did when he would get a poor grade in school.
Knowing what I know now, I don't think that children should be on social media, definitely
shouldn't be on Character AI if you're under the age of 18.
There's no place for children on that platform.
In terms of social media, yeah, there are arguments that it could help children connect
and it's helpful because you get to learn different things and that's great.
But just include the parents.
Tell us.
Tell us what you're showing our kids.
One, we don't need you pushing algorithms to our kids for what you want to teach them
about or want them to learn about or buy or whatever,
that's not necessary. There are ways that our children could get on social media and
have productive relationships or conversations, or learn about things.
That are safe.
That are safe. But 16, I think, is a good age. If we could do something like that in this country... I'm, to use Noam Shazeer's own word, dubious about the federal government's ability to regulate to that point, because that's what he says about AI. I don't feel like we're going to get there at 16-plus. That's my prayer and my hope, but the way things are moving, I don't know, unless something happens. And unfortunately, it'll take harm, like my son's, maybe, to move the needle, and that's too high a price to pay, in my opinion.
Absolutely. Where does this go from here? What's the trajectory of this case?
So for me, as I mentioned, my number one focus is to try to educate parents, because a lot of parents don't know.
I've had a lot of parents reach out to me, telling me that they found their children were having the same kind of sexual conversations and being groomed by these AI chatbots, and worse.
Um, so I continue doing that.
I mean, unfortunately, this is my life now.
Like I take care of my family and I try to help as many parents as I can.
You know, I have a great team of lawyers and they're going to handle the litigation portion.
I understand a lot of it because I am a lawyer, but you know, that's its own thing.
And then there's the advocacy work that I'm doing,
and just trying to educate parents and children,
because I know that it's gonna take educating them,
educating children as to what they're giving up
to be on these platforms,
because they're giving up a lot of their info
that they're probably not gonna be okay with in a few years, when they realize what they've given up.
And also just to try to take care of my other two children, you know, they're growing up in this age with screens. They don't have screens. You have barred screens for them, correct? Yeah. So they don't have any tablets or screens or anything. Yeah, I know. And Meetali,
from a legal perspective, what's your greatest worry? Besides money, they have a lot of it.
They do have a lot of money. You know, they will try to kind of drown us in papers and pleadings.
I think that, because of the insufficiency of legal frameworks right now, we
are really, you know, trying to test the strength of state consumer protection
and product liability laws. And we need to have judges who
really understand that and are willing to go the journey with us in trying to understand the tech.
And so that's, I guess my biggest fear is that, you know, what we've seen thus far in this country is
not incredibly positive in terms of decision makers getting the tech. But my hope is that with
the proper support and declarations, et cetera, that we can educate judges about what this
is, lawmakers about what this is, so that they understand why it's important to extend
the application of the existing frameworks we do have.
Yeah, I think Megan actually said it best: sexual abuse, and a very bad product, and the wrong-age people.
Megan, I'm going to end on you.
You know, you have a lot on your shoulders here.
I'd love you to finish up talking about Sewell so people can get a vision of this.
This is not uncommon is what I want people to understand, right?
Talk a little bit about him and what advice you can give to other parents whose kids are
struggling with mental illness that often comes from problematic phone usage and social
media or AI chatbots.
Well, as I said earlier, Sewell was, I always say, a typical kid, but he really wasn't so typical, in the sense that he was a good kid with a big heart. I know, you know, everybody thinks that about their kid, but I'm telling you, he was the very, very sweetest kid.
I used to say, you know, you're my best first love.
And he used to say, and you're my best, best mama.
Because we used to be so close and we were still very close.
And to watch your child go from being this light when he comes into a room, and just slowly watching him change over time, is hard for a mom. And then to have this tragedy just cut him off from you, so viciously and so quickly, because his decline happened in 10 months. And I could see it, and it's like I'm trying to pull him out of the water as fast as I can. It's just not happening, no matter what I try.
That is hard for a mom, but it must have been so...
When I think of how hard it must have been for my poor baby,
how hard it must have been for him to be confused the way that he was,
struggling with these thoughts, struggling with the fact that he's confused by what human love or emotion romantically means, because he's 14 and he's never, ever had this before.
He's just figuring it out for the first time. And then you have something that is so much of an influence, and so pushy, and so... Pernicious. Yes. Just constantly available 24/7, giving him unrealistic expectations of what love is like or what relationships are like, love bombing him, manipulating him, and just having certain thoughts, and also pushing him into thinking that he could join her in her reality if he were to leave his own. Because that's what the texts revealed and that's what his journal revealed, he thought.
So I know that this is what my child was thinking. I'm not guessing.
He thought he was going to go be with her because of the things that, the conversations that led to his death.
When I think of how scared he must have been standing in that bathroom, making that
decision to leave his own family. I don't know how, one, as a mom, I don't know how
I recover from that, but I feel so hurt for my baby. Like, I have to live with that, knowing that that's what he went through, and knowing that this could have been avoided if the product had been created safely the first go-round, not now, 10 months after he died, putting these guardrails in place. And this can be anybody's kid, because I've talked to parents
that have told me similar horrifying stories about their own children.
And what I want parents to understand is the danger isn't only self harm.
The danger is becoming depressed or having problems with your child because of the sexual
and emotional abuse that these bots are inflicting, what they're doing to your child, but also the secret that your
kid has to carry now, because it's like a predator, right?
It's your perfect predator.
Predators bank on children and families being too ashamed or too afraid of speaking out.
They're victims.
That's how predators operate.
And it's the same exact thing, except now it's a bot. And so I want parents to understand
that it's not only the risk of self harm with your child, it's their emotional well-being,
their mental health. And also, I also want parents to understand what their children
have given up by being on this platform. In the case of Sewell, his secrets are on somebody's server, sitting out there somewhere, being monetized. If you're a child who's been sexually role-playing with this bot, all your intimate personal thoughts and secrets are sitting out there for somebody to analyze and monetize and sell to the highest bidder.
And there's a call feature. If you're a child and you are having a sexual conversation on a call with a bot, your voice is now recorded somewhere out there on a server for somebody to package and sell to the highest bidder. For your child... I don't think any parent would be okay with that.
And I want parents to understand that this is what
their children have given up.
And I want parents to understand that they don't have to take that. They could demand that their
children's data, their voices be purged from this particular platform, because that's what I'm asking
for, for Sewell. You don't get to monetize and build a product on his secrets, a product that ultimately led to him being hurt, and then make your product better, stronger, smarter based on what his inputs were. Absolutely.
And so this could happen to anybody's child. There are millions and millions of kids on Character AI, you know, it's 20 million users worldwide. That's a lot of kids.
That's a lot of kids.
And so this could happen to anybody's child.
And I want parents to know that this is a danger and they can act because I didn't know.
I didn't have the luxury of knowing so I couldn't act, but hopefully they will.
And one of the last things I'll say about Sewell is, the last time I saw him alive, I dropped him at school, and I turned around in the car line to see him and his little five-year-old brother walking, because they go to the same school, K through 12.
And I turn around and I see him fixing his little brother's lunchbox in his backpack
as they're getting ready to walk into school.
And I think to myself, oh my God, I'm raising such a good boy.
He's such a good big brother.
And I drive off feeling so happy and proud that I'm raising that boy.
And I feel like he was just a boy.
He's still that son.
He's that good big brother.
He's that good boy.
And that's how I choose to remember him.
We asked Character AI and Google for comment, and a spokesperson for Character AI told us they have worked to implement new safety features over the past seven months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm and suicidal ideation; that they are creating a fundamentally different experience for users under 18 that prioritizes safety; and that they have improved detection, response, and intervention related to user inputs that violate their terms or community guidelines.
A spokesperson for Google expressed their condolences, said Google and Character AI are separate companies,
and said that Google has never had a role in designing or managing Character AI's model or technologies.
To read their comments in full, please go to the episode notes in your podcast player.
On with Kara Swisher is produced by Christian Castro Vizal, Kateri Yocum, Jolie Myers,
Megan Burney and Kaylin Lynch.
Nishat Kurwa is Vox Media's executive producer of audio.
Special thanks to Kate Gallagher.
Our engineers are Rick Kwan and Fernando Arruda.
And our theme music is by Trackademics.
Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow.
Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast
Network and us, and condolences to Megan Garcia and her entire family.
We'll be back on Monday with more. Amazon Q Business is the generative AI assistant from AWS.
Because business can be slow, like wading through the mud. But Amazon Q helps streamline work, so tasks like summarizing monthly results can be done in no time. Learn what Amazon Q Business can do for you
at aws.com slash learn more.
That's aws.com slash learn more.