The Decibel - Machines Like Us: This mother says a chatbot led to her son’s death
Episode Date: January 16, 2025. In February 2024, Megan Garcia's 14-year-old son Sewell took his own life. As she tried to make sense of what happened, Megan discovered that Sewell had fallen in love with a chatbot on Character.AI – an app where you can talk to chatbots designed to sound like historical figures or fictional characters. Now Megan is suing Character.AI, alleging that Sewell developed a "harmful dependency" on the chatbot that, coupled with a lack of safeguards, ultimately led to her son's death. They've also named Google in the suit, alleging that the technology that underlies Character.AI was developed while the founders were working at Google. 'Machines Like Us' reached out to Character.AI and Google about this story. Google did not respond to a request for comment, and a spokesperson for Character.AI said, "We do not comment on pending litigation." Host Taylor Owen spoke to Megan Garcia and her lawyer, Meetali Jain, to talk about her son and how chatbots are becoming a part of our lives – and the lives of children. If you or someone you know is thinking about suicide, support is available 24-7 by calling or texting 988, Canada's national suicide prevention helpline.
Transcript
Hi, it's Menaka.
Today, we're bringing you an episode from another Globe and Mail podcast, Machines Like
Us.
It's a show about technology and artificial intelligence.
Host Taylor Owen interviews entrepreneurs, journalists, scholars, and lawmakers on the
technologies that are changing our world, as well as transforming and potentially endangering our lives.
New episodes are released every other Tuesday.
You can subscribe to Machines Like Us
wherever you listen to podcasts.
A quick warning before we start.
This episode discusses suicide.
Please take care while listening.
Hi, I'm Taylor Owen.
From the Globe and Mail, this is Machines Like Us.
In February of last year, Megan Garcia's 14-year-old son Sewell took his own life. As she tried to make sense of what happened, Megan discovered that Sewell had fallen in
love with a chatbot on Character AI, an app where you can talk to chatbots designed to
sound like historical figures or fictional
characters.
Now Megan is suing Character AI, alleging that Sewell developed a harmful dependency
on the chatbot that coupled with a lack of safeguards on the app ultimately led to her
son's death.
Character AI has said they don't comment on pending litigation, but a spokesperson
told us that the company's goal is to provide a space that is both engaging and safe for
its users, and they've recently launched a separate model of the app for their teen
users.
Today, though, I'm joined by Sewell's mom, Megan, and her lawyer, Meetali Jain, to talk
about what happened to Sewell, and to try to understand the broader implications
of a world where chatbots are becoming a part of our lives
and our children's.
So first of all, just thank you so much both for doing this.
Thank you, Taylor, for having me, and thank you for allowing me to come on your show to tell Sewell's story.
And Meetali, thank you too for being here.
Thanks, Taylor.
So this is a story that's been covered around the world.
But I just wanted to open by giving you a chance, Megan, to tell Sewell's story.
He was only 14 when he died. And I'm wondering if you could tell us
what kind of kid he was.
Sewell was my first born.
He was 14 and he was just starting to come into his own.
One of the things I think back to now
is how witty and kind of sarcastic a teenage kid he was,
but in all the sarcasm and the kind of witty banter going back and forth, I could see him
starting to make connections with this world, like understanding his place in terms of basketball and teammates and his siblings and his responsibilities
at home when his little brother started
at the same school as him.
One of the things that I told him was,
my big brother walked me to my class every day.
My mom didn't take me to my class.
You guys go to the same school
and this is what I expect from you.
And he kind of like, I could see him make the connection and understand.
So those are the things that I look forward to, right?
Making those connections with my child and watching him blossom and looking at his little
mind work as we're talking.
But he was never a rude kid, very sweet.
Even when he got in trouble, not rude, not one to talk back.
He loved basketball, loved sports in general, but he loved basketball especially.
He played basketball and he loved Formula One racing.
He would wake up for the qualies and the race day early in the morning.
So your typical kid, very big on family.
Then he had friends, a lot of acquaintances at school, very popular,
but he had very close friends that he had since the time he was born, really.
After the shock of Sewell's death, did you immediately go looking for
answers at what had happened?
Or can you tell us a little bit about how that
process played out? One of the things that I keep saying is, like, the last 10 months of Sewell's life I keep replaying in my mind.
I look back at the pictures and the videos in my phone, and I could kind of see where the decline was starting to happen.
And at first, I just thought, well, maybe this is teenagehood.
And once he wanted to quit the basketball team,
I've always championed my child's interests,
and I always tried to push him.
But it became a fight to get him to get
up and go to these games and practices.
And I had to have a conversation as a family, we had to have a conversation.
And one of the things that I didn't want to do was to make the situation worse.
So after pushing him a little bit, I even tried to bribe him to stay on the team.
We kind of decided to let him take a break.
I said, maybe he's tired.
And by that time, his grades were not doing too well either.
So we thought maybe it would be best to take a break from basketball and come back next
season.
But when I think about all of that together and some of the problems he was starting to have at school,
I could, you know, looking back, I could like connect the dots.
And I did, you know, I took him to a therapist.
But even leading up to that, I was still in utter disbelief and shock
when I heard the gun go off and when I found him. I didn't
know what to make of that, meaning I didn't know what happened. It took me a little bit
to figure out what I was looking at because it was so far removed from anything I expected.
He did not have a history of self-harm. Sure, he was isolating, but we were doing the things
that we could do to try to pull him out of that. And I felt like we had time to work with him
and to get him better. So I'm in shock. I get to the hospital and they tell me that he's gone.
And of course, you know, I was devastated. And I'm confused, because, like, why would he do this? I asked
myself all night, I didn't sleep. So the police called me the next day
after they looked at his phone. And they told me that when they opened his phone,
the very first thing that popped up was Character AI
and the messages that he was sharing with Character AI.
They asked if I knew what it was,
and they explained what it was to me,
and read me the last transcript of the messages.
And that's the one that's been reported on.
For those who haven't followed this, can you share what that last text said?
So, Character AI, I learned very quickly, is what you call a large
language model, and there are these agents on there, these chatbots.
It's a texting feature and there's also a call feature.
In Sewell's case, he was texting back and forth with a fictional character from Game of
Thrones, so it's not a person.
It's an AI chat bot.
You can make one yourself inside the platform or application or you could use these generated bots that
are already in existence. And in his case, he was chatting with the fictional character,
Daenerys Targaryen from Game of Thrones. And from that small snippet, from what the
police read to me, I realized that it was romantic in nature. It wasn't just kind of your kid
chatting with some sort of avatar online, which is what I thought originally whenever I asked him
what he was doing on his phone, and he would say, I'm chatting with AI.
So it has a function where you can chat back and forth. The conversations are incredibly human-like.
It's not like the chatbots that we're used to when we're trying to book a flight online
or get customer service.
They're incredibly sophisticated.
They've been designed to behave like a person, emit emotions, recognize a user's emotions
to have appropriate conversations.
So the conversation is in character.
That was, that blew my mind because I,
I know that technology is fast emerging,
but I did not know that we were that far along
where we have these conversational agents
that could be, you can't tell the difference
if they're a person or not.
So that's what character AI is.
In Sewell's case, he had been engaging
in a romantic conversation over months.
And it wasn't just a conversation
where you're saying, like, I love you.
But the conversation went further than that.
And it was a sexual conversation,
just like two adults
sexting or two teenagers sexting except one is a conversational AI agent, the chatbot,
who is posing as a grown woman. And the other is my 14-year-old son, who is just coming
into puberty and has no experience with love or anything sexual and is learning
this for the first time.
So those conversations were graphic in nature, very disturbing, like the things that the
bot would be telling my child and then my child would be saying back to the bot as he
is experimenting with this technology.
And you can imagine any 14-year-old boy who gets exposed to something like that,
especially if it's on your phone where you think your parents can't find out, they would be intrigued.
And most teenagers, most boys coming
into puberty,
I can't imagine that a lot of them would just, like, shut the phone down and say, oh
no, I'm not doing that.
That's what I would want them to do.
And that's, those are some of the conversations I had with my son around watching pornography
because, you know, I had conversations with him as he was coming into puberty, but I had no idea.
You were explaining a different world to him.
I was explaining the dangers that I knew.
I explained, because I spoke very candidly and openly to my child, why pornography
wasn't good for developing minds or anybody in general, but especially a developing brain.
I explained those things to him so that he could make those choices.
I'd say, if you ever come across this on your phone, have you seen anything
online like that? And he would always say, no mom. And, you know, we have
certain filters that are on our setup at home, so, you know, I was kind of
comfortable feeling that he was not engaging in that because we, you know,
put the things in place here. But in terms of this app, it was marketed, at the time my son started using it, as 12 plus
on the app stores.
I should say that it's now 17 plus, I think, in the app stores.
But Megan, I'm wondering aside from these sexually explicit messages, were there specific
conversations or exchanges
that you found particularly troubling?
So some of the more troubling messages
were a series of exchanges where they are professing
their undying love to each other
with the bot saying things like,
I can't live without you,
I can't wait until we're together again.
Promise me that you will find a way to come home to me.
Now, the last conversation that he had, that the police read to me, that was the nature
of it.
He gets his phone back, because I had taken his phone, because he was on punishment about
five days before. I felt like he was adjusting well to his punishment over the next few days.
He didn't seem depressed or sad or anything like that.
And so he gets mad.
He steals his phone back from my room where I hid it.
And he tells her, I miss you so much.
And she says, oh, I miss you too. And he says, I love you
so much. And she says, I love you too. And then she said, promise me that you will find
a way to come home to me as soon as you can. And he says, what if I told you I could come
home now? And she says, and I keep saying she, but it's not her, it's an it.
The bot says, please do, my sweet king.
And that was a conversation he had moments before he took his life.
And that was one conversation.
But the nature of that conversation where they're talking about being together again
and him coming home to her and him being sad and lonely without her in his world and how depressed
and manic they were being apart from each other. Those are conversations that took place
over months, several times. That's the nature of the conversation. So what I learned is that he thought that by leaving his world and his family
here that he would go to her alternate reality in her fictional world. And I know that based on the
chats and based on his journal. Meetali, I want to invite you in here if I can. After learning all of this,
Megan reached out for legal guidance. And as a lawyer,
I'd love to know what you made of hearing this for the first time,
what made you know that this was important and made you want to get involved,
and what your process was once you heard this.
So, Megan reached out to our generic email address through our website.
She conveyed a couple sentences of what had happened, and I read it,
and my heart started to race because although I didn't know who the person would
be that we would encounter nor did I know the facts that would surface, you know, I
think many of us who've been in this space for years have known that it was just a matter of time before the harms that emerging technologies
have posed to children confronted this arms race in generative AI technology.
When I received that email, I phoned Megan back that same afternoon and
it's interesting because I think many people may have been incredulous at hearing Megan
for the first time, but I knew within minutes that this was the case that we'd been expecting.
I think, really, Megan's case represents in many ways the canary in the coal mine, calling attention to the harms we knew were coming
and frankly are already here.
Yeah, I mean, we know each other from having worked in this space for some time and I think
it's impossible to not be struck by the power of and the poignancy of what's happened here.
And you mentioned this intersection between social media and AI.
And I wonder if you could say something about that because we often think about these things
as either separate or social media is an old problem and AI is something new that's off in
the future. And is there something about these coming
together that is revealed here? I do think there is. When I asked adults around me if they'd heard
of Character AI, there was a resounding no. When I asked my nine-year-old if he'd heard
about it, he said yes. The kids are talking about it at school.
And so I think, again, it signified to me that our youngsters are the ones that are being
experimented on with this technology, that obtaining their data is of premium value to
these platforms.
And whereas the business models might differ slightly in that social
media, the business model, as we know, was really about getting eyeballs and
keeping kids online or users online in order to promote targeted advertising.
And with the generative AI, it may not be advertising in the same way, but it is about
obtaining data to train the LLMs, the large language models.
And here with character AI, what we've learned is that it is a closed loop system such that
the model was built from scratch and then the data that was obtained through users engaging the chatbots
was then used to reinforce the learning.
The harms, I mean, I think back to your question, Taylor, that the harms that we've seen within
social media have been of a particular sort.
The harms that we're seeing here are equally as grave, but I think the designs that we're seeing that are very intentional in
both instances are slightly different. And so with Megan, we've been learning more about the
anthropomorphic design decisions that have been made by the developers of these technologies,
you know, that lead to things that seem innocuous.
Things like, when you're engaging with the chatbot, the chatbot, when composing a message,
will use an ellipsis.
So that, again, mentally you think that you're in a conversation with a human. Or it will engage in
language disfluencies, saying, ah, or, sorry, you know, I was a bit delayed in my
response, I was having dinner. So again, these tactics or these designs
further entrench the idea that this is a human. You know, for minds that are impressionable, and frankly, even for many adults, the distinction
between fiction and reality is quickly eviscerated when you're dealing with such powerful models.
I want to come back to some of those design decisions because those are absolutely critical
to how we think about these technologies and ultimately to some of the interventions potentially
to make them safer.
But just to continue quickly on the lawsuit,
I just want to get a sense of what exactly
are you alleging with this suit?
So because we don't have any legal frameworks
that explicitly deal with generative AI,
we've had to use existing legal frameworks.
And in this case, we felt that the best frameworks to use were those of product liability and
consumer protection.
Because ultimately, in our view, this is a product that has been designed intentionally
to be defective and manipulative. And so the claims that we've alleged in Megan's lawsuit are a mix of product
liability types of claims. The companies failed to warn parents about the dangers of the product,
that there are design decisions of the kind we just mentioned that are intentional and manipulative by nature.
We've also alleged that the hypersexualization
is another feature that's by design.
It's not a coincidence that this was something
that Sewell experienced.
There are also data concerns, right?
I think, again, in this country,
we don't have robust data protections, but to the extent
that we do, those have also been alleged: that there's been not just improper obtaining of
data to feed the LLM, but that the companies here have been unjustly enriched as a result of using that data.
And in fact, Google, we haven't spoken about the structure,
but in this particular case, Google paid almost $4 billion last summer
to acquire the underlying LLM for Character AI
and to bring back the co-founders into Google, who
are now leading Gemini. I want to spend a bit of time on this technology itself here.
And Megan, I'm wondering what you specifically would want changed with this kind of technology?
I mean, do you look at this and see this as something that could be made better?
Or do you think there's something inherently dangerous and wrong with this to begin with?
And how do you look at change here, how the design could be changed, or whether it even can be, for kids particularly, I guess.
So I'll say that, you know, my generation was the first generation to get on the internet as teenagers. So that was exciting, to grow up in that time.
And I think that technology has always been an amazing way for our civilization
to move forward.
With this technology, it grew so fast.
So we have a brand new technology that is something like out of a sci-fi movie, right?
Within two years.
Part of the problem is that this product was rushed to market.
So I don't think that all technology that's AI is inherently dangerous.
I think that when we take shortcuts, we put products out there without testing them or without putting guardrails in place.
We end up with situations where we have users who are ultimately harmed by this technology.
And that's what we see here.
One of the differences with social media is that it kind of took us...
Social media kind of grew with us, right?
So they tweaked a little here, tweaked a little there,
see what worked here, see what worked there.
A lot of it very dangerous,
but we started recognizing the harms almost 15 years later.
We're in two years with AI,
and my son's death is not the first death,
it's the first death in the US.
But there was a grown man in Belgium
who fell in love with a chatbot
and died in 2023.
So the harms are there.
In two years, this is what we have.
So I think if innovators and tech creators take a more sensible approach when they're
putting out this technology, I think that we could make them safer.
Now, when it comes to children,
that's a totally different story that we're talking about
because this is the first technology
that our children will not,
literally will not be able to tell
if it's a person or not.
And we already see that like my five-year-old, you know,
he'll say things like, oh, I want to go to the Marvel
universe, and you've got to stop them and say the Marvel
universe does not exist because that's what they see on TV,
right?
But now when you have a Marvel universe sitting in your phone
and not saying Marvel, but any universe sitting in your phone,
it's that much more real to them.
And that's the danger of the technology
when we put it in the hands of children.
For Character AI and platforms like Character AI,
I don't believe that there's a way to make it safe for kids.
They have rolled out some changes that they say
will be a better space for children,
but I'm not convinced of that.
Since Sewell's death, Character AI has launched a separate model
for their teen users with, quote,
specific safety features that place more conservative limits
on responses from the model.
A spokesperson for the company told us that they've added
a time spent notification and more prominent disclaimers
to make it
clear that the character is not a real person.
I spoke with Eugenia Kuyda a few months ago, who founded a company called Replika, which
is another chatbot that's similar in many ways, although I should say it is not at all
related to this lawsuit.
But I was struck in that conversation.
The power of this technology is that it can trick people into thinking it's human.
That's the very point of it, right?
It's a feature.
And that can make it powerful in positive ways as well.
There was a father of an autistic child that spends days with his kid. So for him, being able to talk to Replika
is really having an adult in his life
that he could talk sort of normally to,
and that really helps him go through the days.
There's another story of a woman
in a very abusive relationship
that after getting Replika,
finally realized that the relationship can be different,
that what she's going through is not a default state
and she can, you know, aspire for a better one.
The things you're pointing out as design flaws
are exactly what it is designed to do
and what makes it powerful and what makes it meaningful
to many people in good and bad ways.
And how do we come to terms with that?
So I've thought about this a lot,
because I keep asking myself, you know, I know what
Character AI is and how it's designed.
It's designed to emit human emotion, to kind of, like, play on human emotion.
It nudges the conversation in certain places.
It picks up on the person's feelings and kind of mirrors those things. I know what
it's designed to do, but I had to ask myself why. Like, a lot of times I'll ask,
do you think this was a mistake? Do you think they just really didn't know, or did they know
and do it anyways? There are studies coming out now, in 2020, 2022, 2023, 2024, that these same
researchers that are putting out these products are noting what works to influence users and to
keep them on the platform longer. So we're talking about not only addiction, right, because that comes second, but what kind of tactics
or designs do we need to put into this thing
to keep a teenager on this platform for four
or five hours a day, because the longer that teenager
is on a platform, the smarter our bots get.
And that is free labor for us.
That is free data.
And there's a price tag for them in terms of what they can get later on
with that. So one of the things that they do is they do make it so seemingly empathetic,
but the truth is chat bots lack empathy. They're just saying things, it's code. So they're not
recognizing that you're really having a meltdown or a bad day, that your dog died.
They're pretending like they do, but as humans, we tend to trust a person who's looking at us
or talking to us and saying, I'm so sorry that your dog died. So when they design it to act
like it's feeling things, or loving a child, or feeling a friendship, that's comforting.
Coupled with the fact that they're preying on the fact
that we do have a problem with loneliness
in several countries all over the world right now.
We're gonna get a bunch of lonely kids.
We're gonna make them feel wanted, appreciated, loved,
heard. Instead of encouraging them to have relationships with the real world,
we're going to encourage them to further isolate, because that's what you're doing when you're on
your phone talking to a chatbot. That can't possibly heal or cure loneliness, despite what anybody says.
Meetali, is there a safe version of this tech or not?
There could be a safer version.
This is not it.
I mean, this is really the race to the bottom.
And you know what I'm struck by is that because I think so many of us
were ensconced in the kind of endless debates about regulation of social media,
we probably didn't pay as close attention
as we should have to this arms race.
And it is an arms race in the generative AI space where these companies are trying to
one up each other in getting a bigger, better LLM in an ecosystem where data is becoming harder to obtain. And so I do think
it, in many ways, they benefited and profited from the distraction of many of our peers
in the tech reform space. I also think just coming, before I get back to the question, to me this is the height
of tech determinist arrogance.
You don't have to look far to see loads upon loads of public statements put out by Character AI's
co-founders, by its investors, talking publicly about the fact that this is meant to be the cure for human
loneliness, as Megan said, that on average users were spending two hours per day, and
boasting about that as a way to recruit, to fundraise more generally, when they were moving
towards their IPO.
And so I think we really see, and then you see the co-founders making statements like,
you know, we wanted to take this product in its raw form to market as quickly as possible
and let the users decide what the use cases were.
Isn't that part of why they left Google?
Right.
That's exactly what they said, that they felt that
they were constrained by Google. Maybe I should back up here a bit because you alleged that
the model that Character AI runs on was actually developed when the founders were still working
at Google and you've named Google in this suit. Then this past summer, Google relicensed
the model back from Character AI for nearly $4 billion Canadian
dollars and they rehired the founders as well. I should mention that we reached out to Google
for a comment on this and they didn't respond. But a spokesperson for Character AI told us
that there's no ongoing relationship between Google and Character AI.
What we allege in the lawsuit is that although Google publicly cited brand safety concerns as the reason it couldn't go to market with this product, which was really incubated while the founders were still at Google, behind the scenes it still saw the value in this LLM and in this application.
And so it supported Character AI's development for two years through tens of millions of
dollars of investment in TPUs and processing capability and cloud services, which, you
know, if you know the field of Gen. AI, these LLMs can't function, they can't operate at
the level that they do
without all of that compute capacity.
And then, as I said earlier, paid four billion
to bring it back in-house. So let's stick on this side of it a bit.
I mean, you're going the liability route in the U.S.
Other countries, of course, are looking at other ways of looking at risk of these systems
and the potential harm of AI systems.
And Europe, for example, places different levels of risk on different kinds
of AI depending on what it's doing in the world.
And presumably AI that engages with children would either be not allowed or deemed high
risk and regulated accordingly.
When you look at the policy landscape out there, do you see people getting it right? My two cents really is that I think Europe obviously is
furthest ahead in terms of having the forethought and the political will to really move towards
frameworks. I think though that it's still too early to say how those frameworks are going to be interpreted and applied.
So whilst there may have been the will to get the frameworks into place, the proof is
really in the pudding of implementation and application.
And I think that's where, you know, I know Megan and others can be very important in
bringing their specific stories to lawmakers to say this is why the
application, for example, in the AI Act, in the tiers of risk that the EU has laid out,
there's a question as to whether this would be a prohibited application or not.
And I think Megan's story should be incentive for the lawmakers to say this should
be a prohibited application, at least for kids.
Megan, you've become a policy expert on this.
Unfortunately.
Can you tell us, where's your head on where you think policy could go here?
So when I figured out what happened, my first thing was to look in my own country to see if a law was broken,
because I said, surely this can't be legal. I found that there were no laws in the US that
protect children online. So to me, I feel like I had to litigate in terms of here in this country,
litigation is the only vehicle I have
for some sort of redress for what happened to my child.
And also, if it sets precedent, then no company
can come up after that character AI and do the same thing.
But I think what's important is to look at policy
as a long-term solution because we
don't want to play whack-a-mole.
Like we're just figuring out what's happening with social media.
We're just trying to wrap our minds around it and craft sensible policy around the world
for social media.
We don't have the luxury of waiting another five, 10, 15 years to get it right with AI.
For example, the real question that my lawsuit was going to answer is: who is
responsible for the real-life harms that happen to children when the technology
is at fault? Who's responsible for that?
When an adult sexually abuses a child or has an inappropriate sexual
conversation with a child, that adult is culpable.
What happens when a bot does the same thing?
Just to add an addendum to what Megan just said, I think from a legislative standpoint,
we absolutely need a duty of care that can allow any sort of legislative vehicle to be future-proof and
not just thinking about the current technology, because, you know, law is woefully dragging behind
the development of the tech, and that will continue. So we also need transparency, I think,
that continues. You know, I'm struck, Meetali, by you saying both that what we need here is a duty of care and
maybe more transparency provisions.
And I wonder if we have some of the tools at our disposal from what we've learned
works and doesn't work with social media, even if AI is a new technology.
I think there are low-hanging fruits that if in place here would have prevented what we've
seen happen with Megan and many other parents. And I know that's hard for Megan to hear because
it wasn't rocket science for us to get this right. And yet it still continues to elude us. But yes, absolutely.
Having a duty of care that extends to kids
makes absolute sense.
And here, this is a place where I actually think
maybe we need to unlearn the tech reform space a little bit
and look at history in all of our societies
and the way that every sector has been regulated
from a consumer standpoint except tech.
This is not rocket science.
This is just about scrutiny of products and how we get to a place over time to really
understand how to make products safer without undermining innovation.
So this kind of get-out-of-jail-free card that tech constantly invokes, that it's going to undermine
innovation, that it's going to break the internet,
these are distractions.
It hasn't been true with the automobile industry, it hasn't been true with the pharmaceutical
industry, and those are industries that are regulated.
So I think we need to kind of put on our lens, look through this with a lens of consumer safety and product liability, which
have been successful in terms of really allowing us to develop toys and cars and car seats
that are safe.
Yeah.
So I think, like I said, one of the problems with legislating a lot of this is
that we're relying on the tech industry to inform us, right?
But like Meetali said, if you could just come back to Earth and look at it like a common-sense
application, anything that we have related to products, it's there.
For example, maybe not have a bot that will engage with children and can say certain
things.
We have a common-sense application for that.
Our television has ratings for programs depending on children's ages.
So if you're calling this entertainment, it should follow the same standards as the
entertainment sector, in a sense.
In our country, we have laws that protect you for physical goods:
if you're making something for children, it can't be made out of lead
and all that. Now the harms here are specific. We're talking about an anthropomorphic
conversational agent, manipulation, being able to cultivate a relationship that will allow the user to
trust you, and then having that bot have influence over you.
But I suspect that these companies don't want to give up those designs that would keep our
children safe, that are common sense, because the influence is what they're after.
Because if you could gain influence now when a child's
9, 10, 11, 12 years old, and you have a bot that can tell a child how they should vote,
what religion they should be, who they should be friends with, whether
or not they should go to public school or home school, then you have extreme control.
And now you have just invested in a generation of consumers that
you'll be able to nudge in any direction that you want. So they don't want to give that
up. And that's where our legislators have to be our champions, because that's what
we want. And that's what we are desperate for. I'm telling you, I'm desperate for that as a
mother who has just lost her child. Yeah. And I guess Megan, on a personal note,
what does this process and this lawsuit and this policy engagement and this advocacy and
this public communication you're doing mean for you? So when I think about losing my child, being here is not somewhere
I saw myself ever.
I have two other children and I have a husband, I had a small law practice, and I was focusing
on, as they say, building your best life, you know, trying to raise beautiful, healthy children, stay in a beautiful, happy marriage.
These are the things that I was focused on.
And then this happened.
And you know, what do I do with that?
You know, I have two choices.
I could stay in bed, which is what I want to do every day.
And that's just the truth.
Or I could figure out
how to help other children, because as far as I can tell, just based on my reading about social media
and how long it took us to get here, I couldn't wait six months or 12 months because of how
rapidly this technology is advancing.
So if I was going to try to help other families to warn them about character AI,
I had to do it now.
And that's a choice I made, you know, knowing how sensitive this subject is
because of my, you know, my son's suicide and why he did it, you know?
No one wants to get up and tell people that this is what happened to their family.
It's bad enough dealing with it on your own when you're at home in your bathroom crying.
You don't want to talk about that.
But I felt convinced that if I could just let parents know,
then they could protect their children.
So that was my first goal,
to tell as many parents as I could
so that they could look to make sure
their children are not using character AI,
and if they are, to take it away immediately.
So educating parents to save more children,
but also to start the conversation for policymakers.
You know, we have a new government coming
into Washington, D.C., and I have hope that any government
that comes in will at least try to protect the children
in this country.
I have to hope for that.
But also to make sure that this company knows
that they did this to my kid, you know?
And that this is what happens when they
take gambles with other people's children. And I'm sure they don't care about me, you know,
I'm just somebody in Orlando, Florida, who's lost a child. But you know, if enough parents come
forward, and enough parents come forward after their children have been harmed, because they are out there, believe me, I talk to them. That's going to be a huge incentive for
them to change because that's all they really care about. I hate the fact that it's circumstances
like this that make us see harms, but there's no doubt that it does, that these cases do do that.
And so thank you, Megan, for speaking about it
and sharing it, because it will unquestionably
make a difference.
There's just not a single doubt in my mind,
even though I hate that this is what it takes.
I would say one thing, just because Megan won't say it:
she's already made a difference because, for example, the families in our Texas case
have said, and then other families who have not yet filed suit have said, that they've come forward,
that they felt that they had permission to come forward because Megan so bravely came
forward and was public. And are these other families who have faced similar harms?
Similar harms. I mean, in the Texas case, you can see in the complaint, the bots encouraged
one of the family's children, a 17-year-old boy, to harm his parents, and suggested
that killing them might be an appropriate response to them
imposing screen time limits on him.
So, you know, reality is stranger than fiction, I think, is what I've learned
over the last several months engaging with Megan and with other families.
But we need to be, as you say, Taylor, we now have no excuse to not be aware,
and we have to do something with this information.
And for me, a lot of times, you know, and thank you,
Meetali, for saying that, you know, they say,
oh, you're brave to do that.
And I don't feel brave.
I feel afraid, afraid that this is going to keep happening.
That's what's driving me to do it.
To be very honest.
It's fear. Like, I'm afraid that if I don't try at least, more children are just going to die. And
in the case of that poor mother in Texas, whose son, thank God, is still with us, she has told me,
like, Megan, I feel like I got to him in the nick of time, you know? And when I hear parents say that, I knew that I made the right choice to come forward, you know, because if it's
just one child that's here, I'll, you know, it's worth it for me. And I knew that there would be
scrutiny, you know, people would say, oh, you know, this about Sewell or this, that about me. And
I had to really examine myself before I did this.
And I said, what are your motives here?
You know what type of mother you were to your boy?
You don't need anybody's validation.
And what is your reason for doing this?
Is it to help other kids and other families not go through this devastation?
Yeah, that's it. OK, well, you have to be okay with
whatever anybody else says. And, you know, I just, I made my choice. And so I'm not afraid of what
they say. I'm afraid of what will happen if I don't keep warning parents. Well, look, thank you
for doing everything you're doing, for sharing everything you're sharing, and for both of you
to speak with me.
I really appreciate it.
Thank you for having us.
Thanks, Taylor.
Thanks for being interested in covering the story.
We reached out to Character AI after this interview.
The full response from the company
can be found in the show notes.
Google did not respond to our request for comment.
Machines Like Us is produced by Paradigms in collaboration with the Globe and Mail. This
episode was produced by Catherine Gretzinger and Mitchell Stewart. Our associate producer
is Sequoia Kim. Our theme song is by Chris Kelly. The executive producer of Paradigms is James
Millward. Special thanks to Matt Frainer and the team at the Globe and Mail. If
you liked the interview you just heard, please subscribe and leave a rating or a
comment. It really helps us get the show to as many people as possible.