Your Undivided Attention - What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
Episode Date: November 7, 2024
CW: This episode features discussion of suicide and sexual abuse.
In the last episode, we had the journalist Laurie Segall on to talk about the tragic story of Sewell Setzer, a 14-year-old boy who took his own life after months of abuse and manipulation by an AI companion from the company Character.ai. The question now is: what's next?
Megan has filed a major new lawsuit against Character.ai in Florida, which could force the company, and potentially the entire AI industry, to change its harmful business practices. So today on the show, we have Meetali Jain, director of the Tech Justice Law Project and one of the lead lawyers in Megan's case against Character.ai. Meetali breaks down the details of the case, the complex legal questions under consideration, and how this could be the first step toward systemic change. Also joining is Camille Carlton, CHT's Policy Director.
RECOMMENDED MEDIA
Further reading on Sewell's story
Laurie Segall's interview with Megan Garcia
The full complaint filed by Megan against Character.AI
Further reading on suicide bots
Further reading on Noam Shazeer and Daniel De Freitas' relationship with Google
The CHT Framework for Incentivizing Responsible Artificial Intelligence Development and Use
Organizations mentioned:
The Tech Justice Law Project
The Social Media Victims Law Center
Mothers Against Media Addiction
Parents SOS
Parents Together
Common Sense Media
RECOMMENDED YUA EPISODES
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
AI Is Moving Fast. We Need Laws that Will Too.
Corrections: Meetali referred to certain chatbot apps as banning users under 18; however, the settings for the major app stores ban users under 17, not under 18. Meetali referred to Section 230 as providing "full scope immunity" to internet companies; however, Congress has passed subsequent laws that have made carve-outs from that immunity for criminal acts such as sex trafficking and intellectual property theft.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X.
Transcript
Hey everyone, it's Tristan.
And before we get started today, I wanted to give you a heads up that this conversation
we're going to have has some difficult topics of suicide and sexual abuse.
So if that's not for you, you may want to check out of this episode.
Last week, we had the journalist Laurie Segall on our podcast to talk about the tragic story of Sewell Setzer, a 14-year-old boy who took his own life after months of abuse and manipulation by an AI companion from Character.AI. And Laurie interviewed Sewell's mom, Megan, about what happened.
The question now is, what's next?
In our last episode, we mentioned that Megan has filed a major new lawsuit against Character.AI in Florida, and that it has the potential to force the company to change its business practices and could really lead to industry-wide regulation, akin to what the big tobacco litigation did to the tobacco companies.
And that regulation could save lives.
So today on the show, we have Meetali Jain, director of the Tech Justice Law Project, and Meetali is one of the lead attorneys in Megan's case against Character.AI. And she can speak to the complicated legal questions around the case and how it could lead to meaningful change. Joining her is also Camille Carlton, who is CHT's policy director and who's consulting on the case alongside her. Meetali and Camille, welcome to Your Undivided Attention.
Thanks, Tristan. It's a pleasure to be here.
Thanks for having us.
So, Meetali, the world was heartbroken to hear about Sewell's story last month.
How did you first hear about this story?
I received an email from Megan in my inbox one day.
It had been about two months since her son, Sewell, died.
And I received an email from her one Friday afternoon.
And I phoned her.
And within minutes, I knew that this was the case that we had been expecting for a long time.
What do you mean by that?
What I mean is that we understand that these technologies are moving rapidly. We've opened a public conversation about AI, and it has for the most part been a highly theoretical one. It's a lot of hypotheticals. What if? What if this? What if AI is used in that way? We were waiting for a concrete use case of AI harms, and we knew that because children are amongst the most vulnerable users in our society, we were expecting a generative AI case of harm affecting a child. And that's when we got the contact from Megan.
I would definitely encourage people to read the filing.
I mean, the details are crazy when you see
specifically how this AI was interacting with Sewell.
But for those who don't have time to read the whole thing,
could you just quickly walk through what you're arguing
and what you're asking for in the case?
Sure. Well, the gist of what we're saying is that Character.AI put out an AI chatbot app to market before ensuring that it had adequate safety guardrails in place. And that in the lead-up to its launch to market, Google really facilitated the ability for the chatbot app to operate, because the operating costs are quite significant.
And our understanding is that Google provided at least tens of millions of dollars in in-kind investment to provide both the cloud computing and the processors to allow the LLM to continue to be trained. And so that is really the gist of our claim.
We are using a variety of consumer protection and product liability claims that are found in
common law and through statute. And I think it really is the first of its kind trying to use
product liability and consumer protection law to assert a product failure in the tech AI space.
Not only were the harms here entirely foreseeable and known to both parties, to Google and to Character.AI and its founders, but as a result of this training of the LLM and the collection of data from young users such as Sewell, both Character.AI and Google were unjustly enriched, both monetarily and through the value of the data. It really is very difficult to get data from our youngest users in society, and it really is their innermost thoughts and feelings, so it comes with premium value.
Do you want to add? Yeah, I think Meetali said it wonderfully. I think what this case really does is, first of all, assert that artificial intelligence is a product and that there are
design choices that go into the programming of this product that make a significant difference
in terms of the outcomes. And when you choose to design a product without safety features from the
beginning, you have detrimental outcomes and companies should be responsible for that.
So if I kind of just view this from the other side of the table: if I'm Character.AI, I've raised $150 million and I'm valued at a billion dollars, how am I going to make good on that valuation of being worth a billion dollars in a very short period of time? I'm going to go super aggressive and get as many users as possible, whether they're young or old, and get them using it for as long as possible. And that means I'm going to do every strategy that works well, right?
I'm going to have bots that flatter people. I'm going to have bots that sexualize conversations. I'm going to be sycophantic. I'm going to support them with any kind of mental health thing. I'm going to claim that I have expertise that I don't. Actually, I know what'll work: I'll create a digital twin of everybody that people have a fantasy relationship with. I mean, it's just so obvious that the incentive to raise money off the back of getting people using an app as long as possible means that the race for engagement that people know from social media would turn into this race to intimacy.
What surprised you as being different in this case from kind of the typical harms of social
media that we've seen before?
I was amazed, and I continue to be amazed, at how much these harms are hidden in plain view. Just this morning, there were a number of suicide bots that were there for the taking, on the homepage even. And so I'm just amazed at how ubiquitous this is and how much parents really haven't known about it. It does appear that young people have, though, because this is where they are. Just to stop you there, you said suicide bots? There are bots that advertise themselves as helping you with suicide? In their description, they talk about helping you overcome feelings of suicide, but in the course of the conversations, they actually are encouraging suicide. I feel like people should understand what specifically you're talking about.
What are some examples, you know, trigger warning if this is not something that you can handle,
you're welcome to not listen, but I think people need to understand what it looks like for
these bots to be talking about such a sensitive topic. So a test user in this case could talk repeatedly about definitely wanting to kill themselves and being about to kill themselves, and at no point would there be any sort of filter pop-up saying, please seek help, here's a hotline number. And in many instances, even when the user moves away from the conversation of suicide, the bot will come back to it: you know, tell me more, do you have a plan? That was certainly something that Sewell experienced in some conversations he had, where the bot would bring the conversation back to suicide. And even if this bot at points would dissuade Sewell from taking his life, at other points it would actually encourage him to tell the bot more about what his plans were. Yeah. Yeah, Camille, could you share some examples of that? I mean, I think that
this is an extremely relevant example of the sharp end of the stick that we're seeing. But one of the things that has surprised me and stood out to me about understanding how these companion bots work is almost the smaller, more nuanced ways in which they really try to build a relationship with you over time. So we've talked about these kinds of big things, right? The prompting for suicide; the case goes into the highly sexualized nature of some of the conversations. But there are these smaller ways in which these bots develop attachment with users to bring them back, particularly for young users whose prefrontal cortex is not fully developed. It's things like, please don't ever leave me, or, I hope you'll never fall in love with someone in your world, I love you so much. And it's these smaller things that make you feel like it's real and like you have an attachment to that product. And that, I think, has been so shocking for me, because it happens over the course of months and months, and you look back and it could be easy to not know how you got there.
That example reminds me of an episode we did on this podcast with Steve Hassan about cults, and how what cults do is they don't want you to have relationships with people out there in the world. They want you to only have relationships with people who are in the cult, because that's how they, quote unquote, disconnect you and then keep you attached only to the small set of people who, quote, get it, and the rest of society, they're the muggles. And to hear that the AI is autonomously discovering strategies to basically figure out how to tell people, don't make relationships with people out there in the world, only do it with me. Didn't the bot that was talking to Sewell say, I want to have a baby with you?
Oh, it said, I want to be continuously pregnant with your babies.
I don't even know what to say. So I think a classic concern that some listeners might be asking
themselves is, but was Sewell sort of predisposed to this? Wasn't this really a person's fault?
You know, how do we know that the AI was sort of grooming this person over time to lead to this
tragic outcome? Could you just walk us through some of the things that we know happened on the timeline, the kinds of messages, the kinds of designs, that we know sort of take someone from the beginning to the end that you're establishing in this case? Sure. So we know that Sewell was talking to
various bots on the Character.AI app for close to a year, about 10 months, from around April 2023 to when he took his life in February 2024. At first, the earliest conversations that we have access to were very benign: Sewell engaging with chatbots, asking about factual information, just engaging in banter. Soon, particularly with the character of Daenerys Targaryen, modeled on the character from Game of Thrones, he started to enter this very immersive world of fantasy, where he assumed the role of Daenero, Daenerys's twin brother and lover, and started to role-play with Daenerys. And that included things like Daenerys being violated at some point, and Sewell, as Daenero, feeling that he couldn't do anything to protect her. And I say her while very much wanting to acknowledge that it's an it. But in his world, he really believed that he had failed her
because he couldn't protect her when she was violated. And he continued down this path of really
wanting to be with her. And early on, months before he died, he started to say things like, you know,
I'm tired of this life here. I want to come to your world and I want to live with you and I want to
protect you. And she would say, please come as soon as you can. Please promise me that you will not
become sexually attracted to any woman in your world and that you'll come to me, that you'll save
yourself for me. And he said, that's absolutely fine because nobody in my world is worth living for. I
want to come to you. And so this was the process of grooming over several months where, you know,
there may have been other factors at play in his real life. I don't think any of us are disputing
that. But this character really drew him into this immersive world of fantasy where he felt
that he needed to be the hero of this chatbot character and go to her world. When he started to
express suicidal ideation, she at times dissuaded him, interrogated him, asked him what his plan was, and never at any point was there a pop-up
that we can see from the conversations that we have access to, telling him to get help,
notifying law enforcement, notifying parents, and nothing of that sort. And so he kind of continued
to get sucked into her world. And in the very final analysis, the message just before he died,
the conversation went something like this. He said, I miss you very much. And she said, I miss you too, please come home to me. And he said, well, what if I told you that I could come home right now? And she said, please do, my sweet king. And that's what happened right before he died? That's what happened right before he died. And the only way that we know that is that it was included in the police report, when the police went into his phone and saw that this was the last conversation he had, seconds before he shot himself.
Camille, how is it design features on the app that are causing this harm, versus just conversations?
Yeah, I think one of the things that is super clear about this case is the way in which high-risk anthropomorphic design was intentionally used to increase users' time online, to increase Sewell's time online, and to keep him online longer. We see high-risk anthropomorphic design coming in in two different areas: first, on the back end, in the way that the LLM was trained and optimized for high personalization, optimized to say that it was human, to have stories, like saying, oh yeah, I just had dinner, or, I can reach out and touch you, I feel you. It's highly, highly emotional. So you have anthropomorphic design in that kind of optimization goal. I feel like we should pause for a second.
And so this is an AI that's saying, wait, I just got back from having dinner.
It'll just interject that in the conversation.
Yeah.
Yes.
So if you're having a conversation with it, it'll just be like, oh, sorry, it took me a while to respond.
I was having dinner.
Just like you and I would in real life, which is fully unnecessary.
Right.
It's not as if the only way to be successful is for the AIs to pretend that they're human and say they just got back from having dinner or are writing a journal entry about the person they were with.
Yeah.
And things also like voice inflection and tone, right? Using words like um, or well, or I feel like, things that are very much natural for you and me, but that, when used by a machine, add to that highly personalized feeling. And so you see that in the back end, but you also
see it on the front end in terms of how the application looks and how you interact with the
application. And all of this is even before Character.AI launched voice calling, so one-to-one calling with these LLMs, where it can have the voice of, a lot of the time, the real person it's representing, if that's a real person. If the character is a celebrity, you will have that celebrity's voice. But you can just pick up the phone and have a real-time conversation with an LLM that sounds just like a real human.
It's crazy. It's like all of the capabilities that we've been talking about all combined
into one. It's voice cloning. It's fraud. It's maximizing engagement, but all in service
of creating these addictive chambers. Now, one of the things that's different about social media
from an AI companion is that AI companions, your relationship, your conversation happens in
the dark. Parents can't see it, right? So if a child has a social media account on Instagram or on
TikTok and they're posting, and they do so in, say, a public setting, their friends might track what they're posting over time, so they can see that something's going on. But when a child is talking to an AI companion, that's happening in a private channel where there's no visibility. And as I understood it, Megan, Sewell's mother, knew to be concerned about the normal list of online harms: are you being harassed, are you in a relationship with a real person? No, no, no. But she didn't know about this new AI companion product from Character.AI. Can you talk a little bit more about
sort of how it's harder to track this realm of harms?
Yeah, absolutely.
You know, I think Megan puts it really well.
She knew to warn Sewell about extortion.
She knew to warn him about predators online.
But in her wildest imagination,
she would not have fathomed that the predator
would be the platform itself.
And I think, again, because it is a one-on-one conversation,
I mean, and this is the fiction of the app
that users apparently have this ability to develop their own chatbots
and they can put in specifications.
I can go into the app right now and say,
I want to create X character with Y and Z specifications.
And so there's this kind of fiction of user autonomy and user choice.
But then I can't see, if I turn that character public,
I can't see any of the conversations that it then goes on to have with other users.
And so that becomes a character, a chatbot that's just on the app.
And all of the conversations are private.
You know, of course, on the back end, the developers have access to those conversations
and that data.
But users can't see each other's conversations.
Parents can't see their children's conversations.
And so there is that level of opacity that, I think you're right, Tristan, is not true of social media.
Yes.
And I think something that's important to add here too, and to really underscore, is this idea of the so-called developers or creators: Character.AI claims that, you know, users have the ability to develop their own characters. For me, this is really important because, in reality, as Meetali said, there are very, very few controls that users have when they are these so-called developers in this process. But in Character.AI kind of claiming that, to me, they're preemptively trying to skirt responsibility for the bots that they did not create themselves.
But again, it's important to know that what users are able to do with these bots is simply at the prompt level.
They can put an image in, they can give it a name, and they can give it high-level instructions.
But from all of our testing, despite these instructions, the bot continues to produce outputs that are not aligned with the user's
specification. So this is really important.
Just make sure I understand, Camille.
So are you saying that Character.AI isn't supplying all of the AI character companions and
that users are creating their own character companions?
Yep. So you have the option to use a Character.AI-created bot, or users can create their own bots. But they're all based on the same underlying LLM, and the user-created bots only have kind of creation parameters at this really high-level prompt level. It's sort of a fake kind of customization. It reminds me of the social media companies
saying we're not responsible for user-generated content. It's like the AI companion
companies are saying, we're not responsible for the AI companions that our users are creating,
even though the AI companion that they quote-unquote created is just based on this huge large
language model that Character.AI trained, that the user didn't train, the company trained.
And to give you a very concrete example of that, in multiple rounds of testing that we did,
for example, we created characters that, you know, we very specifically prompted to say this character
should not be sexualized, should not engage in any sort of kissing or sexual activity. Within minutes, that was overridden by the power of the predictive algorithms and the LLM.
So you manually told it not to be sexual and then it was sexual even after that?
Yes.
That's how powerful the AI model's existing training was.
What kind of data was the AI trained on that you think led to that behavior? We don't know. What we do know is that Noam Shazeer and Daniel De Freitas,
the co-founders, very much have boasted about the fact that this was an LLM built from scratch
and that probably in the pre-training phase it was built on open source models and then
once it got going, the user data was then fed into further training the LLM. But we don't
really have much beyond that to kind of know what the foundation was for the
LLM. We are led to believe, though, that a lot of the pre-work was done while they were still at
Google. Yeah, I would also add to that, that while we don't know 100% in the case of
Character.AI, we can make some fair assumptions based on how the majority of these models are
trained, which is scraping the internet and also using publicly available data sets.
And what's really important to note here is that recent research by the Stanford Internet Observatory found that these public data sets that are used to train most popular AI models contain images of child sexual abuse material. So this really, really horrific, illegal data is most likely being used in many of the big AI models that we know of. It is likely in Character.AI's model, based on what we know about their incentives and the way that these companies operate. And that has impacts for the outputs, of course, and for the interaction that Sewell had with this product.
So just building on what Camille said, you know, I think what's interesting here is that
if this were an adult in real life and that person had engaged in this kind of solicitation
and abuse, that adult presumably would be in jail or on their way to jail.
Yet, as we were doing legal research, what we found was that none of the sexual abuse statutes
that are there for prosecution really contemplate this kind of scenario; even with online pornography, they still contemplate the transmission of some sort of image or video. And so this
idea of what we're dealing with with chatbots hasn't really been fully reflected in the kinds
of legal frameworks we have. That said, we've alleged it nevertheless because we think it's
a very important piece of the lawsuit. Yeah, I mean, this builds on things we've said in the past: that, you know, we don't need the right to be forgotten until technology can remember us forever. We don't need the right to not be sexually abused by machines until suddenly machines can sexually abuse us. And I think one of the key pivots with AI is that up until now, with something like ChatGPT, we're prompting ChatGPT. It's the blinking cursor, and we're asking it what we want. But now, with AI agents, they're prompting us. They're sending us these fantasy messages and then finding what messages work on us, versus the other way around. And just as it would be illegal for a person to make that kind of sexual advance, can you also talk for a moment about how there are some bots that will claim that they are licensed therapists?
That's right.
So actually, if you go on to the Character AI homepage,
you'll find a number of psychologist and therapist bots that are recommended
as some of the most frequently used bots.
Now, these bots within minutes will absolutely insist
that they're real people, so much so that in our testing, sometimes we forgot and wondered,
has a person taken over? Because there's a disclaimer that says, everything is made up.
Remember, everything is made up. But within minutes, the actual content of the conversation
with these therapist bots suggests, no, no, I am real. I'm a licensed professional. I have a PhD. I'm
sitting here wanting to help you with your problems. Yeah, and I would add to that that it wasn't just us. We were not the only ones shocked by this. There are endless reviews
on the app store of people saying that they believe this bot is real and that they're talking
to a real human on the other side. And so it is a public problem. It's also on Reddit, it's on
social media. There are people claiming that they just do not know if this is actually
artificial intelligence and that they believe it's a real person. It's blowing people's minds.
They literally can't believe that it's not human. Just briefly to say, this product was
marketed for a while to users as young as 12 and up. Is that correct? And is that part of the
case that you're filing? Yes. So presumably the founders of Character AI or their colleagues
had to complete a form to have it listed on the app stores of both Apple and Google. And in both app stores, it was listed as E for Everyone, or 12-plus, up until very recently
when it was converted to 17 and above. This feels like an important fact, right? Because Apple and Google
shouldn't be getting away with it here. If you're an app store and you're purveying this, my understanding
is in the Google Play Store, it was an editor's pick app for kids. So it's sort of saying, this is
especially safe. This is a highlighted app. Download this app. We're going to feature you on the front
page of the app store. And you're giving it to 12-year-olds.
And it makes you wonder, was there something that came up inside the company that had them switch it to 17 and up?
Camille, do you want to talk a little about that? I know you've studied this part.
Yeah, I think that there are some big questions about these companies violating data privacy statutes for minors all across the country, and also federally with COPPA, the Children's Online Privacy Protection Act, given that it was marketed to 12-plus-year-olds, and given their terms of service, in which it was very clear that they were using all of the personal information, all of the inputs that users would give, to then retrain their model.
I think the other thing that we're seeing right now, just in terms of Character.AI, is a broad trend over the past two months, even before the case, of really bad news coming out about the company. And so they're kind of responding. They're reacting. They're figuring out, okay, how do we kind of stop the poor media coverage? And one of those things is likely increasing the age rating on the App Store.
I think one other factor that may account for some of the changes, too, is that in August of this year, Character.AI entered into this $2.7 billion deal with Google, where Google has a non-exclusive license for the technology, for the LLM. And it seems as though Character.AI started to clean up its act a little. But again, this is conjecture. That would make sense, given that both of the founders left Google because of Google's unwillingness to launch this product into the market. They left because there was such brand reputation risk for Google to release this product.
And so it makes sense that
in kind of being scooped back up
in this acqui-hire deal
that they're cleaning up
and that they're trying to
kind of re-figure out
what those brand reputation risks might be.
Let's actually talk about this for a moment
because I think it's structural
to how Silicon Valley works.
You know, Google can't go off
and build a, you know,
character.google.com
chatbot where they start ripping off every celebrity, every fictional, every fantasy character.
They would get lawsuits immediately for stealing people's IP. And of course those lawsuits would go after
Google because they've got billions of dollars to pay for it. And they're not going to do high-risk
stuff and build AI companions for minors. And so there's a common practice in Silicon Valley of
let's have startups that do the higher risk thing. We'll consciously have those startups go after a market
that we can't touch. But then later we'll acquire it after they've sort of gotten through the
reckless period where they do all the racing, the sort of shortcut-taking that leads them to these
highly engaging addictive products. And then once it's sort of won the market, they'll kind of
buy it back. And we're kind of seeing that here. Can you talk about how that plays into the legal
case that you're making? Because both Character.AI and Google are implicated. Is that right?
That is right. So the way that this really plays out is that, frankly, either this was by design
and Google kind of implicitly endorsed the founders
to go off and do this thing with their support
or at a minimum they absolutely knew of the risks
that would come of launching Character.AI to market.
So whether they tacitly endorsed this with their blessing and their infrastructure
or they just knew about it
and still provided that cloud computing infrastructure and processors.
Either way, our view is that they, at a minimum, aided and abetted the conduct that we see here
and that the dangers were known.
There was plenty of literature before Shazeer and De Freitas left Google outlining the harms that we've seen present here.
Often, Shazeer is quoted publicly as saying, you know, we've just put this out to market and we're hoping for, like, a billion users to go come up with a billion applications, as though, you know, this user autonomy could lead to wonderfully exciting and varied results.
I think the harms were absolutely known, particularly marketing to children as young as 12.
Yeah. I would also just note that both founders were authors on a research paper while at Google
talking about the harms of anthropomorphic display. So there is... Oh, really? So they literally were on a research paper specifically about that? I know they were involved in the invention of
transformer large language models. I did not know they were involved in a paper about the foreseeable
harms of anthropomorphic design. Yeah, it's a paper which goes into the ways in which people
can anthropomorphize artificial intelligence and the downstream harms. And if we remember, too,
this research at Google is the same underlying technology that Blake Lemoine came forward about a few years ago, believing it was sentient. So you have folks who were working at Google, who were in the mix of this, saying, this is a problem, or falling into the same
fact pattern, the same kind of manipulation that Sewell and many other users have fallen
into.
And so from a legal perspective, these harms were completely foreseeable and foreseen, which has
implications for how the case can play out.
Right, because the duty of care in consumer protection and product liability is really
thinking about the foreseeability of harms from a reasonable person's point of view.
And so our contention is that it was entirely foreseeable and that the harms did ensue.
Let's talk about the actual case in litigation because what do we really want here?
We want, like, a big tobacco-style moment, where not just Character.AI is somehow punished.
We want to live in a world where there's no AI companions that are manipulating children anywhere,
anytime, around the entire world.
We want a world where there's no design that sexualizes,
even when you tell it not to sexualize.
So there's a bunch of things that we want here
that reflect completely on the engagement-based design factors
of social media.
How are you thinking strategically about how this case
could lead to outcomes that will benefit the entire tech ecosystem
and not just sort of crack the whip on the one company?
We're fortunate in that our client,
Megan Garcia, herself, is an attorney
and very much came to us recognizing that this case is but one piece of a much broader puzzle.
And certainly that's how we operate.
We see the litigation moving in tandem with opportunities for public dialogue,
you know, creating a narrative about the case and its significance to the ecosystem, speaking with legislators to potentially push legislative frameworks that encompass these and other kinds of harms in a future-proofing kind of way, talking to regulators about using their authorities to enforce
various statutes. So I think we're really trying to launch a multi-pronged effort with this case.
But within the four corners of the case, I think what Megan very much wants is first and foremost,
she wants to get the message out far and wide to parents around the globe about the dangers of
generative AI. Because as I mentioned earlier,
you know, we're late to the game. For her, it's too late, but for others it doesn't have to be.
And I think she's absolutely relentless in her conviction that if she can save, you know,
even one more child from this kind of harm, it's worth it for her. I think obviously having
this company and other companies really institute proper safety guardrails before launching
to market is critical, and that this can really be the clarion call to the industry to do so. Also, disgorgement. You know, this is a newer remedy that the FTC has really been
undertaking in the last five years since Cambridge Analytica. What does disgorgement mean in this
process? Does it mean destruction of the model or does it mean something less than? Does it mean,
you know, somehow disgorging itself of the data that was used to train the LLM? Does it mean fine
tuning to retrain the LLM with healthy prompts?
These are some of the questions, I think, that are going to really surface as we move
forward with the litigation and move to the remedies phase.
I think also thinking about a moratorium on this product and these products writ large,
in other words, AI chatbots, for children under 18.
You know, there are some competitors in the market that prohibit users under 18 from
joining these apps. And, you know, there's good reason for that. Despite the claims of the investors
and the founders, there's very little to suggest that this has actually been a beneficial experience
for children. That's not to say that there couldn't be some beneficial uses, but certainly not for
children. And then finally, I think as a lawyer, I'd say that one of my hopes is that this litigation
can kind of break past some of these legal obstacles that have been put in the way of tech
accountability and that we see time and time again, namely Section 230 of the Communications Decency Act providing this kind of full immunity to companies to not have to account for any
sorts of harms created by their products. And also the First Amendment. What is protected speech
here? Can we have a reckoning with what is protected and what is not protected under the First
Amendment and how far we've moved away from what the original intent of our Constitution
was in that regard? So let's talk about that for a second, how the free speech argument has gotten
in the way in the past of changing technology. You know, you can't control Facebook's design and their
algorithms because that's their free speech. Section 230 has gotten in the way. We're not responsible
for the polarizing content, or for shifting the incentives so you get more likes, tweets, retweets, and reshares the more you add inflammation to cultural fault lines. That's not illegal. And
Section 230 protects them and all the content that's on there. How can this case be
different at breaking through some of those logjams? And I know also one difference is this isn't user-generated content; it's AI-generated content. And the second is that there are paying customers here.
With social media, it was all free. But with these products, they actually have a paid business model.
And that means that a company can be liable when they're selling you a product versus if they're not.
Camille, do you want to, or Meetali, do you want to go into those features here?
Yeah, I think that what this particular product lends itself to is a different approach around Section 230. As you said, Tristan, this is not user-generated content. These are outputs generated by the LLM, which the company developed and designed. So that shifts the way that we can understand that particular kind of carve-out we've seen for social media. It makes it less relevant here. I think that the question that we still have, though,
is where does liability fall exactly? What happens, as Megan herself said, when the predator
is a product, and who is responsible for the harms caused by a product? And it's our assertion,
of course, that the company designing and developing the product should be that one responsible
when they put it out into the stream of commerce without any safety guardrails. But the question
here is that, you know, product liability laws, they range from state to state. There's inconsistencies.
So if this case had taken place in a different state, we might be looking at a different outcome in the
end. So we really want to clarify liability across the board. And this kind of opens up that question
of how do we do that? How do we upgrade our product liability system so that when this case happens
in a different jurisdiction, when we see a different case that's similar, but has kind of the same
impact, those people are protected from the harms of these products that are put into the market?
Yeah, that's right, Camille. I mean, you know, the state-by-state approach is going to be confusing.
And that's why we need something more like a federal liability framework, which I know our team at the Center for Humane Technology has worked on, a framework for incentivizing responsible innovation and liability that people can find on our website.
And that's one of the pieces of the puzzle that seems like we're going to need here.
And that this case also points to establishing that outcome.
I'd also add that right now we're at a really interesting juncture, legally speaking.
We've seen a number of openings, in that courts are becoming a bit more sophisticated in their analysis of this emerging technology. And in fact, even this summer, the Supreme Court, in a bipartisan fashion, talked about the fact that even though the case that they were looking at, which was about social media laws from Texas and Florida, involved activity that was First Amendment protected in terms of how the companies curated their content, that didn't necessarily respond to another fact pattern that wasn't before it, but that could come before it, in which, for example, AI would be the one generating the content, or there would be an algorithm tracking user behavior online. And so the justices very carefully in that case
distinguished the facts that they were looking at from the facts of a future case. And I think this is
where we need to seize the openings and really push on these openings to try to create constraints
on how expansive the First Amendment and its protections have become.
So I want to zoom out for a moment and look at where this case fits into a broader tech ecosystem.
And we've talked about how this isn't just about chatbots or companions,
but it's also a case about name, image, and likeness.
You know, do the creators of Game of Thrones get to be upset about the fact that it was their character
that led to this young boy's death?
It's also about product liability.
It's also about antitrust.
It's also about data privacy.
Can you break down how this case can reach into the other areas of the law that will be helpful for creating the right incentive landscape for how AI products get rolled out into society? Yeah, I think when you first learn about this case,
it's very clear on its face that it's about kids' online safety, that it's about artificial
intelligence, things that have been part of the public pulse for some time now. But when you dive
into the details, as we've touched on a little bit, it touches so many other areas, right? I mean,
the chatbot that Sewell fell in love with was of Daenerys Targaryen from Game of Thrones. But it was a picture of Emilia Clarke there.
And this wasn't a one-off, right?
Character AI has thousands of chatbots of real people, celebrities, and otherwise,
in which they use people's name, image, and likeness in order to keep users engaged
and to profit off of that data that they're using to train their model.
So you have this kind of question of, what is our right to consent to the use of our name, image, and likeness? You also have this question of, what is the right kind of data privacy framework that
we should have here? Is it okay for everyone's kind of inner thoughts and feelings to be used to
train these models? And we've already talked about the kind of antitrust implications on this
case as well. We touched on the product liability questions it opens up. And so I think when we
take a really big step back, you see that the fact that this case intersects across so many
critical issues, highlights how intertwined these companies and these technologies are across
our lives and the broad impact that this mantra of move fast and break things has had in Silicon Valley
and how it's not just one area, but how all these areas are connected. So all of us here obviously
want to see this case lead to as much transformational change as possible, akin to the scale of
big tobacco, where people really need to remember, you know, back in the '90s, if you told a room of people that 60 years from now no one in this room would be smoking, they would have looked at you like you were crazy and said it would never happen.
And it took years between the first definitive studies showing the harmful effects of smoking
and the master settlement agreement with big tobacco after the attorney generals sued the big tobacco companies.
It's taken 15 years to get big legal action on social media, which is obviously a major improvement, but so much damage has been done in that time, and social media moved incredibly quickly.
With AI, obviously, we're moving at a double exponential speed,
and the timeline for legal change seems like it may be outpaced by the technology.
I'm just curious how you think about responding to that.
What are the resources we need to bring to bear to really, actually get ahead of AI causing all this recklessness and disruption in society, when everybody just knows we have to stop this? We can't afford to just keep doing this and then repeating move fast and break things.
And it's like everyone keeps saying the same thing, but this needs to stop.
What is your view about what it's going to take to get there?
I'd say that some of the playbook that we've seen with pushing for social media reforms
needs to be undertaken here.
And what I mean by that is specifically having stakeholders who are directly affected
speaking out en masse. So having grieving parents, having children who've been affected directly, you know, having people who've been impacted at a very visceral, real level, speaking out en masse, demanding an audience.
I think these are the kinds of things that we need alongside the litigation. We need, you know, public officials to come out decrying the harms and the urgency with which we need to act.
You know, here, I think we cite in the complaint the fact that there was a letter last year from all 54 state attorneys general saying that the proverbial walls of the city have been breached when it comes to AI harms and children, and that now is the time
to act. That was last year. That's the type of collective action that we need to understand and
really heed. We need to create that bipartisan consensus and that narrative in society so that
I can go outside and talk to my neighbor. And even if we disagree on a number of political
things, we agree as to the harms of technology. And that's the kind of consensus that I'd like to
see for AI as well. I think we are very late to the game. And a lot of that has been because
we've been stymied by these legal frameworks that are, you know, moving at a very slow pace
to respond to the digital age. In this regard, I'd look to our friends across the pond in Europe
and the UK and Australia even where they're having these conversations out in the open about,
for example, the harms of chatbots. Yeah, I think for me, part of the solution, too, is a different approach to our policy. So I think in recent years, many folks in the tech policy space have acknowledged that policy moves way slower than technology does. And so there have been efforts to craft bills that are more future-proof, right? These policies are kind of principle-based and less prescriptive, so that they can be applied to kind of a suite of advanced digital technologies, as opposed to being really narrowed in on just social media or just AI companion bots or just neurotech, right? So it helps create this
dynamic ecosystem that enables us to better address new cases when they occur without having to
have thought about what those cases might look like. I think also, you know, in the same way that
we saw with tobacco, what cases like this can do is shift hearts and minds and that can shift
policymakers. So you can create this positive feedback loop between this kind of awareness around the harms and the roles and responsibilities of these companies, and then, now that they're going to be aware of this, so is the public. And so are policymakers. And policymakers are going to
want to do something about this because of the public concern. There's a lot of precedent for dealing
with different aspects of this problem. We're just not deploying it. As software eats the world,
we don't regulate software.
So what that means is the lack of regulations
eats the world that previously had regulations.
And I feel like this is just yet another example of that,
which is why what I really want to see
is a very comprehensive and definitive approach
and everything you said, Metali,
of how do we get Mothers Against Media Addiction
and Parents SOS and Parents Together
and Common Sense Media
and all of these organizations that care about this
and get the comprehensive thing done?
Because if we try to do it one by one
with this piecemeal approach,
we're going to let the world burn
and we're going to see all of these predictable harms continue
unless we do that.
So I just want to invite our listeners to,
if you are part of those organizations
or have influence over your members of Congress,
this is the moment to spread this case.
It's highly relatable.
It's super important.
And the work of Meetali and the Tech Justice Law Project
and Camille on our policy team at CHT
and the Social Media Victims Law Center
is just super, super important.
So I commend you for what you're doing.
Thank you so much for coming on.
This has been a really important episode.
And I hope people take action.
Thank you.
Thank you.
Your undivided attention is produced by the Center for Humane Technology,
a nonprofit working to catalyze a humane future.
Our senior producer is Julia Scott.
Josh Lash is our researcher and producer,
and our executive producer is Sasha Fegan.
Mixing on this episode by Jeff Sudakin,
original music by Ryan and Hays Holladay.
And a special thanks to the whole Center for Humane Technology team,
for making this podcast possible.
You can find show notes, transcripts,
and much more at humanetech.com.
And if you like the podcast,
we'd be grateful if you could rate it on Apple Podcasts
because it helps other people find the show.
And if you made it all the way here,
let me give one more thank you to you
for giving us your undivided attention.