The Journal. - Her Client Was Deepfaked. She Says xAI Is to Blame.
Episode Date: January 27, 2026
Ashley St. Clair, a conservative influencer who had a child with Elon Musk, sued Musk's artificial intelligence company xAI, alleging that its chatbot Grok generated and shared nonconsensual, sexually explicit images of her. St. Clair's lawsuit is emblematic of the thorny legal issues that surround new AI tools and deepfakes. It also confronts the question: Who is responsible for the content that users prompt chatbots to create? Jessica Mendoza spoke with St. Clair's lawyer, Carrie Goldberg, about the lawsuit.
Further Listening:
- Why Elon Musk's AI Chatbot Went Rogue
- How Elon Musk Pulled X Back From the Brink
Sign up for WSJ's free What's News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Elon Musk's AI chatbot Grok is facing intense criticism, accused of allowing X users to generate fake sexually explicit images.
Late last month, the popular AI chatbot Grok came under fire for a new feature.
An influx of explicit content coming after Grok recently enhanced its image generation abilities with a new model.
Grok, which is integrated on the platform X, began allowing users to edit images with text prompts.
X users quickly discovered they could use the feature to have Grok execute instructions like "take her clothes off" or "put her in a bikini."
Within days, X was flooded with non-consensual AI-generated images.
It's impacting hundreds of thousands of women worldwide.
Grok is allegedly producing thousands of undressed images per hour on X.
And experts are saying that the scale is unlike anything they have ever seen before.
Elon Musk, who owns X, has called criticisms
of Grok an effort to suppress free speech.
X later said that the platform has restricted users' ability
to use Grok to edit images of real people in revealing clothing.
The company also said it had blocked the ability
to generate images of real people in bikinis
and other revealing attire in places where it's illegal.
But at least one user says the damage has been done.
Ashley St. Clair is a 27-year-old conservative influencer,
who's known in part for having a child with Musk.
St. Clair said Grok created undressed images of her and depicted her in sexually explicit poses in response to user prompts.
Here she is on CBS.
The worst for me was seeing myself undressed, bent over, and then my toddler's backpack in the background.
Because I had to then see that and see myself violated in that way in such horrific images
and then put that same backpack on my son the next day.
St. Clair decided to sue xAI,
the company behind Grok.
AI should not be allowed to generate
undressed children and women.
That's what needs to happen.
St. Clair's lawsuit gets at the heart
of the thorny legal issues surrounding new AI tools,
and it confronts the question,
who is responsible for the content
that users prompt chatbots to create?
Last week, I sat down with St. Clair's lawyer,
a woman named Carrie Goldberg.
She's known for litigating online sexual harm cases
and has a particular strategy for holding companies liable for online content.
Carrie told me her goal is to help create new guardrails for an era of artificial intelligence.
I want this to set precedent so that this company and its competitors don't go back into the business of peddling in people's nude images.
Welcome to The Journal, our show about money, business, and power.
I'm Jessica Mendoza.
It's Tuesday,
January 27th.
Coming up on the show, a conversation with the lawyer taking on Grok and xAI.
This episode is brought to you by Fidelity.
You check how well something performs before you buy it.
Why should investing be any different?
Fidelity gets that performance matters most.
With sound financial advice and quality investment products, they're here to help accelerate
your dreams.
Chat with your advisor or visit Fidelity.ca/performance to learn more.
Commissions, fees and expenses may apply.
Read the fund's or ETF's prospectus before investing.
Funds and ETFs are not guaranteed.
Their values change and past performance may not be repeated.
Carrie Goldberg has built her reputation around internet abuse cases.
When I spoke to her, she was coming to the end of her workday at her Brooklyn law firm,
which has a memorable tagline.
Suing A-holes, psychos, trolls, and pervs, and taking on toxic cases against tech.
Carrie's mission is a personal one.
She started her firm in 2014,
after she said that an ex-boyfriend
threatened to share intimate photos of her.
At the time, most states, including New York,
didn't have laws to protect people from that.
And so I got this unfortunate education,
and I started my law firm
because I felt that other people needed help
against relentless stalkers.
And I quickly started getting cases of what, back then, was called revenge porn.
And so I was like, how, you know, this is intentional infliction of emotional distress.
How are these companies existing?
We should sue them.
And that's when I came up against Section 230 of the Communications Decency Act.
Section 230 is considered the bedrock of the Internet.
It was enacted in 1996, and it protects websites and social media platforms
from being held legally liable for the content that users post.
The law was meant to encourage free speech, and it helped the internet take off.
Supporters say that without Section 230, internet discourse as we know it wouldn't exist.
Platforms and sites would much more heavily censor user reviews, comments, and opinions,
or simply avoid hosting user content at all for fear of being sued.
But critics say the law has been a way for tech companies and platforms to avoid legal liability
for users doing things from selling weapons to posting hate speech and obscene content.
Carrie wanted to hold a company accountable even with Section 230 in place.
She needed a strategy.
And she had an idea.
She came up with a new take on an old legal theory, product liability.
So product liability is an area of law where you're holding companies responsible for the products that they release.
And so companies can be held responsible if they are releasing defective products, defectively designed, defectively manufactured, products where there aren't adequate warnings.
Product liability cases have led to things like better airbags in cars and safer beds for babies.
But product liability hadn't typically been used against online platforms until 2017, when Carrie filed a case against the dating app Grindr.
The case involved deepfaked profiles of Carrie's client.
Carrie argued that Grindr was designed with, quote, foreseeable harm, because, she said, at the time the app wasn't capable of screening and blocking known dangerous users.
We were like, well, okay, you're a dating app that relies on geolocation technology.
It's an absolute certainty that sometimes your product will be misused by rapists, stalkers, or other kinds of predators.
So if you've not built into your product technology to ban those abusers, then you've released an unsafe product into the stream of commerce.
And so in the Grindr case, the argument you were making was that this product, Grindr, this app was flawed, was designed in a way that was causing harm and needed to be remedied.
That's precisely the argument.
Grindr fought the case in court using Section 230.
And what happened?
The judge dismissed the case, and I appealed and appealed, and it got dismissed at every level.
A spokesperson for Grindr said the company is continually evaluating and enhancing its safety measures to keep bad actors off the platform.
But that theory has since been very effective in other cases and other lawsuits.
Like in 2021, when Carrie sued a video chat website called Omegle on behalf of a minor.
In that case, she argued in part that Omegle, as a product,
failed to adequately warn child users of adult predators on the site.
The two sides agreed to settle.
Following the suit, Omegle shut down.
When Ashley St. Clair approached Carrie about suing xAI over Grok earlier this month,
Carrie says she saw another opportunity to apply her product liability theory.
This is one of the best arguments I've ever had when it comes to overcoming
a tech company's defense of Section 230 immunity.
In the lawsuit, Carrie and St. Clair alleged that xAI should be liable for its product,
Grok, because the chatbot is, quote, unreasonably dangerous as designed.
We are saying that xAI, because of its Grok feature that undresses people, is not a reasonably safe product,
and that it was foreseeable through its design and manufacture, and its lack of warning,
that it would cause injuries like what befell Ashley.
Carrie filed St. Clair's lawsuit against xAI on January 15th.
Just one day before, X said in a blog post
that it had put in new measures to prevent Grok from, quote,
editing the images of real people in revealing clothing.
How would that affect your argument if X is saying that it had made these design changes already?
Well, first of all, my client was still being unclothed
after Grok made that representation.
I have images of her from the 15th of January,
where new images have been created by Grok of her undressed.
But secondly, we are thrilled that XAI has made that change.
But that doesn't account for the fact that they already cause all these injuries
to Ashley and women and children at a mass scale,
and they need to be held accountable for that.
The lawsuit was filed in New York and has since been moved to federal court.
It's currently in preliminary stages.
xAI did not respond to requests for comment.
In a court filing, xAI's lawyers said that St. Clair's claims are subject to dismissal under Section 230 of the Communications Decency Act.
xAI has also launched a countersuit against Ashley St. Clair in Texas, claiming she breached the company's terms of service agreement with her suit.
One thing Carrie is banking on with St. Clair's lawsuit is that chatbots are relatively new.
Because of that, she says they might present an opportunity for the courts to rethink their interpretation of the law.
I had, for a long time, been thinking about this idea that xAI, which owns Grok, should not be immune from liability under Section 230.
Because Grok, not a third party, is the one that is actually generating this material.
These companies are liable for their own content.
But couldn't xAI argue that Grok only creates those images at the request of users?
And so, like, it's the users, that third party, who are liable.
I mean, does that complicate the argument at all?
I mean, I don't see that.
You know, somebody typing in a prompt is materially different from Grok, you know, creating an actual image.
Section 230 is intended for situations where an online platform is just acting as
a passive publisher, not where it is itself creating the actual content. In this situation, Grok is
not working in the capacity of a publisher. It's actually spitting out the content.
It's generating the content is what you're saying. And I mean, certainly you could say that a
third-party user is also contributing to the content, but that doesn't mean that Grok isn't.
In early January, Elon Musk said, quote, they just want to suppress
free speech, in response to criticism of Grok's image generation.
I just want to kind of parse that.
Is there any truth to that statement in the sense that there's a risk to free speech when we restrict what people can create with a chatbot like Grok?
Well, I think that argument might be valid if it came to the government trying to create laws that restrict speech.
But when somebody's been harmed in a foreseeable way by content,
I don't see that argument flying in a situation where Grok is itself spitting out the content.
After the break, we look at what the law says when it comes to deep fakes.
And Carrie tells me why she thinks the courts are still the best venue for victims to be heard.
Use PDF spaces to generate a presentation.
Grab your docs, your permits, your move.
AI levels up your pitch, gets it in a groove.
Choose a template with your timeless cool.
Come on now, let's flex those tools.
Drive design, deliver, make it sing.
AI builds the deck so you can build that thing.
Do that, do that do.
Learn more at adobe.com slash do that with acrobat.
Grok isn't the only AI that can undress images of real people.
But one difference with Grok is that the tool is integrated with a social media platform,
which meant the images generated on X went public right away.
So even if XAI makes good on its promise to stop producing this content,
those images exist forever.
And they're circulated and they're seen by people.
Carrie says the public nature of these images
is why she made another legal claim in the lawsuit against XAI,
that Grok amounts to a public nuisance.
Public nuisance law addresses things like noise, cleanliness,
or safety in public spaces.
Because, you know, there were a lot of people harmed in a public space,
it allows for us to have, you know, reasonable facts to plead that xAI was acting as a
public nuisance, that it was operating in the public sphere and harming lots and lots of people.
And it really lends itself beautifully to this specific product that has long been calling itself
the public square of the internet.
The St. Clair lawsuit is coming at a time
when lawmakers are trying to figure out
how to handle AI and deepfakes.
Last year, Congress passed the Bipartisan
Take It Down Act.
The law makes it illegal for a person
to post a non-consensual,
sexualized deepfake of someone else.
And starting in May of this year,
it also requires that social media companies
take down deepfakes at a user's request
within 48 hours.
Critics say the law could lead to censorship.
Still, Carrie says laws like the Take It Down Act
fall short in important ways.
Well, I mean, I prefer to just go about things
using our court system because it's, you know,
that makes it so that just me and one client can get a ruling
and that can become precedent.
You know, I want more laws, if any are necessary,
that give victims a new cause of action
so that they can be in the power seat
and so they don't have to experiment with claims like product liability,
and they can actually just use, you know, a deep fake claim
that specifically is tailored to this exact behavior.
I think the thing that's so compelling about these digital forgery cases
is that it can really happen to anybody.
Like, if you have a face, you can be deepfaked.
And, you know, there's always been a lot of victim blaming
when it comes to victims of other kinds of image-based sexual abuse,
like that person was stupid enough to take the picture
or to send the picture and share it with that unreliable person.
But with deepfakes, everybody listening could become the victim of, you know,
technology altering your image.
It's interesting to me, though, that you said, you know,
you would rather take this through the courts and set precedent in that way,
rather than see laws be passed.
Can you explain why that is?
What's the benefit of doing it through the courts
versus, you know, potentially through Congress
or through state law?
Immediacy.
So when I sue, I mean, I, you know,
filed my lawsuit within nine days of when Ashley
was experiencing the impact of this.
The thing about regulation and laws
is that they're always catching up to the times.
And so I think that, when they're necessary,
they're great, but, you know, they respond to things that have, you know, been a
problem for, you know, sometimes years. So, I mean, I want laws, but I also want to just be
able to sue and go rogue in court. Ultimately, Carrie, what do you hope will come of
Ashley St. Clair's case against xAI? Well, I want to get into discovery, and I want to show, you know,
the quantity of images that were created and the number of other victims that were harmed.
And I want this to set precedent so that this company and its competitors don't go back into
the business of peddling in people's nude images.
I want to know what happened in the boardroom.
So what happened when they, like, found out that all these people were being harmed?
How much longer did they continue to have this product unleashed on the general public?
So I want to see what was happening on a high level before they actually took action.
What happens if you lose or if the case gets thrown out?
Like, are there consequences?
Could it set back efforts to get compensation for others who suffered?
No.
I mean, we would appeal and appeal and appeal.
You know, usually when it's a case like this where I know in my guts that it's the right theory,
I will keep suing under it until it works.
Carrie, thank you so much for your time.
Thank you, Jess, for having me. I appreciate it.
That's all for today, Tuesday, January 27th.
The Journal is a co-production of Spotify and the Wall Street Journal.
Additional reporting in this episode by Georgia Wells.
Thanks for listening. See you tomorrow.
