On with Kara Swisher - Elon’s “Nudify” Mess: How X Supercharged Deepfakes
Episode Date: January 22, 2026On Christmas Eve, Elon Musk’s X rolled out an in-app tool that lets users alter other people’s photos and post the results directly in reply. With minimal safeguards, it quickly became a pipeline ...for sexualized, non-consensual deepfakes, including imagery involving minors, delivered straight into victims’ notifications. Renée DiResta, Hany Farid, and Casey Newton join Kara to dig into the scale of the harm, the failure of app stores and regulators to act quickly, and why the “free speech” rhetoric used to defend the abuse is incoherent. Kara explores what accountability could look like — and what comes next as AI tools get more powerful. Renée DiResta is the former technical research manager at Stanford's Internet Observatory. She researched online CSAM for years and is one of the world’s leading experts on online disinformation and propaganda. She’s also the author of Invisible Rulers: The People Who Turn Lies into Reality. Hany Farid is a professor of computer sciences and engineering at the University of California, Berkeley. He’s been described as the father of digital image forensics and has spent years developing tools to combat CSAM. Casey Newton is the founder of the tech newsletter Platformer and the co-host of The New York Times podcast Hard Fork. This episode was recorded on Tuesday, January 20th. When reached for comment, a spokesperson for X referred us to a a statement post on X, which reads in part: We remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content. We take action to remove high-priority violative content, including Child Sexual Abuse Material (CSAM) and non-consensual nudity, taking appropriate action against accounts that violate our X Rules. We also report accounts seeking Child Sexual Exploitation materials to law enforcement authorities as necessary. Questions? Comments? Email us at on@voxmedia.com or find us on YouTube, Instagram, TikTok, Threads, and Bluesky @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Discussion (0)
Hi, everyone from New York Magazine and the Vox Media Podcast Network.
This is On with Kara Swisher, and I'm Kara Swisher.
On Christmas Eve, Elon Musk announced a new AI feature called Grok Image Edit.
It's an image editing tool built into X, the social platform formerly known as Twitter.
The tool allows users to create AI-edited versions of other users' images and then post the AI version as a reply to the original post on X.
But because X rolled out Grok Image Edit with almost...
no safeguards, strangers flooded X with sexualized deepfakes of real people. For example,
Grock could take a photo of a woman on X and create a realistic deepfake that looks like the
original image, except the woman is now in a bikini and has, quote, donut glaze on her face.
It could do the same thing with images of children, and it did creating countless deep fakes.
To add insult to injury, because those images were often posted as replies to the original tweet,
the people who were victimized got notifications every time someone interacted or replied to the sexualized deepfake image.
So in essence, Elon built a tool for creating and distributing sexualized deepfakes,
and he did so in the most humiliating and degrading way possible for the victims.
AdX has arguably become the tool for creating and distributing AI-generated child sexual abuse material or CSAM,
what used to be known as child porn, and frankly,
still is. No surprise, governments across the globe have begun investigations into GROC,
and after publicly mocking the controversy and accusing his critics of censorship,
XAI has started putting guardrails on GROC with mixed success. It's deeply ironic,
considering that when Elon bought Twitter, he vowed that getting CSAM off the site was
the, quote, priority number one, and quote, will forever be our top priority. But it's not
surprising, given that Elon has positioned GROC as the, quote, spicy and anti-susiness.
anti-woke chatbot, and it's not the first time Grok has made news for generating non-consensual
deep fakes. Also, Elon, as he's proven time and again, always takes the heinous position on any
subject, and this, I can't believe I'm saying this, is his most heinous. The backlash against
Grock has been swift and widespread, thankfully, but so far the consequences have been minimal.
I think it's important to talk about this, because I remember when everyone agreed
the child porn was wrong.
The fact that we're debating it right now
is sickening. As a parent,
as a reporter, as a citizen
of this world, it's grotesque
what these people are doing and
benefiting from it financially
and otherwise.
My guest today are Renee DeResta,
Hani Farid, and Casey Newton.
Renee DeResta is the former
technical research manager at Stanford's
Internet Observatory. She studied
CSAM for years and is one of the
world's leading experts on online disinformation
and propaganda. She's also the author of Invisible Rulers, The People Who Turn Lies into Reality.
Hani Farid is a professor of computer sciences and engineering at the University of California at Berkeley.
He's been described as the father of digital image forensics and has spent years developing tools to combat CSAM.
Casey Newton is the founder of the tech newsletter platformer and the co-host of the New York Times podcast, Hard Fork.
This is a difficult but important topic. It means.
a lot to me. So please stick around. And to all the tech people who continue to resist doing
anything about safety, especially of children, we are not going to stop until you lay down and
change the situation. Support for this show comes from Vanta. Vanta uses AI and automation to get
you compliant fast, simplify your audit process and unblocked deals so you can prove to customers
that you take security seriously. You can think of Vanta as you're always on
AI-powered security expert who scales with you. That's why top startups like cursor, linear,
and replet use Vanta to get and stay secure. Get started at vanta.com slash Vox. That's v-a-n-tta.com
slash Vanta. Vanta.com slash Vox. Reney, honey, and Casey, thanks for coming on on.
Hey, Kara. Hey, this is a topic I really wanted to do a show about. I'm sort of incandescent as a parent
and as a person who's watched this go on.
And so I have feels about it.
So just get ready.
But thanks for being here.
It's good to be with you.
Grock has been in the news for weeks
because it's facilitating the creation
and distribution of deeply disturbing sexualized images.
Some of them are non-consensual deepfakes
and potentially even C-SAM.
But C-SAM deep fakes and revenge porn are serious problems
that they plagued the Internet for years.
This is not a new thing.
So before we dive into the issues with GROC,
let's talk about the scope and scale of the problem.
Let's hear first from Renee,
than Casey, then Hani.
Okay. So as you note, this has been an issue for a very, very long time.
Grock is not the first, you know, model to be capable of doing this.
One of the things that I think we should talk about, first, it began to happen on GROC late December.
So maybe we're almost a month, a month into this issue.
It began producing what you might call nudified images.
So nudification, the ability to do what's called in-painting, to change an image that already exists is a technology that,
again, has been around for a while. It's got a lot of other legitimate uses. It became something
that on GROC, you could see it done very visually in the public eye. So this was something that
had previously been done much more in platforms and communities that were dedicated to this kind of thing,
right? So there were always, you know, communities that you could go and find Discord servers,
places where users would do this. Then there was a spate of apps that began to pop up that
The app stores were playing whackamol with where kids started doing this to harass their classmates.
So there was this sort of small scale little apps that were where this was happening.
But Grok really made it something that was very front-centered.
People were being harassed in their own replies.
And it began to be something that took that harassment out of the dark corners of the web, which was the threat model that a lot of people who have studied this for a very long time.
That was how we had previously seen it manifest.
And it took it and became something where it was happening at around almost 6,700 posts per hour was the stat that some of the researchers who tracked it took it to.
So they essentially just amplified it.
Yeah, it became the place where you could create, distribute, and honestly normalized.
It became something that all of a sudden was normalized.
You could just go and you could do it.
And that was what was different about it.
Okay. Casey?
Yeah.
I mean, as Renee notes,
nudification has been a problem going back many years.
In the recent past, though,
you couldn't get one of the frontier models
to generate a photorealistic image of a naked person.
You would kind of have to hack your way around it
and deal with a subpar image.
But nonetheless, there were tons of unscrupulous app makers
who were buying ads on meta
and who were buying ads on Apple and Google's PlayStores
getting these apps into the hands of thousands of people who are using them to terrorize,
especially young women. And it was a bad problem. And it did lead to things like the Take
a Down Act, which did pass here in the United States. But as Renee notes, what has happened on
Grock has just taken it to an insane new degree where you don't have to go scrounge around on the
dark web to find one of these apps or use a subpar model. You can now use one of those powerful
models that we have and harass people directly in their own replies.
In public.
Right, in public.
And so this is this spicy mode, honey?
Is that what, explain that?
So first, I think Renee got it exactly right.
What Grock did is they centralized the creation, the distribution, and the normalization of this content, which now dates 10 years back.
In fact, the very term, deep fake, comes from the moniker of a Reddit user who used the then-nacent technology to create the first non-consensual intimate imagery.
So this is neither unintended or unexpected.
And it was also preventable.
So your point, Kara, about the spicy mode is open AIs, chat GPT, Google's Gemini, have pretty good guardrails.
And they have guardrails on the semantic inputs, what you type in.
They won't allow certain types of prompts.
They have semantic guardrails on the output.
They will analyze the output image to determine if it shows a child, if it's sexually explicit.
And Grock decided we don't want to do that, hence the spicy mode.
So this is neither, you know, sometimes we have technology where people misuse it in a way that is unintentional.
or unexpected. This is neither. This was we knew exactly what was going to happen. Elon Musk knew
what he was doing and he allowed it to happen. And it's been awful. And Renee also made a really good
point here is that this stuff has been around for a long time. But it was sort of- Yeah, talk about that,
the larger, broader decade. Yeah, let's talk about that. So we're seeing this up and down the stack.
I mean, there's hardly a day goes by where I don't see an article about some school kids
creating this type of images of their fellow classmates and then, of course, sharing it in
distributing it. People are using it to weaponize against reporters, against academics, against
politicians, against people they just don't like. And it is, you know, there's multiple
facts parts to this. So there's obviously the people who are creating the content. There's the
tech they're using. There's the platform that empowers them. But then we shouldn't also forget
there's an entire ecosystem that's enabling this. Advertis are still advertising on X.
Apple and Google are still hosting these apps. Ads are being delivered by Google and
and Facebook. There's an entire ecosystem that is monetizing and profiting from this, not just the
technology companies that create the content. And we shouldn't forget that entire Silicon Valley
ecosystem is profiting them quite handsomely from the creation and distribution of this content.
Including CSAM. Including CSAM, which is, you know, look, maybe reasonable people want to
have a conversation about non-consensual intimate imagery. I don't think there's much of a conversation,
but there's no conversation around CSAM, right? It's illegal and it's awful and it's dangerous.
So this is not, you know, we're not talking about constitutional rights.
Right.
We're talking about illegal content.
We've agreed on a long time ago.
And Casey, a lot has happened in just a few weeks.
Foreign governments have Block Grok.
Other ones are investigating.
So is California's Attorney General.
The UK is working on passing a new law to make illegal to create non-consensual intimate images.
But first, Elon mock the controversy.
Then XAI made Grucks added a premium feature, meaning you had to pay for it, I guess.
Now XAI says they've instituted guardrails to stop it from nudifying images.
with limited success. Give us an update. There is an article in Bloomberg by Kurt Wagner saying that there
is still a trickle of images that the public GROC account is still producing. So I think whenever
GROC says that guardrails are in place, that is a statement we should view with deep suspicion.
It continues to be the case that the standalone GROC app, so this is just something that you can
download onto your phone and is rated for children 13 plus in the Apple app,
you can use that to make nude images of women.
And if you go onto Reddit, there are now thriving subreddits where people are just competing
to see who can make the most explicit porn using GROC.
So, no, there really are not meaningful safeguards in the way that I think most regulators
would want to see here.
And we're regulators right now.
Well, you know, as you know, we have started to see regulators in a number of countries
step up.
The countries with the strictest anti-porn laws have been the fastest to act.
GROC has now just banned wholesale in Indonesia and Malaysia. But if you look in the European Union,
their plan to take action, the U.K. is even the United States Senate passed a law designed to prevent
some of the harms that we're seeing from GROC, although we'll see if it passes in the House.
So this has managed to get the attention of regulators around the world. But an ongoing question with
Elon is what sort of regulation, what kind of fine, what kind of threat actually results in the
harm being addressed because I think it's just been the case over and over again that that's
really hard to do.
He just ignores it.
Now, Renee, you've studied CSAM online for years.
In 2023, you contributed to a paper that predicted that AI generated CSAM would be a problem.
Looking back, what did you get right and what else didn't you see coming besides Elon's heinous
personality?
Well, I think the willingness to fight about it actually was what has surprised me most.
about this recent debacle.
The, you know, I posted about this on substack and I got some comment and notes like, well,
this is all just a plot to censor X.
And I thought, really, is that where we're going here?
We're using the censorship, like the mental stopword censorship is where we're going
with this.
But, but yes, I mean, that is, that is in fact where we're going with this.
So I think that it is that term that he has chosen to use for any regulation is censorship.
Any barrier of any kind is censorship.
any restriction is censorship. And now we are using that term to apply to the generation of images of
underage girls, right, is censorship, anything that throttles that. And it works very well,
unfortunately, because I think that term, that word, you know, it has a couple different meanings,
right? It refers to legally, it refers to like kind of three different things, right? There's
factual claims about speech suppression saying like, this content was taken down by a private
platform. That's one meaning of the word. It's a least.
legal claim about U.S. law, right, a violation of the First Amendment, which doesn't apply to private
platforms at all, but nonetheless, they use it in that way. And then it's a normative claim about free speech
values. And that is the expansive way that Elon is trying to use it at this point to normalize
what is happening on his platform. And that is how it is being used. It is the idea that it is an
affront to your free speech rights to not be able to generate nude images of other people. And that
That's what surprised me about it, actually. Including children. Including children.
And so that's kind of anambla kind of argument. Yes. So seeing that, that expansion is where I was like, man, I really, you know, we used to have these fights about content moderation, right? The people who would scream at you that content moderation is censorship. But then they would say, but not CSAM. Okay, that's not censorship. Okay. That's not censorship. Okay. That's not censorship. Okay. That's not. That's not censorship too. We just, that's not censorship. We just, that's not. We just took that off the.
table. We've expanded it to include that now also. And that's because they have to keep expanding
that term to encompass any possible conceivable restriction. And that's what you're seeing here.
As Ofcom has gone after, Offcom being the British communication safety regulator, as they have gone
after Elon, naturally he is reframing this as, oh, they just want to censor free speech on X. And they're
mad at me for this that and the other thing. So that has been what has surprised me. Nothing related to the
actual content. I mean, you know, when Stanford Internet Observatory existed,
we used to go and we would present at the Crimes Against Children Conference, and we would see
examples of in-painting, examples of all of the different ways in which, you know, the actual
technology has manifested and changed over the last, you know, even just five years, like the,
the advances in which this kind of content creation has shifted. So nothing about the actual
technology is surprising here, nothing about the ways people try to get around guardrails or,
but the mainstreaming of it and then the application of censorship as a normative claim in this way,
and the aggression has been very surprising to see.
So, Honey, you develop technology that can flag previously identified images of CSAM.
You've developed tools to identify AI-created images.
But if GROC and X help users create and distribute sexualized images of minors, there's obviously no technical solution for that.
Even if tech companies make good faith effort prevent their chapouts from creating sexualized images in the first place,
AI and have to be exacerbate what was already a very serious problem.
Talk about how these AI tools complicate the efforts to fight it.
For example, deepfakes can be used to extort miners.
Someone could attempt to make an authentic piece of CSAM
look like it's generated from AI
in order to avoid scrutiny from authorities.
There's all manner of problems here.
Yeah, okay, so there's many sort of threads to pull on here.
So let's see a few things.
So first of all, when the National Center
for Missing Exploited Children, Nick,
has previously identified and categorized CSAM,
we extract what's called the perceptual hash,
and then we can prevent the re-upload of that content
on things like Facebook and Instagram and TikTok and YouTube.
That's called photo DNA.
That's the work that I worked on with Microsoft back in the day.
Right.
And all the companies have been cooperating on that.
Yeah, many of the companies, I mean, I would say, you know, it was kicking and screaming.
They were cooperating.
Now they, now most of the big tech companies are on board, including X, by the way.
But it was not an easy ass because as Renee was just saying, nobody wanted to open the doors of any content moderation.
Because the thought was, oh, if we open it for CSAM, well, whatever.
else are you going to come for? So it was a fight back in the day to do this back in the early
knots. But, okay, here we are. The problem, of course, with AI generated content is that it's not
been previously seen. And in fact, you can produce it, I can produce hundreds of images on
my laptop every minute. And so there's this flood of content that is coming on the internet
that we simply don't have the volume or the capacity to look at every single piece of content
and flag it. That's number one. Number two is that, okay, maybe if the tech was very,
very, very good. We could deal with the public-facing version of this, things that are posted on
action Facebook. But what you are seeing, as you pointed correctly, is that bad guys are taking
innocent photos of children, nudifying them, sending them in things like Snap and Discord and
private messaging that we don't have, we're not privy to, and then extorting them and those kids are
taking their own lives, you know, destroying their lives. And it is horrific what we are seeing.
And this is happening from organized cybercriminals from around the world.
And we have seen story after story after story.
And here's the thing is, you know, when you look at these technologies, everything we always see, and this has been true of Silicon Valley, as you know, for over 25 years, is safety is always an afterthought.
We always try to backfill safety in.
It's not even a thought.
It's not even a thought.
I'm being generous here, right?
And you can't backfill safety.
You can't do what X is trying to do now and be like, okay, we made a mess.
We've normalized this.
We've given everybody a taste of it.
Now we're going to try to backtrack it.
It's not going to work.
I'll add one more person, by the way, if I made, to the list of folks that are going after X,
which is Australia's E-Safety Commissioner, Julie Inman Grant, has open an investigation,
and they're doing really, has consistently doing really good work to try to hold these tech companies accountable for this.
But this is a mess that was both preventable and not unexpected.
We'll be back in a minute.
Support for this show comes from Acorns.
A lot of people focus on where they're not.
money is today, but Acorns is the financial wellness app that cares about where your money is going
tomorrow. And with Acorn's potential screen, you can find out what your money is capable of.
Acorns is a smart way to give your money a chance to grow. You can sign up in minutes and start
automatically investing your spare money, even if all you've got is spare change. I haven't
tried Acorns yet, but Scott has, Scott. Tell me about it. It's a wonderful way to start investing.
It rounds up to the nearest dollar and then automatically invest in a low-cost index fund. It's a great
way to start investing. And it's easy to use if you're not a super seamless. It's fantastic. And it's
it's all about establishing good habits and financial security at a young age pretty seamlessly.
Thank you for that, Scott. That's a great thing about Acorns. It grows with you. Sign up now with Acorns and boost
your new account with a $5 bonus investment. Join the over 14 million all-time customers who've
already saved and invested over $27 billion with Acorns. Head to Acorns.com slash Kara or download the Acorns app
to get started.
Paid non-client endorsement, compensation provides incentive to positively promote Acorns.
Tier 2 compensation provided.
Potential subject to various factors such as customers' accounts, age, and investment settings,
does not include Acorn's fees.
Results do not predict or represent the performance of any Acorn portfolio.
Investment results will vary investing involves risk.
Acorn Advisors LLC and SEC registered investment advisor.
View important disclosures at Acorns.com.
Kara. Support for On with Kara Swisher comes from Grooons. If you're looking for a health goal that you
can actually stick to, you might want to check out Grooons. Grunes is a simple daily habit that
deliver real benefits with minimal effort. Their convenient, comprehensive formula packed into a
snack pack of gummies a day. This isn't a multivitam and a greens gummy or a prebiotic. It's all
of those things and then some at a fraction of the price. And bonus, it tastes great. Grun's ingredients
are backed by over 35,000 research publications,
while generic multivitamins contain only seven to nine vitamins
grunes have more than 20 vitamins and minerals
and 60 ingredients which include nutrient dense and whole foods.
That includes six grams of prebiotic fiber,
which is three times the amount of dietary fiber
compared to the leading greens powders
and more than two cups of broccoli.
It's a daily snack pack because you can't fit the amount of nutrients
grunes does into just one gummy.
Plus, that makes it a fun treat to look forward to
every day. Kick off the new year
right and save up to 52%
off with the code Kara at
groo.co. That's code
Kara K-A-R-G-R-U-N-S dot
CO. This week on
Net Worth and Chill, I'm breaking down the
essential money tips for every stage of your
relationship. From the first date to
forever. Who actually pays on the first date?
How do you split rent when you move in together
if one of you makes way more than the other
person? And yes, we need to talk
about pre-ups. Plus, I'm sharing why I believe in equity over equality when it comes to splitting costs.
Whether you're single and swiping, moving in with your partner, or planning your wedding,
this episode will help you navigate the uncomfortable money conversations that can make or break your relationship.
Listen wherever you get your podcasts or watch on YouTube.com slash your rich BFF.
So Casey, when it comes to a lot of images, we're actually in a legal gray area in the U.S.
at least. The Take It Down Act makes it illegal to publish non-consensual intimate visual
depictions, both authentic and deep-pake, but it doesn't take into effect until later this year.
More importantly, what qualifies as an intimate visual depiction isn't clear cut.
The Senate just passed that, as you said, the Defiance Act for the second time,
which allows victims of non-consensual explicit AI images to sue, but you run into the same
as you do with the Take-It-Down Act. It's not clear if the AI platforms themselves are liable.
realistically talk about the legal recourse of victims of Grog's deep fake binge have right now.
Well, I mean, it's terrible because it places the onus on the victim to go out and get justice for themselves, right?
Like, they have already been attacked in public and, you know, suffered in who knows what way.
And now starting in May, there will at least be some avenue through the Take It Down Act through which they can go and try to get some sort of, you know, justice done.
But, you know, my fear here is that this is going to be yet another horrible customer service experience from a tech platform where you go onto a website and there's a form that you fill out and you send it in and you just pray that someone answers it. Right. So I actually right now, you know, do not have a lot of confidence that this is going to feel like it is getting justice for any of the victims. And if you think about, you know, the sorts of people that are often targeted in these attacks, it is often women. It is often women who are, you know, politicians.
or leaders of some other type. And these sorts of attacks are used in an effort to silence them and to sort of drive them out of public life.
Speaking of which Ashley St. Clair, the former conservative influencer and mother of a child with Elon has sued X over these deepfakes of her. And it's countersued saying she violated the terms of use by actually suing for her abuse.
Yeah. Well, in particular, because she didn't file in the Northern District of Texas, you know, which is just a frivolous, you know, legal action designed to make this, you know, more expensive.
have been horrible for her. But my point is here by the time that it comes to go fill out the form on
the website that says, hi, like, I've been a victim of a horrible attack, the damage has really
already been done here, right? And my fear is that this is just going to drive a lot of women in
particular off of social platforms, which, of course, serves the interests of the fascists who run X.com.
Okay. Can I add one more thing, too, is that the ticket down, the ticket down has a 48-hour window.
That's an attorney on the internet. The half-life of a social media post is about 90 seconds.
48-hour window is completely absurd. The content has been online. It's been online for two days,
and it's spread wildly. There's no taking it down at this point. So the whole thing, I think,
is a fool's errand. So now, Google, as you mentioned, Renee, Apple and Google could have kicked
X off their app stores for violating their guidelines so far, Tim Cook and Satchindella have done
nothing. They've said nothing. Renee, you wrote on threads that the app stores wouldn't do anything.
Let's have you read what you posted. Okay, here we go. It's too big to regulate. It's not a two-bit
nudify app and Elon will howl censorship about second order consequences. He uses the word even when
Europeans find him over blue check fraud. Similarly, congressmen that talk a big game about protecting
kids with Zuckerberg are afraid of Elon. Josh Hawley and Marsha Blackburn should be speaking up.
So talk about why they aren't and why they're upset about Elon in particular. So as we were
talking about a little bit earlier with this small two bit nudify apps, there is one use for those
apps. We all know what they are for. They are for harassment. That is, I, at a,
And they are often made by kind of indie developers.
And so when they are, you know, when the mole is whacked, nobody complains.
Or if they complain, you're not going to hear about it.
Nobody really cares.
When Elon complains, the world hears about it, right?
There is an entire army of people who will come out and will howl censorship right along with him.
As I mentioned, again, we have now seen the expansion of that term to cover your perceived right to generate news of other people.
As we've seen, you know, Ashley St. Clair had the audacity to complain.
She had her blue check revoked. She was demonetized, right? It was immediately made clear that that is the cost of complaining. When you do things to Elon's companies, all of a sudden you get tariffed, right? There are people who get their visas revoked because they have the audacity to, you know, to implement to implement fines on X for dishonest business practices, right? When X doesn't adhere to the terms of the Digital Services Act that becomes an international issue.
So the dynamics around regulating X are fraught to begin with.
Because Josh Hawley and Marsha Blackburn have been very vocal on this issue.
Right. So I'll get to them in just one second. But the other thing I'll say about this, though, is that the other, the realistic point, though, is that the integration of GROC and X is a challenge, right?
So Malaysia and Indonesia block GROC. The challenge, though, is that people use X to communicate. It is a speech platform. And when you block X, you are blocking a lot of other types of speech that have nothing to.
do with notification. And that is realistically one of the reasons why, when I say it's sort of too big to block, it has a lot of other issues that go along with it. And so when you block the notification piece, you have a lot of other kind of second order effects. But you also have Marsha Blackburn, who if you've heard her speak in tech hearings, she gets very, very upset about two things. One, AI doing bad things, right? You've heard her complain about, you know, where they say things like they make up accusations about conservatives and things like this. You'd think she'd be very upset about them nudifying women.
She's a strong proponent of defending women. Two, defending children. These are people who support COSA.
Josh Hawley has a very strong Christian populist sentiment in his kind of ethics underlying why he wants to see tech regulated, constantly writing letters to Zuckerberg, asking him to account for himself. And the two of them were entirely silent about this. The person who spoke up was Ted Cruz, who spoke up, but then also promptly posted a photo of himself with Elon, right? So you have this tension here.
which is he knows this is wrong, but also is very, very afraid of being seen as challenging Elon,
who is a major Republican Party donor at this point.
Right, he's just given $10 million to a candidate in Kentucky, for example.
Crack isn't the only AI chatbot with vulnerabilities around Deepak's, obviously, or C-Span.
Obviously, X-AI seems to care less, but creating robust guardrails for a chatbot image generators is legally perilous.
In order to fix any of the vulnerabilities, you have to find them.
That means red-teaming the chatbots, in other words, trying to get them to actually generate.
CSAM images in order to identify loopholes and close them. But because of the ways the laws are
written, anyone read teaming a chat bot for CSAM might actually be breaking law. Honey, you've
testified about this issue in CSAM. What would you like to see to help create a more robust
infrastructure to fight it? Because that's one of the things that they, that's the problem here,
just like Ashley St. Clair's suing and then being sued for suing. Yeah. Because she's not supposed to
sue under the terms of service. So I do think, you know, we should add another component to the
that as somebody who has worked in the space and has viewed this content as part of my work,
it is horrific.
You don't want people looking at this.
Even the fake stuff is gruesome and is the thing that nightmares are made of.
But I do think it is necessary and we do need robust red teaming in.
We need mechanisms to make sure.
And you're 100% right that we continue to follow the path of move fast and break things.
And look, some of these things are really hard problems.
And nobody's expecting perfection here.
nobody is saying you can never, ever, ever do this.
But when you make it so easy, it is really inexcusable.
And I think there are many, many things we can do from the guardrails on the training data.
This is work that Renee has done, semantic guardrails on the input, semantic guardrails on the output,
really vigorous red teaming.
But the thing, of course, that you know is that none of this is good for profits, right?
And so what we need here is really good either liability or regulatory overhead.
head to create an incentive for the companies to do better because what we know is left to their
own devices, they won't do better. And so it's on us as the public and his regulators and the
court to force that upon them. Yeah, now they've also created a media narrative. So let's take
a step I can analyze the larger implications for the debacle. Elon explicitly created GROC to be,
quote, anti-woke. And the issue with the GROC image edit is, besides making it exceptionally easy
to create these sexualized deep face, it's made it equally easy to use those images to
humiliate and harass victims, as you've all noted. This is part of a much larger problem of bullying
and denigrating women online. Casey, you've mentioned this is part and parcel of the larger trend
of using harassment to, as you said, silence women, push them out of the public discourse.
I'd love to hear each of you talk about how this graph in this larger systemic issue of
harassment and silencing. Casey, you start and then Renee and then Honey. Well, I mean, look,
women have never had an easy time online, right? Like going back to the earliest days of Twitter,
you know, women were also having a terrible time. And, you know, under Jack Dorsey and Dick Costello,
the company also massively underinvested in tools that would just, you know, keep women not just
safe, but also just feel comfortable participating. I think the fresh shock here is just that the
company itself makes a tool that can strip a woman down to her underwear and shove that into her
mentions. And the entire executive leadership team is just laughing it off, right? I mean, it's like,
It's like a level of moral depravity I just didn't think we would get to in my lifetime, but now here we are.
Renee?
I think I would just co-sign that.
I think, again, it's where leadership has gone.
It's the moral degradation on that front.
I think we had seen a trend from 2015 with, you know, Gamergate and attitudes around how should we think about harassment and the fact that if you think about free expression,
there was a balancing act, right, a recognition that certain types of speech push other types of users out of the conversation. And so you wanted to try to maximize that equity to create that environment that allowed the greatest possible number of users to continue to contribute. So saying we're going to allow these people to express themselves by creating nude images of those other people, right? It means that those other people are going to exit and go somewhere else. They're going to cease contributing. Their speech is going to be stifled. They are going to be chilled.
But it's almost like we've sort of reverted back 10 years into that view of equities and conversation as we've just decided that there's a kind of new sheriff in town and this much just doesn't care.
Yeah, doesn't care.
More than doesn't care.
It thinks it's hysterical, honey.
But remember, Kara, remember, you're not allowed to create an account on X that post the recent whereabouts of Elon's plane because that would make him unsafe and it would make him feel like he can't speak.
But if you wanted to strip down any women on X, that's fine.
Fair point, Casey.
Honey?
Yeah, I'll add two things here.
One is, while women are the brunt of a lot of this, it's also all underrepresented groups
have been targeted for years and continue to be targeted, including children.
So that umbrella is actually quite large of people who are being harassed online.
The second thing, and I want to sort of reemphasize something Renee said.
I always like it when people like Elon must talk about free speech because what he really means is free speech for me and people who agree with me.
If you are a true believer in free speech, you want a platform that brings all voices, not just the voices of people who agree with you. And by allowing this type of content, you are in fact attacking free speech for lots and lots of people. And so if you want to be a free speech person, fine, be that. But let's at least be consistent with that worldview.
Oh, you've noticed the hypocrisy.
Shocking, isn't it? Shocking. It's been that way forever, actually.
I know, and I know I'm stating the obvious, but it's, you know, I sort of really, that hypocrisy in particular really frustrates me because, first of all, I think there's lots of restrictions on free speech, first of all, but this is not a free speech issue. This is absolutely a red herring that is used to silence critics.
That is correct. So the right obviously has also been obsessed with pedophilia. I was walking across Union Square Park this weekend. The Q&ON people are still there. Still there, still protesting something. Pisagate and Q&Non were about supposed pedophilia.
of file rings. Obviously so is the Epstein story, though that one is grounded in reality. And then
there's the fixation with the drag queen story hour and calling LGBTQ IA people groomers. And yet we see
this casual attitude towards Elon and Gruck's exploitation and harassment of minors online.
It's never been about pedophilia. It's never been about this. That's the point, right?
We don't like some people. We're going to tag them with something that seems like it's impossible
to ignore. And then when your own guy gets tagged with it, we are silent. This is never.
been about protecting children. Right. This is what Elon did to Yoel Roth, and as you all know,
Yoel was the head of safety at Twitter before you just quit. And then the minute he quit, Elon attacked
him for being a pedophil. Which that's his go-to. Pedophilia is Elon's go-to. Trans or pedophilia's
go-to. Renee, you and many other researchers who study online hate, disinformation and misinformation and
C-SPAN have essentially been harassed out of a job by MAGA lawsuits, congressional letters,
funding threats and now deportation threats. In the process, they've gutted organizations
that research ways to fight CSAM and deepfakes, like your former employer, as you noted just earlier,
the Stanford Internet Observatory. What's the current state of play when it comes to research on
CSAM? Well, I mean, losing SIO is a really big deal. When I got this, I got a text from a reporter,
gosh, it might have been like New Year's Day, I think, talking about some of what was happening and saying,
hey, I've got this, I have this tip, this person saying they think they have AI generated CSAM.
I don't know what to do with it. And I said, well, you know, don't look at it.
Right, right. But then there's this question of like, okay, where do we send it? And, you know, I was kind of DMing with my former colleagues trying to figure out, all right, well, who can we send this to? Right. We used to have special mechanisms for dealing with it. And obviously there's like, you know, Nick Mick and a bunch of other places. But as far as like researchers who are still in the space, this question of.
who were the research groups that are, you know, they're still equipped to do certain types of
work on this. I think there are different groups that had different specialties that are
no longer operational in part because of the funding cuts and other things where, you know,
saying the rather gleeful dismantling of research centers because maybe they studied misinformation
or maybe they had the audacity to say the 2020 election wasn't rigged and stolen, you know, which it wasn't, right?
So it doesn't exist.
Right, exactly.
Well, I mean, SIO shut down as a result of that.
I think I'm heading into year three of the litigation to court cases against us.
And Jim Jordan sent yet another letter to Stanford complaining about researcher speech.
You know, they had a dinner apparently with, funny, I have Julianman Grant and a few people, which Michael Schellenberger and Jim Jordan reframed as some sort of censorship cabal. So they sent another letter demanding emails to try to get to the bottom of that. And so the harassment continues. The stupid conspiracy theories continue. And meanwhile, the question of who does important work to actually solve real problems, you know, the centers that used to really specialize in that get.
shut down.
Do not exist.
And they continue to ask someone like you, especially that ever idiotic Michael Schellenberger.
Honey, your work has been somewhat controversial, though, in the industry, people that are not
Michael Schellenberger, like Mike Maznick, the founder of TechDeviel, you're too willing to
compromise the privacy rights of everyone who use the internet order to try to catch people
who create or distribute C-SAM.
Give us your best-faced version of your critics' argument and then explain the trade-offs
between privacy and child safety online and why you think they're worth it.
Well, first, let's acknowledge that there is tension here between privacy and safety.
And that's not a strictly online thing. It's an offline thing, too. When I go to the airport,
you know, they get to look through my bags and my person. So there's a tradeoff we have to make.
And so one position to say is absolute privacy for everybody and safety is a second, third thought.
The other is privacy for nobody and it's, you know, that's a problem also. So I think reasonable people can disagree here.
My position has been this.
When we talk about privacy, we should talk about privacy for everybody, not just you and me.
Because when I talk about safety measures for children, that is a privacy issue for the children.
And so I reject this notion that I'm throwing privacy under the bus, right?
Now, the argument that I've heard that goes something like this, that if you do this, go searching for child sexual abuse material,
well, then you will enable people to do X, Y, Z, the sloth.
slippery slope argument. And the reality is that when we built photo DNA back in the day, we heard
those same arguments. And it just was not true. People said to me, in my face, you are going to
destroy the internet. You will destroy speech, and it was not true. I reject the notion that we can't
have reasonable safety measures and reasonable privacy. This notion that everything absolutely has to be
end-to-end encrypted and nobody gets to do anything, I think sort of defies the sniff test when it
comes to the physical world. Nobody would say that when you go to the airport, nobody's allowed to
look in your bag because it's a privacy issue. So what I would like is for this to stop being a
conversation about privacy and safety and safety. Because this is an issue of privacy for everybody
and a safety for everybody. And I think as we have done in the physical world, we have to find a
compromise. And I think that that is not a conversation that has been particularly healthy
in the last few years. Yeah, I don't know why it shifts so drastically when it comes to
online child pornography. I find it strange. I find it strange. I find it strange.
too. And here's the thing, I mean, maybe 20 years ago we could talk about an online world and an
offline world. There's no distinction anymore. There's no online and offline world. There's the
world. And it's one world. And we have to start coming to grips with that. We'll be back in a
minute. Support for On with Kara Swisher comes from Indeed. Right now, there's a talented person
out in the world who could help take your business to the next level. But finding that person
doesn't need to be a grind. Just use Indeed sponsored jobs. It boosts your job posting to
reach quality candidates so you can connect with the exact people you want faster, and it makes a big
difference. According to Indeed data, sponsored jobs posted directly on Indeed are 90% more likely
to report a hire than non-sponsored jobs because you reach a bigger pool of quality candidates.
Join the 1.6 million companies that sponsor their jobs with Indeed, so you can spend more time
interviewing candidates who check all your boxes, less stress, less time, and more results
with Indeed sponsored jobs. And listeners to this show will get
a $75 sponsored job credit to help get your job the premium status it deserves. At Indeed.com
slash on, go to Indeed.com slash on right now and support our show by saying you heard about
Indeed on this podcast. Indeed.com slash on terms and conditions apply. Hiring, do it the right way
with Indeed. Support for the show comes from Serval AI. If you ever wonder why it takes long for your
IT department to get the big picture things done, just look at the sheer number of tax that they have
to do day in and day out. Your IT team weighs half their day on password resets, access requests,
and onboarding. And the more your business grows, the more requests pile up. But with Serval,
you can cut about 80% of your help desk tickets in order to free up your team for more meaningful work.
Unlike other legacy players, Serval was built for AI agents from the ground up.
Serval AI writes automation in seconds. Your IT team just describes,
what they need in plain English and Serval generates production-ready automations instantly.
Plus, Serval guarantees 50% help desk automation by week four of your free pilot.
But try it now because pilots are limited.
Serval powers the fastest growing companies in the world like perplexity, Merkhor, Verkata, and Clay.
Get your team out of the help desk and back to the work they enjoy.
Book your free pilot at serval.com slash Kara.
That's s-e-r-v-a-l.com slash Kara.
In the mid-1980s,
Nintendo basically single-handedly saved the gaming industry
with the Nintendo Entertainment System.
And then a toy company convinced Nintendo
that maybe the future of gaming wasn't controllers,
but was instead this weird arm-sleeve glove thing
that let you control video games with your fingers.
It was called the Power Glove, and it was awful.
This week on Version History,
a new chat show about old technology,
we traced the whole story of the Power Glove,
which failed spectacularly and also kind of invented VR.
That's version history wherever you get podcasts.
Let's wrap up talking about accountability or lack thereof really.
There may not be any accountability around this whole ordeal.
It boosted GROC's downloads,
and Secretary of Defense Pete Higgs-Seth announced
that Pentagon will be integrating GROC into its workflows.
And nothing happened with Apple and Google around not doing anything.
They didn't get shame.
They were not, they never responded.
So what's the takeaway from this whole episode?
It'll be just another grimy chapter of the story of X and Grock, like the Mecca Hitler
tweets or explicit Taylor Swift deep fakes going viral on X. Is there any possibility of real change?
I think that to the extent there's going to be real change is just going to come from outside the
United States. I do think that X's existence is just going to become more complicated and more
countries around the world. It's going to become more of a headache for Elon. So maybe we'll see.
some measure of accountability there. If you're looking for any sort of silver lining here, though,
I would just say that the fact that Grock has to stoop to these levels just show you how
unsuccessful it is compared to its rivals. The introduction of its porn chatbot last year was just
explicitly a way to try to boost engagement for what is essentially the fourth or fifth place
AI company. So to me, this is just a sign of increasing desperation over there to get something
working. And yeah, so that's what I'd say about that. And then dress it up like free speech,
essentially. Yeah. So, honey, a review by Wire magazine shows GROC is creating violent sexual images
and videos with minors in them that is much worse than what's being posted on X. We assume the
malicious actors are going to find ways to use AI to create illegal deepfakes and see Sam.
What sort of AI tools can you imagine would be useful in combating the distribution of these
images while respecting these privacy rates you talked about and minimizing false
positive as a cantar an innocent person's reputation, for example. Yeah, yeah, yeah. Well, first,
like everything in cyberspace, this is an adversarial system, right? Like spam, malware and ransomware.
It's a give and take, and it's going to be a back and forth. There are, I mean, if you go to open AIs,
chat GPT, or Google's Gemini, and you try to put in the same prompts that you put into
GROC, it won't work. So we have an existence proof that we can have reasonable measures in place.
And this is the way it works. You solve 80% of the problem relatively quickly, and then you start chipping
it away with a combination of red teaming and responsible reporting mechanisms. And we get to a steady
state the way we have with spam. It's a mostly manageable problem these days. But that takes effort.
And it takes years and years and years to do that. And it takes will and it takes leadership.
And my thing, if I can come back to the question that you ask Casey is, I think there's one
positive thing that has come out of this, which is the really loud outrage from the media about
what is happening. And I think article after article after article just really hammering. And I think
that's good to see that there is some consequence to this, number one. Number two is, I don't think
regulation is going to get us out of this, just because it takes so long. And it's so imperfect in the
way it's crafted. And of course, the lobbyists are going to beat the hell out of the regulation before
it sees the light of day. I think where this starts to get traction is with liability. So putting
aside how Elon Musk is threatening people. But if we start to,
suing these companies back to the dark ages, they will internalize that cost, and they will make
safer products. And so I'm looking to the courts for relief here, because I think that's the fastest
way to get relief. You're absolutely right. You have sue the bastards. That's what I said. But Renee,
we're talking about very dark, disturbing trends. And the underlying assumption behind your work is that
we can have a better internet. So paint a positive picture. Let's imagine a world 10 years from now
where there's a momentum towards creating a safer online world, something I've talked about a decade
to go, speaking of which, one of less hate and harassment. What does it look like? What would you
need to do? What are the pieces that you need to put in place? I mean, some of it is, as Asani
mentions, there is the courts, right? There's the FTC also, which I haven't heard anybody
mentioned, but I know. Oh, please feel free to. Yeah, well, Baron Soka and some other folks
are writing about consumer protection powers. I don't know if this FTC will do it. Again, because of the
unfortunate, you know, integration of Elon into the Republican Party and the political tensions there. But
but that's another potential option.
Brandon has an enormous crush on him.
It's a little bit untoward, frankly.
But one of the things that we saw...
I may create an image or something.
There's the...
You know, there's the exit versus voice phenomenon, right?
Users have voice.
They can say, we don't like this, we don't want it.
You should, I would hope, see more conservatives on the platform saying this isn't appropriate,
this isn't right, this isn't normal.
That's what I would have wanted to see.
instead of people falling in line behind the censorship,
the censorship bullshit.
But the other thing is exit.
You can leave.
There are other platforms.
You can go there, right?
Threads is a little weird, but it exists.
You know, blue sky is a little weird in some ways, but it exists.
There are alternatives, and users can vote with their feet, and at some point they're
going to have to.
And just to say, this isn't normal.
And that's the other option.
And that's where you just say, there are other places to go.
There are other places to be.
There's Instagram.
There's TikTok, I mean, which didn't ever actually get banned, right?
There's so many different options.
It's not that they're necessarily good, but there are alternatives.
And that user outcry in the United States in particular has done more to move platforms in various directions with their policies than any regulation because we haven't had any regulation here.
And that is, I think, the last lever really available to say that this isn't okay.
We don't want it.
It's not moral.
It's not ethical.
and we should be doing something different.
Yeah.
Yeah, I do my best by calling it a Nazi porn bar, as you know, all the time.
But it seems to have caught on a little bit.
So if we take a step back, this episode with Grok is just the tip of the iceberg when it comes to issues created by unregulated AI.
I'd like each of you to finish up by talking about what else is coming our way, what worries you most?
What is the thing you're thinking about next if you had to anticipate?
I mean, I think we have anticipated that we'll do the very worst thing given the choice.
But what worries you the most and what do you think is coming our way?
Casey, then Honey and then Renee.
To me, the worst aspect of Elon's, you know,
underinvestment in content moderation and basically just like rejection of it as a,
as something that anyone should do is that it has just lowered the bar for every other player in the industry.
Like meta is so much worse now than it was before Elon bought Twitter.
And it's just because they feel no pressure.
to do even half the stuff they used to because they look at what Elon gets away with,
and they think, well, you know, what does it matter if we're going to get 10% of our revenue
from literal scams this year, right? Or you look at Roblox and all the issues that they've had with
child predators, and they just look at it as, ah, that's just a compliance function. Let's go ahead
and do the bare minimum. So to me, that's the real issue here, is that when there is someone
who is so publicly able to get away with the worst things imaginable on their platform,
everybody else follows suit because it's cheaper.
That's a very good point. Is there something coming that worries you?
I mean, the thing that worries me is just that we're already in a world where people have trouble distinguishing fact from fiction.
The underlying models continue to get more powerful. There's a huge contingent, particularly on the left, that thinks that AI is fake and a bubble and is all going to collapse tomorrow when, you know, Open AI can't pay a bill.
And so I think a lot of us are just sort of sleepwalking into a future where the media environment becomes ever more challenging and dangerous.
And it doesn't seem like many people in power want to do much about that.
Good point.
Honey, then, Renee.
Sure.
I mean, a lot of things keep me up at night these days.
But here's, Bill Gates had this great line many years ago.
We said we tend to overestimate what tech will do in one year and underestimate what it will do in 10 years.
And that's been true for quite a while.
I think in the age of AI, that 10 years is probably closer to five years and maybe three years.
I think the world is going to look very different in three years.
And here's the thing I'm looking at, particularly for 2026, which is the, the
The age of agentic AI is upon us.
So we, everything we have talked about for the last 45 minutes or so has been humans in the loop,
right?
Humans are powering this in terms of creating the content and then sharing it and liking it.
But soon what we are going to see is fully deployed agentic AI, state sponsored,
cyber criminals, individuals will have agents doing the work for them.
Fully automatic, 24 hours a day, no sleep, no moral, no fear of arrest.
And that's going to be a very weird world that we are entering in.
And I think cybersecurity in general is going to explode with attacks from agents that are working 24-7 and adapting very, very quickly.
And we've already started to see attack from AI-written malware and ransomware.
And I think the world of agentic AI, more likely than not in the next 12 to 18 months, is upon us.
And I think that is going to change an awful lot.
So supersized this, are already difficult?
And huge problem. Okay. Renee? Yeah, I would concur on that. I think it's the prevalence of fraud. I think the financial industry is beginning to realize that systems it had built using voice as an authenticator, insurers have been using voice as an authenticator. A lot of systems that were a lot of money invested thinking that this was going to be secure, the realization it is not secure and efforts to try to find things that are. The privacy tradeoffs with things like biometrics that people are very uncomfortable with, particularly in the hands of government.
And yet at the same time, the recognition that there have to be mechanisms for credentialing in some way in part to mitigate fraud, right?
So what are those tradeoffs going to look like?
What does that, what does a privacy-protecting identity layer look like, again, recognizing that people don't want the government to control it?
And yet the private sector options are not great either.
So that's an area that I've been looking at a lot lately and wondering, you know, what on earth happens on that front.
And what worries you the most in this area?
You know, we've got another election cycle coming up right now. I think those questions around trust and legitimacy are going to play out very heavily in the race. You know, I've never been in an AI election doomer. I think there's always been that it's going to matter in some way. We should prepare. We should recognize it as a threat and yet not overreact. And so, again, yet another cycle coming up, gearing up for that. Because we have an incredible.
you know, real fractured trust environment here. And we've seen the technology advance a lot over the last two years.
And we're going to see what that looks like going into this next cycle. So that's...
All right. But we can be sure Lelam will do something repulsive. Anyway, I never underestimate his repulsiveness.
Anyway, goodbye. And thank you so much. Renee, honey and Casey, it's a really important topic.
I'm going to keep coming back to it all year around chatbots, around kids, around suicide, around everything.
if they're not going to do anything about it, as you said, the media certainly can.
I really appreciate it.
Thanks, Karen.
Thanks, Karen. Thanks.
Today's show was produced by Christian Castro-Roussel, Michelle Eloy, Megan Bernie, and Kaelin Lynch.
Nishat Kerwa is Vox Media's executive producer of podcasts. Special thanks to Catherine Barner.
Our engineers are Fernando Aruta and Rick Kwan, and our theme music is by Tracademics.
Go wherever you listen to podcast, search for On with Carri.
Swisher and hit follow. Thanks for listening to On with Carous Swisher from Podium Media, New York
magazine, the Vox Media Podcast Network, and us. We'll be back on Monday with more.
