Tech Won't Save Us - Elon Musk Profits Off Non-Consensual Deepfakes w/ Kat Tenbarge
Episode Date: January 29, 2026

Paris Marx is joined by Kat Tenbarge to discuss the explosion of abusive deepfakes on X, including how Elon Musk is profiting from the sexual exploitation of women and children while his followers use Grok to engage in gender-based violence. Kat Tenbarge is an independent journalist who writes Spitfire News.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Kyla Hewson.

Also mentioned in this episode:
Kat has been thoroughly covering the Grok and xAI deepfake and sexual abuse story
Paris wrote about why the Grok scandal shows we need more comprehensive tech regulation
The deepfake documentary mentioned was called Another Body
You can see the result of Megan Thee Stallion's defamation lawsuit here
Grok was blocked in Indonesia and Malaysia in response to its generation of explicit images
Transcript
Deepfakes are really kind of like a wide-scale war on women as I've come to see it.
And with looking at how Grok was being weaponized, this is a way for misogynists to be able to control and punish women who they otherwise would not have access to.
Hello and welcome to Tech Won't Save Us, made in partnership with The Nation magazine.
I'm your host, Paris Marx, and this week my guest is Kat Tenbarge.
Kat is an independent journalist who writes the Spitfire News newsletter.
You were probably just as enraged as I was to see what was going on on Elon Musk's X platform over the past month or so, where the safeguards were rolled back on the Grok chatbot to allow it to generate non-consensual, explicit deepfakes of virtually anybody, men, women, but also even children, and how shockingly bad the response was to what was going on.
You know, you essentially have a platform here that was already turned into this kind of right-wing space because of the way that Elon Musk has governed it and has changed the way that algorithmic amplification works and the user base of the platform overall. But then you have him wanting even more engagement and seeing that the perverts and the misogynists who are on the platform and who are big fans of his could use this image generator in ways that are truly shocking, and really get away with it, right? You know, he received this pressure from various governments who were not happy with what was happening, but very few of them actually moved forward with practical actions to protect their citizens who were being victimized in the moment; they just continued to allow this to happen and talked about, you know, maybe having repercussions or doing investigations or that sort of a thing. It was really terrible to see just how little our governments seemed to be able to act against Elon Musk and this platform he owns, even when it goes so far as not just to create non-consensual images of, you know, women and men, of adults, but even of children, which is something that is so clearly illegal.
And so naturally, I felt we had to do an episode on this, and I could think of no better guest to come on to talk about what happened and the implications of it than Kat.
She has been reporting on deepfakes, on AI generated explicit images for quite a long time now.
And of course, not only had the kind of background context that was necessary as this scandal broke out, but was obviously just as enraged about it as all the rest of us.
So with that said, I think we should just get into this conversation with Kat.
I hope you enjoy it and, you know, it's not too shocking for you.
If you do like this conversation, you can leave a five-star review on your podcast platform
of choice.
You can share the show on social media or with any friends or colleagues who you think
would learn from it.
And if you do want to support the work that goes into making Tech Won't Save Us every single week so we can keep having these critical, in-depth conversations, looking at the ways that Elon Musk and so many other of these tech billionaires are making the world a worse place every single day, you can join supporters like Pranchu from India, Shivam from Toronto, and Jordan in Denver, Colorado by going to patreon.com slash Tech Won't Save Us, where you can become a supporter as well. Thanks so much and enjoy this week's conversation. Kat, welcome back to Tech Won't Save Us. Thanks so much for having me back. Absolutely. Really excited to chat. Unfortunately,
Save Us. Thanks so much for having me back. Absolutely. Really excited to chat. Unfortunately,
it seems, you know, I have you on to talk about probably the worst possible topics that we can
talk about. That is where my coverage always goes.
Yeah. At least you're doing the hard work of looking into this for people and making sure they're informed. What I wanted to have you back on the show to talk about was what we have been seeing play out with Elon Musk's X platform and the Grok chatbot over the past number of weeks, where we have basically had the platform take these restrictions off of the chatbot to allow it, or make it much easier, to generate non-consensual explicit images, not just of women and adults, but of children as well. Obviously, there has been a whole firestorm around this. And so I want to dig into a ton of aspects of this with you,
but I think just generally, as you were seeing this story break in late December, early January,
what was your initial reaction to this? Were you surprised or did it just seem like a matter
of time before something like this happened? I definitely wasn't surprised, but it did remind me
sort of, as I've covered deepfakes over the past few years, there are like these cultural moments that
are just like these peak points when things are really starting to shift and you just see this
like massive influx of typically image-based sexual abuse like produced with AI.
And that's exactly what this was.
And the way that the pattern always has gone, at least that I've observed, is like there will be
this thing that seems so insignificant.
In this case, it was a viral tweet that basically said like, I looked at Groch's image
output and I saw that it was being used to undress pictures of women and girls. And that one post,
you know, got enough traction that other people went and started seeing that this was happening.
And then the outrage started to foment and grow and grow and grow. And unfortunately, as we
always see, alongside like the awareness that this is happening, the growth just accelerates and
skyrockets to an unbelievable degree. So now that the numbers are in over the past month,
we can see that this was, again, another major tipping point with just, like, a library that you can
hardly quantify or imagine or wrap your head around in terms of how many of these images were produced
and how many people were victimized. Absolutely. Like, I think there was a New York Times report
just today, as we're talking, that put it in the millions, right, of the number of images generated
just over the course of, like, a couple weeks at the peak period where this was really happening, right?
Exactly. Yeah. And I found it interesting what you said there about the tweet going viral, because it did really seem like the restrictions on the Grok chatbot were rolled back, and it did seem like it took, like, a few days to a week or so for people to really realize what was happening and to start, like, reckoning with what, you know, the X platform, what Elon Musk, what his team had really unleashed here, right?
100%. And it's like so unfortunate that you can see almost exactly how this happened because
the potential for this to happen has been there, and sort of this practice has been happening now for years. But what was really different about this moment, and with the X platform specifically, is that Elon Musk and the team behind xAI, with him as its, like, prominent, very public-facing face, they encouraged this cultural shift on the platform by rolling out all of these, like, sexy, flirty, spicy, NSFW functions. And Elon
himself obviously has a really misogynistic attitude toward women in general, toward
marginalized people.
Yeah, really putting it lightly even.
Yeah, exactly.
Exactly.
Like, looking at Elon Musk, there's no question why so many men on the platform who look
up to him and idolize him were inspired to create this type of content.
And so it was kind of happening quietly in the background as they rolled out these features.
And then, of course, in the new year, all it took was someone with a large following,
spotting it happening for everyone to realize this is now happening.
And then, of course, it inspires more and more and more of that behavior.
Which is, you know, not surprising, but also, like, disgusting and abhorrent at the same time.
Can you talk to us a bit about how this actually works?
Like, what were we actually seeing occur on this platform?
Yeah.
Yeah.
So the most common, sort of frequent phenomenon that was happening is people were responding to women's photographs. That could be a woman posting a
selfie on X. That could be a user pulling her pictures from some other place on the internet.
And they would post it on X and then reply and tag @Grok and say something like, put her in a bikini, or cover her face in whipped cream, or turn her around. And Grok would often respond and edit the image in that fashion. So the bikini pictures are really what captured the most attention because that was something that they could
get the bot to easily do.
What we typically see with user behavior in these cases is that trends will emerge.
Like specific prompts will create a successful image.
They'll result in the user getting what they wanted, which is to humiliate this woman.
And so then a bunch of other users will do the same thing.
And that's how like the bikini trend emerged.
And by the end of the first couple of days when this was prolific, you know, researchers found that more than 7,700 harmful images of this variety were being created every hour with Grok. And these ranged on a scale, because there were some things that Grok is still doing that don't fall under the range of sexually explicit territory. But then there also was a lot of material that was sexually explicit. I've had researchers send me examples of, like, nudity, of very realistic-looking genitalia. I've seen some pretty shocking stuff on the X platform publicly. And at the same time, what a lot of people don't know is that the standalone Grok app and Grok Imagine, which is like a website and an app, those were making even more sexually explicit and violent material. So this was all of Grok, in all of its various forms, but the most viral output was what was happening on X publicly.
Yeah, I think that's a really good point to make
and something that I want to come back to, though, right?
Because you have what's on the platform,
but then you have this much bigger picture
that seems to be occurring outside of X itself.
And so I want to return to that in just a little bit.
But in terms of, like, looking at what is happening on X
and what Grok is able to do on that specific platform,
obviously there were concerns about what was happening on the platform
well before the end of October and these changes.
Can you talk about what was occurring on there before, and to what degree any of this was actually possible before these changes were made? Yeah. So I started covering deepfakes pretty routinely in early 2023. And by that time,
X was already the premier social media platform for deep fakes to get traffic on. So from almost the
beginning of this practice becoming mainstream, X was just like a real environment for this to
flourish. And what I used to see a lot of was, I would search, for example, TikTokers' names on the platform. And I would just look at the chronological results of imagery associated with their names, the most recent posts containing images and words like Addison Rae or Charli D'Amelio. And I found deepfakes just running rampant. And this was obviously
happening in plain sight, but a lot of them weren't viral. It was like this kind of thriving,
almost underground community of deep fake proliferation. And it largely seemed to be targeting
public figures, which makes sense because when it comes to the community aspect of deep fakes
and the monetary aspect of deep fakes, it functions a lot like mainstream pornography just with
no consent. And so women who are already famous, there's a real market and appetite for that
content. But as the technology evolves alongside that demand, you start to see people going,
I want to make deep fakes of my cousin or I want to make deep fakes of some girl I'm in class with or just this random person.
And so I saw all of that happening on X over the past few years.
And as time went by, the incidents started getting more viral.
A lot of people will probably remember, at the beginning of 2024, the Taylor Swift deepfake incident, where there were those AI-generated pictures of her in a football field. And that was probably the most massive incident that X was embroiled in, but it was far from the only one. The Taylor Swift stuff,
X actually responded. It was too late. The images were already viral and are still proliferating today,
but, you know, X made it impossible to search Taylor Swift for a while. They don't do that for
the vast majority of people affected. And there was legislation inspired by the Taylor Swift deepfakes, you know, in Congress. Other times I've seen this happen, the response has been much more minimal. One of the most, like, shocking examples of this was in 2024, when there was a viral deepfake of the actor Jacob Elordi. And it was not hard to find. It was all over X. And this was Jacob Elordi's face on the body of someone who appeared to be, like, a porn performer. And these face swap videos are a really, really common, really popular phenomenon. What I found was the person who said it was his body in the video. He told me that he was 17 when he took the video. So that was essentially, like, CSAM, child sexual abuse material, being spread on X as this viral celebrity deepfake. And I remember reaching out to X at the time. They took down
some of the posts when I sent them to them. But otherwise, like, this has never been something that
X has really worked to contain. They are more responsive than they are proactive. And their response
is often so much less than where it needs to be if they want this to stop happening.
So the vibe that I always had, even before GROC, was X doesn't really care if their platform is a rampant space for this type of abusive material.
They're not really doing the work to ensure that it isn't.
And by the time it goes viral, their attempts at, sort of, like, solving the problem do really very, very little to stem those problems.
Yeah, I think really well said.
And there's a bunch of things in there in that answer that I want to kind of drill down on a little bit further, right?
I want to start with that notion of the safety team, right?
And what is actually going on at X itself?
We know that when Elon Musk came into Twitter,
he kind of gutted a lot of different parts of the company, significantly reduced the headcount at that platform,
before renaming it X and doing all this.
And there was this reporting in CNN
about kind of, like, meetings that occurred before these changes to Grok
and what it was able to do.
And it kind of suggested that there were even a couple of people on what remained of the safety team there who left before these things happened.
So what do we know internally about what is going on there and how this platform is actually
looking at, you know, the kind of harmful material that can spread on it?
Absolutely.
I mean, I think, as you said, the manpower to address these issues has just been decimated
ever since Elon Musk took over.
And that was very intentional and that was very specific.
Some of the boards he disbanded, and some of the employees who he let go,
they were specifically in these roles around things like child safety.
So he really took a sledgehammer to any of the infrastructure that could have been around to address this
or raise these questions or make sure that this didn't happen in the first place.
And then in addition to that, when we look at sort of the priorities of the X team as communicated through their public posts,
as communicated through the actions that we see taking place on the platform, safety hasn't just
been sort of like minimized as a concern.
We're really kind of seeing the opposite play out.
Like we are seeing people at the wheel of X who are embracing the chaos and driving it
in this direction.
And they're a little more subtle about it than getting out and saying like, we don't
care, it's a CSAM free for all.
Like they're not going to say that.
But what we do see them saying is like,
we love these depictions of women.
We want to see more of this type of stuff.
Like Elon Musk was responding to bikini edits with laughing emojis.
He frequently, like, daily, is posting these, like, sensual AI-generated images of young-looking women made with Grok Imagine.
So that's like the tone being set on the platform and the rules and the safety team are
following along with that vision.
Yeah, it really is disgusting, right?
Like, you have this man who is in his 50s, has all of these children and, you know, different women who he's had children with.
And, like, is regularly posting, as you were saying earlier, you know, these advertisements for the chatbot that can, like, have a not safe for work mode and stuff like that.
And then these AI generated images of women in, like, suggestive positions or even with, like, captions on it that, you know, is, I don't know, I find it kind of disturbing just to see, like,
what he's up to, and then to see, like, the reaction to this whole moment where he seems to
feel like it's a big joke, right?
100%. And it's, it's really interesting and kind of unique, the relationship that the
perpetrators of image-based sexual abuse on the platform have to Elon Musk. Like, I have found
deepfakes on every major social media platform. And I found tons of bad, real violating content on all
of these platforms. And to a degree, like, there's nothing you can do to ensure that nothing bad or
harmful is going to get posted on your platform. But one of the major differences is just the level
of sort of idolatry when it comes to Elon Musk. I was looking at the accounts of people creating
this material, of people prompting Grok to put her in a bikini. And some of them had Elon Musk tweets
as their banner. And some of them had, like, number one Elon Musk fan in their bio.
and things like that.
And you don't really see that dynamic with someone like Mark Zuckerberg
or one of the other tech CEOs.
Like, Elon has this cult of personality that he has created very intentionally.
And he has weaponized it.
And this is a consequence of that.
Yeah.
Even when you mentioned Mark Zuckerberg, like, sure, he's made this kind of like manly pivot
recently, but I can't see many people being like, yeah, my male icon, like, for working
out or whatnot is Mark Zuckerberg.
Like, you know.
Yeah.
Oh my God. I'm not surprised to hear you say that about these people and the way that they look up to Musk, but it still, like, it still turns my stomach to hear it. You've talked about the way that these people are approaching women, the way that they see women, you know, in a very misogynistic way. And they're getting that kind of passed down from on high, from the platform owner, from the richest man in the world, from this guy who has been held up by all these media institutions and powerful figures for decades and now is doing all of this with the power that he has accrued.
But there's another kind of dimension to this story, especially in the early days, and I don't know how much that was actually addressed later, but how Grok was also being used to create images of children.
And you've mentioned this with the child sexual abuse material, CSAM.
Can you talk about that dimension of this?
And whether that has been properly reined in, because it felt like the first week there was a lot of attention on this.
And then it felt like that really dropped off.
And it was more about how this tool was being used against women after that.
Absolutely. Yeah, I thought it was really interesting how the phenomenon kind of grabbed a lot of attention as framing Grok as, like, a CSAM-generating machine, which is not entirely unfair. I found, in sort of reviewing the output from Grok at various times throughout this controversy, it's really hard to distinguish sometimes, like, are you looking at a picture of a child or are you looking at a picture of a grown woman? And I think that a lot of perpetrators purposefully blur those lines, where it's like they oftentimes focus on very young-looking women. But absolutely, I mean, just, you know, a couple days ago, I was sent an account of this guy who was posting pictures that were absolutely schoolgirls. Like, they were school-aged children, and he was having Grok, like, turn them around. And this was after X, you know, put in new restrictions. Like,
this behavior is still happening all over the platform. And I think a lot of times violence against women and girls and violence against children is very interlinked, because it falls under a lot of the same sort of social patterns and impulses and biases that we recognize in this field of understanding
abuse. It makes me so mad, like to even hear you describe it, right? So I want to ask you about
the response to this, as you were saying, you know, when they started to rein things in. But
before we do that, I did want to dig a little bit more into the history and the consequences here, right?
So you talked about the history of the platform, what we have been seeing since 2023, but more
broadly with explicit deepfakes and non-consensual AI generated images. Obviously, this existed
before generative AI, but generative AI feels like it has really kind of fueled and given a lot of
acceleration to this practice. So can you talk to me about the evolution of this that we've seen
over the past number of years? Absolutely. Yeah, one of the things that I always try to communicate
through my work on AI and image-based sexual abuse and harassment is, like, the ideas and the ideologies and even a lot of the behaviors at play are not new whatsoever. So there's a legal scholar named Mary Anne Franks, who has told me in the past that, like,
before AI, one of the ways that people would make this face swap sexually explicit material
is they would take physical magazines and they would cut out women's pictures like celebrity
women and girls and they would physically paste them over like a pornographic magazine.
And then they would cut that out and mail it to the celebrity.
No way.
Yes, yes.
So this was, like, the pre-internet, well in advance of AI, version of deepfaking.
And at the time, the response from, you know, authority figures, from people in these women's lives was like, oh, it's not a big deal.
Like, it's not real.
Just ignore it.
Maybe take it as a compliment even.
And we have seen a lot of that sentiment kind of evolve alongside the proliferation of deepfake technology.
because it isn't really taken seriously.
And the vast majority of perpetrators are not held accountable in any way.
But the major distinction with deepfake technology, and with Grok specifically, is looking at sort of, like, the speed and scale and availability of what is happening.
So in the early days of deepfakes, which go all the way back to 2018 and even before, there were already communities on, like, Reddit making, like, Scarlett Johansson deepfakes. But at that point in time, it took a lot of sophistication, a lot of tech knowledge, and a lot of time and effort to make a realistic-looking deepfake. And even then, the deepfakes still weren't that realistic. Today, the technology has evolved so much that you don't need any technical knowledge whatsoever. All you need to be able to do is tag @Grok and get the bot to do it for you. So that obviously makes it much more accessible to anyone. It also amplifies the scale at which this is happening. So for example, Genevieve Oh, the researcher who got a lot of this data for journalists over the past few weeks, she found that Grok was producing, like, 25 times more material than the top five deepfake porn websites combined. So just, like, that scale is enormous. And we've seen
this before. Back when I started reporting on this in 2023, a lot of these deep fake websites
were already out there and had been for years.
But there were some cultural moments that year
where just like the rate of adoption skyrocketed.
And when these things skyrocketed,
they often don't come back down.
A lot of times, like once people have figured out how to do it
and sort of gotten a taste for it,
they will find other ways to go about doing this.
So even if someone can no longer use Grok
to do exactly what they want,
there are so many other apps out there,
Nudifying apps have been, you know, on the app stores for years now.
And even, like, Google and Apple have done a poor job of enforcing rules on apps that do this type of stuff.
I know that they even still run ads for those kinds of apps, like, you know, on Facebook and things like that, right?
I've seen reporting on that recently, which is so frustrating, right?
You know, it's another one of those things that, like, adds to the frustration around this whole issue.
You talked earlier, and I remember last time you were on the show as well, we were talking about this documentary. I believe it was about a college girl who had deepfakes made of her, you know, and this was, like, pre-generative AI, right, and she was kind of advocating for legislation in order to try to rein these things in. And of course, you mentioned
the Taylor Swift moment in early 2024 and how that prompted conversations around legislation
and what a legislative response should be. Have we seen anything tangible really come of those
efforts in the past? So it's interesting. When it comes to sort of legislation and enforcement around deepfakes, there has been a lot of progress over the past couple of years, and we've seen action taken at the federal level with the Take It Down Act,
which is very problematic for a whole host of reasons.
We've also seen, like, different states pass legislation around deep fakes.
And from an international perspective, like Australia, the EU,
other places have worked to pass measures around deep fakes.
The problem really comes in at the enforcement level.
And there's really only one example that I'm aware of where someone posted a deepfake on X and got in trouble for it.
And this is an interesting case.
Megan Thee Stallion was targeted with deepfakes back in, I believe, 2024.
And she actually sued a woman who had reshared some of these deep fakes and directed her audience to the deep fakes.
And she won that defamation lawsuit in recent months.
But what's key there is that Megan was, she knew who the perpetrator was.
She knew exactly who this woman was, and therefore she was able to sue her for defamation.
And she didn't invoke any of these newer statutes created around deepfakes.
She went for a more traditional, like, reputational harm legal argument.
And this situation is so unique because the vast majority of the proliferation of this content
is coming from anonymous users who could be based anywhere, who could be disguising where they're based out of.
And there's just not prosecutorial effort to go after any of these people.
The FBI and the DOJ, thanks to the Take It Down Act, they could be going after people doing this
on X right now.
They have yet to publicly announce any investigation into these cases.
So my suspicion is that most people who have been targeted over the course of this month
by Grok, a lot of them probably don't even know this is happening.
Those who do know it's happening don't have a lot of great options.
The options that they do have would be very costly, probably prohibitive for the vast majority of people.
And to pursue a case like this requires such an enormous toll.
You have to publicize the images.
You have to think about this for much longer.
Like it's so much energy, expenditure, time, money, all of these different things that you may end up suffering more from trying to get justice than if you just sort of let it go.
So that's the unfortunate reality of where we stand with kind of enforcement around these things.
Yeah, I feel like the extent that I have seen of people being held to account, at least like reading news stories and stuff like that, is more like high school kids or whatnot being kind of like disciplined within the school environment rather than like through the legal system for doing this to their peers and things.
Right.
Like I haven't seen very many actually like legal cases or something where someone actually feels the effects of what they've done, right?
100% yeah. And even in those small-scale community settings, like an elementary school,
the response varies so much depending on what the individual administrators decide to do.
There have been some really heartening cases where school administrators have taken it super seriously,
expelled perpetrators, like ensured that real consequences happen. But there have been just as many,
if not more, examples of school administrators blaming the victim. There was a case recently where a victim actually got expelled
after she reported the perpetrators,
and there was basically no investigation.
The school administrator just assumed that she was lying.
So we still see these types of patterns play out at every level.
And I think that in itself relates so strongly to gender-based violence in general,
whether it's with the use of AI or not.
That's so shocking to hear, but unfortunately, like, not surprising that some cases would be handled well and in other cases you have administrators who, like, really don't want to do anything positive here, right? You mentioned there, like, the effects of gender-based violence. And do we know, like,
what the effects of deepfakes of non-consensual images like this are on the people who, you know,
they're created of and kind of weaponized against? Absolutely. I mean, we've seen so many
different types of consequences, both at individual levels and at sort of systemic levels. So at the
individual level, there are different risk factors for different victims of this. There have been
victims, including very young victims, as well as victims from marginalized communities who have
committed suicide because of this. And sometimes that's because the victim is young and young people
can often be really manipulated into thinking, like, my life is over. But also in some communities
where religious modesty and cultural modesty are really prized, that can be a huge consequence
because even if the material is not quote unquote real,
it can have the exact same impact depending on sort of your community norms.
And also at the individual level,
there's a real psychological consequence at play here.
And it's something that applies to a lot of different forms of online harassment
where kind of the common knowledge or assumption is,
like there's that famous meme.
Like how is cyber bullying real?
Just close your eyes.
And, like, that kind of stuff is, like, it's funny, it's comedic, but the reality is so much darker, because the reality of what we know about online
harassment is that our brains perceive these things that happen to us online as physical threats.
So when you are being physically threatened online or when you see a deep fake of yourself,
you will have a physical reaction.
Your palms will shake.
You'll feel scared.
Your fight or flight response will kick in.
You may endure the same consequences of what we would think of as quote-unquote real-life trauma. It's still real, and your body is, like, perceiving it as happening to you.
So there are a lot of consequences that happen at that level.
And then at the more systemic level, I think what's really unfortunate is that through
reading, like, deepfake forums and through poring over sort of what the perpetrators talk about
and how they refer to what they're doing, a common thread that has emerged is similar to
other forms of patriarchal violence.
The sort of end goal here is to punish women at large as a group, make it harder to be a woman, and specifically to sort of push women out
of the public eye by, like, retaliating and making it so dangerous to exist in these online spaces.
And we can actually see that happening on X because so many women have left the platform.
I left the platform in response to, you know, gender-based violence on the app when Trump
got reelected, when Nick Fuentes was saying, like, your body, my choice.
That for me was, like, the breaking point of, I don't want to be here anymore. This environment is psychologically destructive to me as a woman. And a lot of people have left
social media because of this. I've seen multiple examples of, you know, high profile women leaving
social media because of the Grok trend, because they were victimized by it or because they're
scared of it. I've seen so many women say, like, I don't post pictures of myself online anymore.
I would never let my daughter post pictures of herself online. And that may seem like good safety advice in theory. What that really means is, like, women aren't living their lives fully or equally in these spaces that are so important
because these are democratic spaces.
These are places where we make our voices heard.
These are places where we build careers.
These are places where we share our perspectives and knowledge and insights with the world.
And so deepfakes are really kind of like a wide-scale war on women, as I've come to see it. And with looking at how Grok was being weaponized, one thing that really struck me was, like,
this is a way for misogynists to be able to control and punish women who they otherwise would not
have access to. Like, it's a way to sort of see that conservative ideology in action at scale.
Really well put. And like we can have our criticisms of social media and how these platforms work
and, you know, what these companies have done to like these spaces where we communicate with
one another. But at the same time, recognize that like there shouldn't be these kind of
structural barriers that are created that are enabled by people like Elon Musk that exclude
whole swathes of people from these spaces because they don't feel safe to communicate in them,
right?
Exactly.
It's like, it's the digital version of the political project to ultimately undermine women's autonomy, to make it harder to run for office as a woman. It's not a coincidence that Alexandria Ocasio-Cortez is one of the highest-profile victims of deepfaking,
and she's also one of the most public-facing women in politics.
Like, these things are aligned.
These sort of, like, goals and missions are aligned.
And I really do think that Elon Musk very intentionally and with full knowledge of what would happen,
like introduced these features to the platform.
Definitely.
I even think back to, like, the past few years, right?
And how for a while there, a lot of people were really kind of worried and concerned about being open
and, like, admitting the politics that Elon Musk held, right? Like, I remember even when he was, like, you know, very clearly supporting, like,
abortion bans in Texas, but, like, wouldn't say it explicitly. People were like, oh, you know,
you don't know. Maybe he doesn't, like, whatnot. But it was like, no, you can tell, like,
he's disowning his trans daughter. He very clearly sees women as, you know, kind of vessels for his
procreation. Like, this is not a man that should be given the benefit of the doubt. Yet, even to
this day, it feels like with his political positions, that still remains the case, right?
Totally, totally. And a lot of people have sort of defended all of this under the guise of
it's freedom of speech. It's a freedom of speech issue. And the U.S. government, like,
federally has kind of taken this position. And I've seen a lot of like commentators take this
position. And the reality is that it is a free speech issue. But the free speech that's being
suppressed is the free speech of the women and girls who are being affected by this, as well as
women and girls in general as a group because we're all threatened by the normalization and
allowance of this type of behavior. Yeah. Creating non-consensual deep fake images and AI-generated
images of people is not your free speech. You know, that's not something you have a right to.
So I want to ask you about the response to this as well, right? I want to start with the platform
itself, right? So it started to get all of this criticism from governments, from, you know, the public
generally, I'm sure, from a certain segment of users of the X platform as well. How did X and xAI respond as the kind of concern and the scandal around this was escalating? So the response from the companies was very delayed.
There were several days that went by before they really responded to anyone.
And meanwhile, the xAI press account responds to every request with an automated response
that just says, legacy media lies, which is a classic like Elon Musk move.
At least it's not like the poop emoji anymore, but like it's not any better.
Exactly, exactly.
And one thing that people picked up on in the early days of this that was so annoying
is there were some legacy media outlets that were asking Grok for comment and then printing it.
Oh, my God.
I was going to say, like, there were all of these headlines, right, that were like, Grok says whatever or whatever, as though, like, Grok itself was, like, a person, it was thinking, was, like, acting on its own. And it's like, no, there's, like, Elon Musk and all these little guys turning the dials
behind the scenes that is resulting in this, right?
It's not like the chatbot itself has any agency here.
Exactly.
There's like this huge tech literacy gap around conversational AI
where it's like even authority figures treat it like it is a conscious thing
that is capable of forming a response.
It is reflecting what you ask it.
Like when you prompted to do something,
it's just trying to please you by like regurgitating what you said back to you.
So that being said, like that was ridiculous to see.
And then, you know, X Safety releases a statement a few days afterwards. And they basically say, and Elon Musk says this too, anyone who posts anything illegal to the platform will be treated the same way regardless of whether it's AI or not. Which was kind of a true statement, in that when you post illegal material on X, there is no guarantee or likelihood that anything will happen to you either. Like, the platform has become known for videos depicting, like, extreme violence, extreme gore, things like bestiality, sexual abuse, and of course, the aforementioned deepfakes. So he wasn't lying
when he said that they weren't going to treat the content any differently. It's just that they don't
do anything about these problems. So that was the initial response. And then the backlash really
started to build. And we initially saw this in the form of other governments responding to what was
happening on X. Like, Malaysia and Indonesia were two of the first governments to actually temporarily ban the platform. In India also, like, a government minister there, you know, took action, launched an investigation. We saw the UK's independent regulatory body start to investigate;
the EU, France, California as its own entity, started to investigate X. So you saw all of this
growing international pressure, mainly coming from other Western governments and Southeast Asian
governments. And so alongside that, there's like the growing user backlash, more and more
people speaking out about this.
And finally, in really in response to that pressure,
and I think specifically that regulatory pressure,
like the fear of those fines,
the fear of the app being completely blocked
in places like the UK,
that's when, you know, last week, I believe, is when they responded. Well, first they restricted the ability to use Grok to edit images to just paying premium users, which didn't stop the problem at all, because those paying premium users are often the Musk fanboys who were driving a lot of this behavior.
Exactly. And I even saw, like, tweets like, you know, when people would do the @Grok and, you know, ask it to undress somebody, I don't know if it still does it, but there was a period of time where it would respond and say, like, you can only do this if you're a paying member now. Like, basically, like, an explicit upsell on your attempt to create non-consensual deepfakes. Like, ugh.
Yes, absolutely. And I, it is still doing that, I'm pretty sure. And so, yeah, they're basically
just at that point monetizing the image-based sexual abuse.
So that was the first response.
And then after more sustained pressure, that's when they announced that they were rolling back
the ability for Grok to put real photos of people in revealing clothes.
That was kind of the way that they phrased it.
And with these efforts put into place, we did see a decrease.
Because at its peak, when anyone could do this to anyone at any time, there were over 7,700 images per hour being created. After the changes, we were still seeing a ton of images. Like, we were still seeing thousands of images being created per hour, but there was a decrease. And then after that, they rolled the restrictions back seemingly entirely, but I'll get into that in a second. When they restricted it even more, we again saw another decrease.
But the reality of the situation was, A, they left a lot of the most harmful material up.
So, like, you can still go back and look at these pictures that were created in the first place.
and people very quickly found ways to get around these restrictions.
So, like, the researcher who I've been in communication with, she was immediately able to find numerous examples of really sexually explicit, like, bad stuff still being created, really realistic-looking, like, genitalia, bikinis. She still was able to find those, and stuff like the little girls being told to turn around, things like that.
That was all still happening.
And something that I personally observed as a shift in sort of this culture
is an emerging phenomenon that is now really, really rampant involving, like, commanding Grok to take off religious items of clothing, like the hijab, or put on religious items of clothing. So now a lot of the prompts that I'm seeing are, Grok, put her in a sari, like the traditional Indian cultural clothing, or, like, Grok, put her in a nun's habit, or, like, Grok, like, cover her up. And it doesn't fall under the same legal
definitions of what we think of as sexually explicit because it's literally the opposite, but that
impulse and that ideology is the same. So the boundaries have just shifted. And there's this gray area
around what they're doing because a lot of what they're doing is still pretty nakedly abusive
and harmful, especially, like, @Grok, take off her hijab and show her with curly hair. I see that
all the time. And like that is really harassing, but they know that they can get around
sort of the restrictions that have been put in place.
And the root of this behavior is obviously the desire to control women to take away their autonomy.
And, you know, it's a very sexist sort of ideology at play.
Yeah.
And I feel like that example just like adds to what you're saying as well, right?
Where it's like, okay, it's not even just about taking the clothes off of all these women.
It's like even just changing them, putting things on them, you know. Again, taking a hijab off when that's part of, like, your cultural or religious identity can still be, like, very invasive
and very abusive and things like that.
It just makes me so mad to hear about all these things.
I want to ask you as well about the media response, right?
Because we talked about how some of these headlines were reporting on Grok talking about this in a certain way. But I also felt that as X was announcing these changes to the platform and how it was going to work, we very quickly got these headlines like, oh, the platform is reining this in, you know, they're taking action to address this.
And then, as you're saying, when researchers or journalists, you know, certain journalists started digging into what was actually happening, they quickly found that it doesn't sound exactly like what X suggested. These things can still happen or they're just monetizing it. I wonder what you make of, I guess, the way that the media reported on this and how it wasn't always completely accurate or reflective of what was actually happening, in a case where it was important to give the public accurate information, right?
100%. And I think that over the past few decades, we've seen some advances in how the media often
covers gender-based violence. One thing that I compare it to, just to show how stark that difference can be,
is when what we oftentimes colloquially call revenge porn started to become a really viral phenomenon in the late 2000s and the 2010s, a lot of the media framing was that it was the victim's fault for taking these photos in the first place. And, like, if you think back to how the media
treated like prominent celebrity women who experienced that, tonally, it was extremely,
like, victim-blamey. And today, things have gotten a little bit better. But as you're describing
here, there is still way too much of a reliance on sort of like the authority figures in the
equation. And we see that with media coverage in lots of different respects. You see it with
how the media gives the police so much authority when they cover crime. You know, you
You can see it in how the media gives like the federal government so much authority when they cover politics.
And you can see it in how media companies give tech companies the benefit of the doubt when we cover these patterns of abuses.
And the reality is just because the tech company says that it is fixing the problem, as we should all know by now, that does not mean they are fixing the problem.
And in fact, a lot of these tech companies, when AI was starting to evolve into what we think of it as now,
when that was happening a few years back, a lot of these companies signed on to pledges around non-consensual intimate imagery where they pledged and promised that they would do everything in their power to stop this from becoming the crisis that it is today.
And all the media outlets wrote that up feverishly. Like, all these companies, they're doing a great job, everyone. They're doing a great job. They are not doing a great job. And yet still, we return to this idea that when Elon Musk and his company say they are doing something, that means it can be reported on as fact. And I wish that, as an industry, we would stop taking these guys at their word when they've proven over and over and over again that we're foolish to do so.
Yeah, it even makes me think of, like, whenever Elon Musk says something that sounds any bit like sci-fi, that gets written up as, like, headlines. Elon Musk says, this is coming, whatever. But when he posts, like, white nationalist tweets and whatnot, that doesn't get the same degree of coverage and attention as the nice, like, science-fiction-is-really-coming stuff. But as you say that too, like, it even makes me think of how,
early after the Twitter acquisition,
Elon Musk was talking a lot about taking on child sexual abuse material.
Like, I can't remember the exact language that he was using in that moment.
Even as researchers were saying,
this is proliferating on the platform.
And now, of course, we've seen his AI chatbot actually create child sexual abuse material images.
It just, like, boggles the mind that there's not more accountability on this man for what he's doing, right?
100%.
And it is so reminiscent of how child safety and protecting children
and stopping child trafficking and child abuse and all these other things,
it is so frequently, like, a pillar of conservative rhetoric around, like, this is why we need ICE, this is why we're doing all of these things.
But the reality is it's a total displacement of responsibility and it's essentially just like a bait and switch.
Because the reality is that these authorities are often the ones perpetuating the problem
and letting it happen.
They just use that rhetoric to gain more popularity and to get themselves into these positions of power.
Definitely.
And so I wanted to ask you about the government response, right? You mentioned in particular seeing the South Asian and Southeast Asian governments moving really quickly to try to rein this in or even to temporarily ban the platform. We heard a lot of kind of talk from European governments, but at least at this point, as we talk, it doesn't seem to have amounted to much in terms of like enforcement or anything. So I wonder what you make of how they responded to this. And what you think we really should be seeing as governments see this happening, as we see,
this AI-generated kind of deep-fake crisis continually, seemingly getting worse, and it doesn't
feel like they're doing enough? Like, what do you think they should be doing here? And what are we actually
seeing them doing? Yeah, it's really, it's a really interesting distinction because it's not
bad that these governments, you know, made these threatening statements, that they rushed to
investigate, that they called this out as wrong. In theory, and even in practice to some extent,
that was a good thing. And I do think that that sort of sustained international
pressure is a huge reason why X did anything at all. If it weren't for those countries,
I imagine that Grok would still be doing exactly what Grok was doing a month ago. But I think when
you look closer at sort of the underlying reasoning behind a lot of these government responses
and also sort of the structure of a lot of legislation here in the U.S. too, one of the issues
is that the focus is so frequently on the obscenity component. And the focus here oftentimes is like
well, this is bad because it's sexually explicit or this is bad because it's pornographic.
And so therefore, it becomes a very narrow definition of what exactly is bad and what exactly
these governments care about stopping. And so that is how we wind up in the situation where we are
now where they have removed or at least limited the ability to put someone in a bikini or to create
public graphic nudity. But meanwhile, behind closed doors, if you have the Grok app, you can still do
all of that stuff and worse. And even publicly, we've just seen it shift to, you know, put her in a hijab
or take her hijab off. And because that doesn't fall under the boundaries of what's considered
sexually explicit or what's considered pornographic or obscene, you're not going to see the same
government response. And so, you know, a lot of the scholars and advocates who I've spoken to about
this have argued that we really need to sort of change how we think about these issues. The problem
isn't that all nudity is bad. And in fact, like, that line of thinking will often carry into
censorship and often carry into, like, threats to freedom of expression and, you know,
censorship and discrimination against sex workers and things like that. The reality is that the problem
is the desire to control women. And that is something that our sort of, like, legal field
does not have, like, the language for yet, or does not have, like, the desire to put in our existing
framework of right and wrong. And so I think that will require a much bigger cultural shift to understand
that the root of the problem is not just like the nudifying aspect of this, but that it's like
the control aspect of this. And it's the misogynistic aspect of this. And that is really like
the root of this and associated behaviors, not just like the sexually explicit function. But because
that is where so much of, like, the government focus is, you see things like the Take It Down Act, which is not working as intended. The Take It Down Act was supposed to ensure that something like this wouldn't happen
and that if it did happen, you would see this prosecution at the highest level.
We're not getting that prosecution.
What a lot of people fear is going to happen with the Take It Down Act
is that in just a few months in May of this year,
the second half of that legislation is going to come into effect.
Platforms like X and also all the other major social media platforms
are supposed to create these flag-and-removal systems, where you or anyone can say, like, here's a deepfake of me, take it down, and they have 48 hours
to do that. But the problem with these incoming systems is that they could so easily be weaponized
and used for censorship and used for anything. Because platforms don't actually have time to look at
every single thing that's going to get flagged. A lot of them, as we already know, are going to operate
with an overabundance of caution. And when Trump signed the Take It Down Act into law, he even said,
he pointed out Linda Yaccarino, who was then the CEO of X and was in the audience. And he said, like,
Linda and X, they're doing such a great job. What? Right. Okay. And then the other thing he said was like,
this is so great for me because no one's treated worse than I am on the internet. So it's very
clear how this could become a tool of federal censorship versus what it was supposed to do,
which is help victims. But this White House administration is obviously not focused on anything to help victims of sexual violence.
Well, they're focused on the biggest victim of all, right?
Donald Trump. He's the victim.
Exactly. Exactly.
Oh, my God.
As you were describing that, though, like, with the cultural differences and, like, the issues with nudity generally or even just sexual content, it immediately brought me back to, like, the early days of the App Store and how Steve Jobs
was like, we can't have apps that have, like, nudity or anything like that.
Like, we're not going to allow it on the app store.
And like the Europeans were like, but we don't have a problem with that.
Like, why are you sticking your kind of cultural norms onto us and like the rest of the world?
Because this makes no sense, right?
So you can totally see how those views definitely shape legislation, especially in the United States, and then how that can, like, filter out to other places as it's, like, normalized in that market, like, the major market of the internet and whatnot. And then it's adopted elsewhere.
I feel like one of the points that angered me or that really stood out to me is like, listen, I guess people can debate about when this is being done to adults. And I think both of, you know, you and I would be on the side of, like, this is not acceptable when it's done to adults, because it takes away power, it takes away agency. There should be accountability for something like this. But there are, like, clear laws about this being done to children and about this kind of material being created of children. And I found it really shocking that there was not a stronger response, at least on that side of things, and how quickly that seemed to fade from the discussion. And I wonder what you make of that.
Yes, you're absolutely right. And I've even interviewed a legal scholar who made the argument
that this has been illegal since the 90s, because back in the late 90s, there were laws
passed around how children could be depicted in CGI and sort of how that applied to CSAM
and sort of like fake but realistic looking content. So this has been illegal and prosecutable for
a very, very, very long time. But the reality is that it rarely is taken seriously and followed through.
And in sort of my experience reporting on gender-based violence at large, that doesn't surprise me
because I've seen this time and time again where like it falls to the FBI, it falls to local
police, it falls to people in these varying jurisdictions, and it just doesn't get done. Like,
the cases just get dropped. Things do not get followed up on. I think what's even more concerning here is sort of this idea that, because the problem is so pervasive,
because there are a million of these images out there,
like we're not even going to try.
We're not even going to instill the pretense
that we're going to go after this type of behavior,
whether it's illegal or not.
And I think that is really discouraging for anyone sort of in this field,
but it also sort of speaks to the way that rhetoric around caring about children
is so easily weaponized,
even when the people behind that rhetoric have no intention of following it through. And right now, one of the major conversations happening is around
children and online safety. And we have so much legislative momentum in the U.S., but also all around the
world, saying, like, we got to take kids off the internet, like kids shouldn't be allowed on the
internet. That's how we're going to keep kids safe. When the reality is that as adults,
we can't keep kids safe on the internet. Like, even if kids aren't allowed to use X, what we have
here as a problem is that adults are posting their pictures and like there's nobody doing anything
about that. So I think that a lot of times when we look at the discourse politically and culturally
and otherwise around children and safety on the internet, it's like we kind of have our targets
all wrong. It's not that the kids are posing a danger to themselves necessarily. It's more so that
as adults, we have yet to figure out a way to hold each other accountable for hurting children.
And that's going to be a problem with AI.
It's already such a pervasive problem in every respect.
I think that this whole scandal is such a good example, right? Okay, listen, you can have a debate around raising an age limit for social media or whatnot.
But it shows very clearly that the problem isn't the age that you go on to social media or these platforms,
but the broader way that we have allowed these companies to create these platforms and govern these platforms.
And it feels like we need much more of like a rulebook or like expectations and standards on how those platforms should operate to make sure that we reduce the potential for harm for everybody, adult or child alike, rather than being like, okay, you're not a certain age so you can't use it.
But everyone else can still like be abused or have mental health impacts or whatever it is we want to talk about because we're not going to pay attention to how the platform operates in any other context, right?
100%. And to me, this has so many parallels to the Epstein story as well, which was a case involving a lot of victims who were underage,
as well as a lot of victims who were adults. And now that the child victims are adults and still
continuing to advocate for themselves, they're still not being taken seriously. Like,
you see how this is a problem that is not just confined to technology, but rather is, like, a much further-reaching, pervasive cultural attitude. And the lack of follow-up with Grok is then
reflected in all of these other prominent cases. A really great point. I have a couple final questions
for you before we wrap up. The first is that, okay, we've been talking a lot about Grok, right? We've been talking a lot about what has been happening on the X platform, or even what you can do with Grok, you know, outside of X. And obviously what happens on X is important because of the virality
component, right? Because it can really take off. A lot of people can see it. But we know that there is
this kind of much bigger network of generative AI tools, open-source models that are used for different things and what have you, that are also used to victimize people, to victimize women in particular, but also children, you know, because of the lack of guardrails here. Do you have any idea of what we can actually do to try to tackle this much
bigger issue just beyond, you know, this one social media platform and the chat bot that's attached
to it? It's such a great question. And I think that a lot of times I'm more scared of what we're
not seeing than what we are actually seeing because it's so much harder to even get a
grasp of what's happening behind the scenes with these essentially, like, closed-door platforms. That applies to the Grok app, but it also applies to all of these other technologies,
all these other applications.
And so I think within sort of the evolution of AI, in order to create products that are
actually safe for consumer use and don't perpetuate these problems, the act of like resisting
and opposing sexual abuse has to be an active consideration throughout the entire process, because it starts with sort of the programming and the data sets on which these AI technologies are trained. Researchers at, I believe, Stanford found that in popular
data sets, there were authentic images of CSAM that AI tools were being trained on. So it is
baked in at that level. And I think a lot of people have sort of figured this out through casual
use, like AI depictions of women, they tend to be white, they tend to be young, they tend to be
conventionally attractive, and they tend to be sexual or sensual. And so that is a very clear
bias in a lot of these AI technologies, and that can't be solved on just the user end. It has to
start on the end of the people who are creating and producing these tools. And so I think, like,
that's something that has to be addressed in this industry in order for any of this to get
better. And in my experience, reporting on sort of AI consumer products and how they've been
adopted and what this looks like, I find that the industry is pretty rotten when it comes
to this issue. Like, you look at some of the earliest iterations of this technology and the earliest
applications of this technology. And it's like sexual abuse is so baked in. You go back to 2018
and those videos of Scarlett Johansson. And it's like before ChatGPT, before OpenAI, before any of this
stuff reached the mainstream on this massive scale, this was already what it was being used for.
And that's a story that's been echoed repeatedly throughout the tech industry's history.
We've seen it time and time and time again.
So right now, all of the incentives are the opposite of what they need to be.
We need like a complete reversal and to kind of like, I would argue, like destroy and rebuild
in order to actually get better.
Listen, I'll completely agree with you on that one.
I love that idea.
But like even as you're saying that, like, it brought to mind, I think it was the end of last year, when Sam Altman was basically open about the fact that they are going to push more of, like, the app as a companion and the kind of AI girlfriend stuff, because they recognize that that application got a lot more engagement, regardless of the broader consequences that might come with it. As we end off this conversation, I know that you wrote a piece recently about getting off of X and not using that platform anymore. So I wanted to end, you know, with everything that we've been seeing with the deepfake scandal, but also everything before: do you think people should
be getting off this platform and abandoning Elon Musk's social media? I would love if more people
would take that leap. I think at this stage, I mean, it's something that I think about all the time.
And it was a very personal decision for me where I just like reached a moment where physically,
individually, I was like, oh my God, I can't take it anymore. Because I didn't, I honestly didn't
expect to leave X. I thought of myself as, like, a long-hauler who was like,
you're not going to take my platform away from me, Elon Musk.
I remember when he took it over.
I was like, I'll be here until it dies.
And then that changed.
Yeah.
I mean, I think at this stage,
it's just like the whole platform has been essentially murdered
and then reincarnated as like the worst possible version of itself.
So I know for me as an individual, like there's no place for me on there.
And I still spend a lot of time on the platform as a reporter. So in some ways that almost feels like cheating because I'm like, oh, I can still see all, like,
the good jokes, but I'm not actually using it or at least I'm not like contributing to it.
But I can see for other people why this is a very fraught decision. I have heard a lot of stories
from a lot of people about like why they keep using X. And I can really relate to that and understand
that. Like one reason that I've heard is because there are people who have built support
networks on X, who get support for like just everyday necessities. They have a community there that
helps them get through affording life. Or perhaps like that is where their business is located.
That's something that I hear from content creators a lot is like they feel like they can't leave
because it's still such an important tool and platform for them. And ideally they wouldn't have to
leave. Like this is a platform that has been co-opted and really like made worse for them in a lot of
ways. And that's why also, even if I'm not still on X, I find it's so important to continue
reporting on X because there are people who really kind of feel like they're stuck there
and that their experience is being degraded and it's really being turned against them.
Like, again, going back to content creators and sex workers in particular, it's like
they are sort of forced to live within the confines of this platform and the platform is becoming
more and more violent toward them. So, like, they deserve better. And so I would encourage more people to get off X, because I think at the
end of the day, like, the complaints about Bluesky are like, oh, there's not as much stuff over there, like, where am I going to get my discourse about heated rivalry? And I'm like, well, if you posted it over there, then it would be over there. And yeah, so I would encourage more people to try out
the alternatives because X is unique in its current positioning as like an enemy force. Like, it is the only
social media platform that has been directly conjoined with Trump's administration, where, like,
it is an agent of the federal government. So I'm like, this makes it, like, uniquely bad,
but I know why people struggle with it. Totally. As much as I, like, recognize and accept the
reasons that you gave for why people still might be on there, when I occasionally pop back on to see, like, what some politician has posted or something like that, and see the number of people who are still engaged on it, I have a harder and harder time kind of understanding why you would still be over there. It's like, come on. At some point, and I felt like especially seeing this latest scandal, it's like, how can you still be on, like, the child-porn-generating platform?
Totally.
And one thing that I really wish would happen is I wish that these politicians would get off X.
What's so wild about that is they're not even getting any engagement.
For one of my Grok stories, I reached out to all of the representatives who co-sponsored the Take It Down Act and the Kids Online Safety Act and some of these other bills,
and they are all posting on X every single day. Some of them are getting zero likes. I'm like,
why are you there? Just get off. It serves no value to you. It's bizarre to me. So I do think that
a lot of people in positions of power, like have no excuse to be on X anymore. Totally. And government
accounts and stuff like that too, right? You see, like, next to no engagement on local government and even some, like, state or provincial government accounts.
It's like, okay, this needs to change.
This needs to be like the moment where you all start to evacuate from X.
Kat, it's always great to get your insights.
This was so informative.
Thank you so much for taking the time.
And of course, I'd highly recommend people go check out Spitfire
where you're constantly writing really informative stories about these issues and so much more.
Thanks so much.
Thanks for having me.
Kat Tenbarge is an independent journalist who writes the Spitfire News newsletter.
Tech Won't Save Us is made in partnership with the Nation magazine,
and is hosted by me, Paris Marx.
Production is by Kyla Hewson.
Tech Won't Save Us relies on the support of listeners like you
to keep providing critical perspectives on the tech industry.
You can join hundreds of other supporters
by going to patreon.com slash Tech Won't Save Us
and making a pledge of your own.
Thanks for listening and make sure to come back next week.
