Tech Won't Save Us - Deepfake Abuse is a Crisis w/ Kat Tenbarge
Episode Date: April 4, 2024. Paris Marx is joined by Kat Tenbarge to discuss the proliferation of AI-generated, non-consensual sexual images, their impact on the victims, and the potential fallout for tech companies who helped make it all possible. Note there is some discussion of self-harm and suicide in this episode. Kat Tenbarge is a tech and culture writer at NBC News. Tech Won’t Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Eric Wickham. Transcripts are by Brigitte Pawliw-Fry.
Also mentioned in this episode: Kat has reported extensively on this issue, including stories about fake nude images of underage celebrities topping search engine results, nonconsensual deepfake porn showing up on Google and Bing too, Visa and Mastercard being used to fund the deepfake economy, and why plans for watermarking aren’t enough. Another Body is a documentary that looks at the scale of the problem of non-consensual deepfake explicit images. Microsoft’s Designer AI tool was used to create AI porn of Taylor Swift. Middle and high schools in Seattle, Miami, and Beverly Hills are among those already facing the consequences of AI-generated and deepfake nude images. In 2014, Jennifer Lawrence called the iCloud photo hack a “sex crime.”
Transcript
this is your fault. You made this technology. You did not think about this. Or if you did,
you did not create guardrails around these obvious problems. And now people are suffering as a result. Hello and welcome to Tech Won't Save Us, made in partnership with The Nation magazine.
I'm your host, Paris Marx, and before we get into this week's episode,
I just wanted to talk about something pretty exciting with the podcast.
This month, we celebrate our fourth birthday.
It's hard to imagine saying that about a show that I started in the early
days of the pandemic when we were all in lockdown. And I figured it was probably as good a time as
any to start a podcast. And now four years later, we're here. I've done over 200 episodes of the
show, digging into a whole range of critical topics on the tech industry with such fantastic
guests over those four years. We have more than 90 show
transcripts on our website for people to explore, to go back over things, and just if they prefer
to read rather than listen to these interviews. We've done live streams like our usual end of year
live streams where we go over what happened in the past year in tech. And of course,
the recent live stream that we did on Dune, and we want to start doing some more of those. I think you all have fun with our annual
worst person in tech contest that, of course, we'll be doing at the end of the year again this year.
And in October of last year, we did an in-depth series called Elon Musk Unmasked, digging into
who this man is, where he came from, and how he built this mythology of himself that is hopefully
rapidly falling apart as he takes this turn to the extreme right. But that doesn't mean that
there aren't really serious dangers that come with the power that he has amassed. Along with the many
listeners who enjoy this show, who share it with their friends, media organizations have also
recognized the work that Tech Won't Save Us is doing. Last year, the New York Times recommended Tech Won't Save Us for people wanting to know more about AI and said, quote,
for anyone alarmed by all of the widespread predictions about AI swallowing whole entire
job sectors, the show's measured coverage might prove reassuring. Gizmodo said, quote,
Tech Won't Save Us weeds through the crap and snake oil of the industry and discusses the human
consequences of the technology.
And Mashable said, quote, a healthy counterdose to the nauseating tech utopia idealism that usually surrounds Silicon Valley and enthusiast press coverage.
People recognize what we're doing here with Tech Won't Save Us by spreading critical perspectives on technology and why it's so important. In the coming months, I'll be looking to explore even more issues that we haven't gone in depth on before, like Starlink, Neuralink, fast fashion, geoengineering, and
previous waves of tech criticism. And that's all because of the support of listeners like you.
Thank you to all of those who already support the show. And as we celebrate the show's fourth
birthday, I'm asking those of you who don't already to consider going to patreon.com slash
tech won't save us and supporting the show for as little as five dollars a month so we can keep doing this work and to help us tackle a new project. If we get 200 new supporters this month, we're going to do a new series like the one we did with Elon Musk last year, but this time we'll be tackling the AI hype of the past year by digging into the false promises of tech titans like Sam Altman,
the environmental consequences of these tools, including everything from the massive energy
demands and water use, and the growing backlash to hyperscale data centers built by companies like
Microsoft, Amazon, and Google around the world. And we'll ask the key question, do we really need
this much computation? With the AI hype in full swing and the growing drawbacks of
the Silicon Valley model becoming all too apparent, it feels like the kind of series we need right now.
And to help us make it, you can go to patreon.com slash tech won't save us and become a supporter
this month to help us reach our goal. So thanks so much for your support. Now let's turn to this
week's episode. My guest is Kat Tenbarge. Kat is a tech and culture reporter at NBC News.
Now, I'm sure that you've seen all of the stories circulating recently about the deep
fake nude images and AI-generated explicit images that have been produced of celebrities,
women celebrities in particular.
Taylor Swift, of course, comes to mind, but many others.
And the conversation that this is getting people to start having about one of the broader impacts of these generative AI tools that tech companies have been rolling
out over the past year or so. In this episode, we dig deep on what is actually happening there,
the problems with them, and we don't just focus on celebrities, even though those are the ones that
really start these conversations and get people to pay attention to it. But this is actually having
very serious effects for the
non-celebrities, the regular people in the world as well. In particular, these tools are being used
to generate explicit images of middle schoolers and high schoolers in the United States, but also in many
other parts of the world, and having serious consequences for the victims of those things
as those images circulate. I think it's positive that we're starting to have conversations about this and what we're
going to try to do about it. But it seems like too little too late at this point when so many
people have already been harmed. And so this is a conversation that I've been meaning to have for a
while. And I'm very happy that Kat was willing to come on the show to dig into it with us and to
give us such great insights into the serious problems
that these generative AI tools are creating as they're allowing people to generate these explicit
images of people in their lives and the serious consequences that come of that. One more thing to
note, given the subject matter that we're discussing, we also get into some pretty heavy
topics like suicide and self-harm. I know some people prefer to be aware of that ahead of
time. So if you enjoy this conversation, make sure to help us hit our goal this month of getting 200
new supporters. And you can join people like Nina from Essex, Antonin from the Czech Republic,
JK from Connecticut, Danielle from Victoria, BC, Fabian from PAO, Matthias from Switzerland,
and Jim in Seattle by going to patreon.com slash tech won't save us where you can support the show for as little as $5 a month and help us hit our goal to make that
new series digging into the AI hype and the backlash to data centers. Thanks so much and
enjoy this week's conversation. Kat, welcome to Tech Won't Save Us.
Thank you so much for having me. It's great to be here.
Absolutely. I've been really looking forward to speaking with you. I've been following your
reporting and coverage for NBC News for a long time. And this is one of the topics that you have been writing about a lot recently, because it's obviously been in the public conversation, but it's also an issue that is not getting the level of attention that it deserves, given the widespread harm
that it is causing to a wide and growing number of people, and particularly women,
but beyond that as well, right? And so I wanted to start with probably the moment or the event
that many people will be most familiar with. And this is when in January, a number of explicit AI
generated images began spreading of Taylor Swift on X in
particular, but then on some other platforms as well. Can you talk to us a bit about what
happened there and what the significance of that kind of moment was?
Yes. So for, I would say about the past year, roughly, I've been seeing more and more kind of incidents like this on X as it's now called.
And this was one of the biggest incidents probably in the entire deepfake space so far.
And the virality of the moment really hinged on it being Taylor Swift, who was being victimized
in these images. But to take a step back, what happened? Basically, there was an account that
had a modest following. And the way that it had gained followers and gone viral was by posting
sometimes things related to sports, sometimes things related to music, just pop culture
tweets that were intended to go viral. And a lot of these accounts can go viral pretty easily by sexualizing women in the
public eye. So sometimes they're able to do this in a more innocuous way just by reposting an
Instagram photo that's kind of sexy or commenting about various women's appearances. But in this
case, what they did was they actually posted an image of Taylor Swift. It was an artificially generated image.
So the entire image was fake.
A lot of times when you see them, it's like a real photo that's stitched into something else, or it's like an edited photo that's edited with AI.
But in this case, it almost looked more like a photorealistic drawing.
And if you zoomed in, you could kind of tell like this isn't a real image.
But what it depicted was someone who is very obviously Taylor Swift in a football stadium,
being, you know, sexually harassed, sexually assaulted, even by various men in this football
stadium. So it was this fantasy scene that played on a bunch of different elements. The biggest
element was it being non-consensual. Taylor Swift, obviously, not only did she not consent to this image being created
or distributed, but the scene that's being depicted hinges on this idea of non-consensual
sexual assault. And the other aspect here is that early this year, Taylor Swift was constantly in
the news for being at her
partner's football games because he's one of the biggest people in the NFL. And so it had become a
cultural phenomenon already, this sort of sexist portrayal of Taylor Swift, like she's taking the
attention away or like, why is she there? Or like, why are women caring about football? And so this sort of like deep fake image
capitalized on all of this. And I think that's why it really took off. And by the time it had
been taken down, it had been viewed over 40 million times. So this reached the mainstream.
It was showing up on almost everybody's timeline on X, and then the news started covering it,
and it just blew up from
there. I think you put that really well. And when it comes to the way that these images spread
around, I think you would imagine that if something like this was going to happen, especially to
someone like Taylor Swift, that it would be addressed very quickly because someone like
Taylor Swift is obviously not only in the public's eye very clearly,
but also is quite an influential person. I'm sure that she can get in touch with,
or her people can get in touch with these social media companies to try to ensure that something is done. And it seemed quite notable in this event that it continued, you know, kind of spreading
around for such a long period of time. What was going on there and why did it seem like Twitter
or X or whatever was not able to get
a hold of this? And I believe in a story of yours that I read that these images also started to show
up on platforms like Instagram and Facebook and I'm sure elsewhere as well. How did it spread so
much when something like this, these sorts of images should be taken down? So the way the whole
incident played out with the Taylor Swift images really confronts a lot of the issues in the space. And it exposed something that people who have been
looking at deep fakes have been aware of for a while, which is that platforms are extremely
reluctant to do anything about this. They're reluctant to actually take down the images. They're reluctant
to suppress the images and links to the material. And above all, they're extremely reluctant to
suspend the people who are posting this stuff and the companies that are posting this stuff.
And so in the case of the Taylor Swift images, Twitter did not do anything. And I actually don't think Twitter ever did anything
in regards to that viral image. What happened was fans of Taylor Swift started a campaign
to mass report this image. And after hours and hours and hours of presumably hundreds,
if not thousands of Taylor Swift fans reporting the image, it was finally taken
down because they overwhelmed the reporting system. And that's how they got the image taken
down. And it wasn't until like 24 hours after it was posted that you saw Twitter start to actually
respond to this. And this really tracks with the average experience of someone who is victimized
by material like this, including celebrities.
I think we're going to go into this a little bit more later, but other less famous celebrities,
but still people with PR teams, people with lawyers, other celebrities and influencers and
creators have talked about how it shocked them that it was impossible to get this stuff taken
down and how they went through every avenue
available to them and still nothing was being done about the problem. So the Taylor Swift situation
really exposed to the mainstream, not only what this problem looks like, but how difficult it is
to get anyone to do anything about this problem. Yeah, basically, if you have the Swifties on your
side, maybe you can get some action because they can actually push these platforms to do something. But otherwise, if you don't have this kind of pull, it's much harder. And you've also been seeing this happen to influencers and streamers, often women in their early 20s or their late teens,
potentially even younger, kind of getting their images used and spread around in these ways by these deepfakes and these kind of AI-generated images. What are you seeing there?
And, you know, as you were saying, what is their experience of this?
Yes. So the practice of deepfaking, it got on my radar in like the late 2010s,
which is when you saw the
technology evolve to this point. There were already rudimentary AI-generated deep fakes
coming out in the years between 2016 and 2018. You saw on the corners of Reddit, the corners
of 4chan, people were starting to do this. Visibly, the people who they were targeting were often
celebrities. It makes sense because if you're trying to go viral, or if you're trying not even
to go viral, if you're just trying to get some attention for your technology or whatever you're
doing with it, then it would make sense that you're going to go after a highly visible,
high-profile woman, especially within the deepfake community. When you study how the community has
evolved over the past seven years or so online, it's a highly gendered environment. And the
ideology behind what they're doing is highly, highly gendered. So you see the communities of
deepfake creators, they're just dominated by men. If there are women there, you're not really seeing them
identify themselves as women. It's like a boys club. It's like a men's space. And a lot of people
who have studied sort of the deep fake space have talked about how it's emerged as a kind of social
bonding community for a lot of men of various ages. And so in the early days, this was relatively contained to these male
dominated corners of the internet. But over the past few years, it got on my radar because
I was seeing influencers who I covered having to deal with this, but not in a major, major way.
So if I was already looking at a case where there were certain attacks against an influencer or an influencer had a really controversial reputation, I would then sometimes see this material.
Or if the influencer was extremely vulnerable.
So one of the first times I saw a deepfake was actually some child influencers.
I saw some deepfakes of them on some extremely unsavory website. And my reaction, this was probably around 2019,
I was like, this is horrible, but I have to be careful about how I approach this because
what I don't want to do is I don't want to make the problem inadvertently bigger by shining a
spotlight on it. So at the time it felt very underground and kind of like a new threat.
And then in January 2023, so a year before the
Taylor Swift images, something happened that registered to me as a really big deal. And it did
become somewhat of a watershed moment. But most of the public, I doubt, is really aware of this.
So what happened is a major streamer on the platform Twitch, he was live streaming.
He was just live streaming one day.
And in the corner of what he's showing on his camera, you can see his web browser.
So you can see what he's doing on his computer.
And he has a tab open.
And the tab is he's looking at deepfakes of some of his peers and friends in the Twitch
space on one of these
really prolific deep fake websites. You can see what he's doing. I don't believe that he meant
to showcase this. I don't think he meant to expose what he was doing. I think it was an accident,
but it caused this cataclysmic outpouring of attention onto this issue. And the people who were paying attention were the people
who actually wanted to consume this content. So from that day forward, we saw the numbers of
traffic, the amount of traffic being directed to this website, it just started to skyrocket.
And it has never stopped. Month over month, ever since this happened, the traffic has just continued to climb.
And what happened by the end of the year is that in 2023, there were more deepfakes created than in every other year combined thus far.
So that is where the problem really emerged as a mainstream issue and not just something in the corners of the internet with the most unsavory types of
people. And you also saw immediately the effect that this had on the women who he was looking at
the deep fakes of, because some of those women who were some of the biggest female Twitch streamers
were really, really understandably traumatized by this. They talked about all of the various consequences that this had
on their mental health. One of the women who was affected talked about how she had struggled with
an eating disorder in the past and seeing her deep fakes, it had reignited some of those triggers
because she was seeing her face on a different woman's body. And it was starting to make her
question her own body,
like, should my body look like the body in this deep fake that I'm seeing? And so that's just one
of the many underexplored and sort of under-recognized consequences that this can have,
in addition to all of the trauma of being essentially sexually abused. And I think a lot
of people struggle to make that disconnect. They're like, well, it's not real. It's not happening to you. It's fake, right?
But in reality, our brains on a neurophysiological level do not recognize that. And this is something
I've talked about with tech titans at companies like Adobe. We all recognize why this is an issue
because we all know that the way the brain works is that when
you're processing videos and images, your brain is kind of treating it as if it's real. And so even
if you know cognitively that what you're looking at is fake, it still has a real effect, not only
on the person who's depicted, but on the viewer. And so it is a very similar issue in that sense
to what used to be known as revenge pornography,
which we now prefer not to use terms like revenge porn. We prefer to use terms like
non-consensual sexually explicit material because it's less stigmatizing. But the phenomenon,
it has played out so similarly to how this issue played out in the 2010s with women's nude photos being posted online
without their consent. It's really just like watching the same cycle play out yet again,
because where we're at right now is women and victims of this are sharing how harmful it is.
And tech companies and the people who are responsible for this problem,
they have not caught up. And so now we're in a space currently where there's very little regulation and very little oversight.
That Twitch incident was one of the moments that I saw kind of break out into the broader public discourse, but before that, this was something that was contained to these sorts of communities that are paying attention to the deepfakes and AI-generated explicit images and things like that. But I'm happy that you brought up the comparison
to what was happening in the past, right? Because it's not as if nude images of celebrities in particular,
or even of kind of minor public figures, are something that is entirely new. These have circled around in
the past, but usually they had to be real images and not things that were being
created. As I was reading some of your stories and just kind of preparing for this, I thought back to
the leaks from the Apple kind of cloud storage stuff that was particularly focused on Jennifer
Lawrence, but I believe affected a number of other people about 10 years ago when their kind of nude
images were circulating around. And I wonder what you see in the similarities
to what happened then versus what is happening now,
but also the differences in, I guess,
what I would see as the scale of the problem
since these images can just be created with these tools
that have been made easily accessible by these tech companies,
you know, in particular, since the boom
of the generative AI tools being released
in the past couple of years?
What are you seeing the difference between now and then, but also the similarities?
Yes, it's a great question. And it really illuminates the scale of the problem now,
because in the 2010s, when you saw this issue arise, it happened in a lot of the same ways
that we're seeing it now, where the highest profile incidents
were like the iCloud hacks and the leaks. And, you know, fascinatingly enough and disturbingly
enough, some of the same websites and some of the same people who posted that illicit material back
in the 2010s, it's the same people posting the deepfakes now. It's the same websites posting
the deepfakes now, because we never really got a handle on how to stop that from happening. Tech companies
eventually, after years of women suffering, created pathways for them to be able to at least take this
material down from Google, from Facebook, from whatever. But the websites that existed as sort
of like the black market of this practice,
those websites were never taken down. They're not mainstream tech platforms. So they're not
going to get yanked in front of Congress. People don't know them. They're not recognizable.
They are less susceptible to scrutiny and media pushback and regulation. And so those websites
and those people, they're still out
there. They still exist. And this has become the new goldmine for them. As you alluded to,
I think that one of the major problems here is we still have the non-consensual sharing and
distribution of intimate images. It's still a massive problem. And even when, for example,
it became more commonly known that you don't want to just send your nude images to anyone
because they could post them on the internet, even when that became common knowledge,
of course, abusive partners and vengeful ex-partners would still release this material
after they broke up.
But in addition to that, predators have always gone out of their way to acquire this material
through whatever means necessary.
So back in the 2010s, when it was called revenge porn, and that was like the big deal, I remember
one of the guys who actually did go to prison. The reason that they were able to convict him is because he had hired a hacker to hack into women's devices to find this kind of material.
Because that guy, his name is Hunter Moore, he was making money off of this, as a lot of people were able to.
He was profiting and he was able to monetize the spread of this non-consensual material on the internet.
So once it became profitable to do this, in addition to something that people just wanted
to do maliciously, that's when it really became a kind of unstoppable force that eventually
institutions had to pay attention to. And eventually I think like with the celebrity
iCloud leaks,
it reached a point where it could no longer be ignored. Because you can ignore thousands of anonymous women, but you cannot ignore Jennifer Lawrence. She has access to the New York Times,
she can say, hey, this is a sex crime. And so then you start to see things happening. And it's
the same thing with Taylor Swift. In terms of what is different now and what is so staggeringly horrifying about the deepfake
issue, it's exactly what you said, Paris.
Before, it had to be some sort of real material.
And you could use hidden webcams in changing rooms.
You could go to great lengths to acquire real explicit material of victims.
But now you don't need to do any of that. And I'm about to list a
couple of real things that people are doing to create deepfakes. There are people who are going
on to public live stream footage of courtrooms and pulling images from people testifying at the
stand and turning them into deepfakes. There are people going through women's Instagram accounts
and OnlyFans profiles and taking clothed pictures
of them and running them through programs that will undress them. There are people doing this
with girls' yearbook photos and pictures of women and girls taken at school. There are people doing
this with broadcast news interviews, as well as movies and TV and podcasts and social media posts
and all the other ways in which we are able to share our images today.
And that doesn't even get into what created those Taylor Swift images, because there was never even any real picture.
They just were able to create it out of thin air.
So the problem now is that the scale of being able to perpetrate what should be considered a crime, the scale is now unimaginable. And you see this with individual perpetrators,
with the amount of damage that they're able to cause. So I've seen cases where an individual
perpetrator has been able to create deep fakes of women who are close to him. Like maybe he creates
deep fakes of a dozen of his female classmates. And then additionally,
he's running all these celebrity and influencer women through the same technology. And so now
he's able to victimize an entire scale of women, those who he knows personally to him,
and those who he doesn't know. And that is a kind of like a level of criminology, I think,
like a level of criminal potential that is somewhat modern.
And the scale, I think, is something that people have yet to fully realize just how much of a deal
this is. It's shocking when you actually start to kind of grapple with the broader impacts of it.
And I want to ask about how it's affecting people beyond the celebrities. But I have one more
question before I do that. And you mentioned how this can be sort of like a bonding thing. Like there are these communities where men make these
images of women and share them with one another. And those are groups that exist online. But I
also wanted to know a bit about how the economy of this actually works. Because as you said,
there's also a lot of websites that profit off of this and that have been doing so for a long time. Obviously, we know there are plenty of tools that are created
to make these sorts of images. How did these companies make their money? And how does this
become such a big problem that so many different actors can make money off of, even as so many
people are suffering as a result of it? Yes. So with the internet, there are so many possible
ways to monetize things now. And there are so many ways that you can set up monetization schemes,
both with, you know, sort of mainstream financial institutions and outside of them.
So one way that I've seen people monetize the creation of deep fakes is there will be a website that is kind of like a
YouTube clone. And it functions the same way that a lot of free porn websites do like Pornhub,
where you go on at this current stage of our regulatory environment, depending on what state
you're in, you may or may not need to provide your ID to view what's on that website on Pornhub
specifically. But typically, you can just access
it from your browser. You can just go to the website and you can just watch free videos.
And that's how most people consume porn today is for free. If you're looking to make money off of
your videos, there are deepfake websites that are basically like YouTube or Pornhub clones.
And you can go and you can watch a couple minutes of a deepfake video for free. And then in the description, it'll be
like, here are various ways where you can unlock longer content, customized content, and most
disturbingly, content that features individuals who you personally know. And so there has to be
a way to actually get money into the
hands of the people who are making this stuff. And the ways that I've seen them do that is they can
use cryptocurrency. So cryptocurrency wallets have become a big part of this economy. And that is a
very difficult kind of thing to figure out, like, how are we going to stop this because of the
nature of cryptocurrency? It's harder to trace. It's harder to control. There's no government that is going
to determine the use of certain types of cryptocurrency. So that's one way that they
can profit off of it is through crypto. Another way, staggeringly, is that in an NBC News investigation,
we found a website that was like an OnlyFans clone.
And they had MasterCard and Visa hooked up to this website where they were selling deep fakes.
And we reached out to MasterCard and Visa. And we were like, hey, why are you offering your
payment processor services for this website? And we never got a response. And it's unclear whether it was because of our
reporting or not. But that one site that we found basically just banned everyone. That site ended up
going down. But there is the potential to just create a new one. And I think that a lot of these
financial institutions and payment processors, they have become really strict about supposedly pornographic content over
the past several years. But the deepfake stuff is getting through. And so it raises a question
of who's at the wheel, who's monitoring, who's allowed to use Visa and MasterCard,
and are they doing a good enough job? Because at this current stage, deepfake producers are
able to market and make money off of this content using people's
credit cards. That's a really good point about how, you know, kind of determined they've been
to crack down on, you know, the ability to sell kind of nude images and stuff like that, and on
any kind of sex work or anything like that. And the target that has been placed on that by lawmakers
and by payment processors and whatnot, but how these deepfake companies and whatnot are seemingly able to get
away with that, at least so far. And I imagine part of that is because it can kind of be like
a bit of a whack-a-mole situation with new ones kind of propping up here and there.
But as you were saying, this is not just something that affects the celebrities that we all see on
the news all the time. It's average everyday people who are also being affected by this and who are having these
images circulating about them. As we were talking about before we started recording, there was a
documentary that I don't know if it's fully released right now or if it's still showing at
festivals, but that is really kind of grappling with this issue, called Another Body. And it looks
at, you know,
this woman, I believe she's a high schooler in the video, and they actually kind of creatively
use deepfakes in order to hide who she actually is by using a deepfake for her kind of throughout
the whole film. And you don't find out until the end, if I remember correctly, or maybe it's part
way through. But that is kind of a film that really showed me how much of a problem this is. What are the widespread implications of this? And what are we seeing when it comes to
regular women to even girls in like high school and things like that, when, you know, the people
they know are making these images of them? So Another Body, just a fantastic documentary,
and it opened my eyes to the scale of this problem as well.
And I think part of the recent evolution of deepfakes, what we're dealing with at this
very moment, is that even in 2018, 2019, 2020, in those years, the technology existed and
people were using it for this purpose.
But it required sophistication.
There was kind of an entry level
that you have to clear to create a deepfake. You have to have a computer that can
store all of this technology on it and process all of this at once. And you have to have the
technical ability to know how to do this. And so for years, that limited the spread and the scale
of the destruction. In the Another Body documentary,
the perpetrator who they identify, he's a comp sci student in college. So he understands computer
science. So he's able to understand how to do this. What we're dealing with today is that there
are hundreds of apps on the Google Play Store and on the App Store right now that you can download and you can input
photos. And some of them are really rudimentary because I've tested out several of them on
pictures of myself to try to figure out what can you really do with these apps. And it varies.
Some of them don't do it very well. Some of them are super, super rudimentary. But there are
enough. And some of them, you have to pay them $5 or you have to sign
up for a subscription. And a lot of them are scammers. So I try not to do that because I don't
want someone to have my credit card info. And in that case, I guess Google and Apple are also
getting their cut as people are doing this. Exactly. And so with all of these apps,
you no longer need to have any sophistication at all. Because I'm someone who doesn't have a lot of comp-sci sophistication.
I'm at a really rudimentary level.
And so when I'm testing this stuff out, I'm coming at it from the sensibilities that your
average fifth grader would have and their abilities to navigate these types of apps.
And that's exactly why we're seeing this problem in middle schools.
Because now what's happening
is these apps, they're being advertised on mainstream social media platforms. They're
being advertised to young people. They're being advertised with photos of young celebrities that
fall into these people's age groups. And the message that's being sent to high schoolers and
middle schoolers around the world is just download this app and do this and it'll take five seconds. And that is exactly what is happening. We've seen cases at at least a dozen
middle and high schools. And I truly believe that that is just the tip of the iceberg. Because what
we've seen in some of these cases is that the schools kind of try to cover it up more than they
try to actually fix the problem because what middle or high school
wants to be on the national news for a deepfake incident? But regardless, the problem, it's woven
its way into all of these various communities around the world. There was a middle school in
Spain where this happened. I think there was a school I saw in Canada where this happened.
When you look at the map of where this technology and the creators behind these deepfakes
are coming from, it's all over the world. I've spoken to victims from India. I've spoken to
victims from Australia. I've seen technology developed in China and in Ukraine. I've seen
technology and victims everywhere. It's very clearly a global issue. That hinders the ability to make change because
even if, let's say, we banned deepfake apps in
the United States, which we're nowhere near doing, so many of these apps are produced outside of the
United States. And how do you even trace down the perpetrator? Really frustratingly, a lot of times
when someone is a victim of something like this and they go to their local police department,
the local police officer who they interface with, more likely than not, does not have
any specialized training or knowledge to deal with this issue.
And unfortunately, what a lot of victims hear, and this tracks with how police respond to
sexual violence in all situations, a lot of times what they're hearing is, we don't think
this is a crime.
We can't help you.
And even if we do think this is a crime, we're not going to devote any
resources to helping you figure out who's making these deep fakes of you. So it's left to the
victims in most cases, in the vast majority of cases, to try to seek some sort of recourse
themselves. As you described that, I can only think about the harm that comes of it as well.
Like I'm the furthest thing from an expert on this issue, but I know that I have
read multiple stories of people who were in high school and have had just nude images that they
took of themselves or that someone took of them kind of spread around through their school. And
then, you know, I know that these are really sensitive topics, but engaged in self-harm or
even attempted suicide or committed suicide as a result of that.
And now if these sorts of images can just be created by anybody, by any of their students
and spread around throughout their schools, and they have so little control about that,
are we seeing broader impacts on these students and these victims of the creation of these images?
Yes. There has been at least one reported case, I believe, in the UK of a young person who
died by suicide after seeing deepfakes. And I don't have this reporting myself. We didn't
confirm it ourselves. But from what I saw reported in some tabloids, this child made
reference to this issue as why they were resorting to this. So absolutely, we're seeing these types
of consequences spiral out.
And I think we're going to be seeing and hearing a lot more.
And in addition to that, one of the things that really alarms me about what's happening
here is the perception of fear and the ways that now women and girls are trying to protect
themselves and the ways they're being encouraged to protect themselves.
The fact is, there is nothing that you as an individual can do to prevent someone from doing this to you.
But people are going to try.
They're going to try to find a way to avoid this happening to them.
And what that looks like is women unenrolling from male-dominated fields because of their male classmates doing this to
them. That's what we saw with the Another Body documentary is like, it was a computer science
course, which is already a very male-dominated field, and they're doing it to all the women in
their classes. So the end result could therefore be, you see this gender discrimination continue
to be perpetuated in these male-dominated fields.
And I think that honestly has a lot more to do with it than people realize,
because that's the story of a lot of tech. Unfortunately, a lot of tech comes back to
this issue where you don't have women in the room, you don't have women in leadership positions,
and sexual harm and non-consensual imagery becomes a key functionality of the tech that
we produce.
So that's one huge issue with this.
But in addition to that, you also see women and girls wanting to recuse themselves from
public life out of fear that this will happen to them.
And I've seen people, even people who seemingly have good intentions, say things like, this
is why you
shouldn't put your face on the internet. And it's like, guys, you're doing the work for them.
Because let's be real, the only way to be a public figure in 2024 is to have some sort of online
presence. So you're basically telling women and girls en masse, don't try to be a public figure.
Don't try to go into politics. Don't try to be a visible
person in your field or in the world in general out of fear that your image will be corrupted or
that your image will be abused. And so I think that's a really harmful message that is now being
perpetuated because of this. And the saddest thing about it is that wouldn't even work.
When we see these cases pop up in middle and high schools, it's not because the girls have social media presences. It's because it's their classmates
doing this to them. And it really tracks with the entire spectrum of gender-based violence and how
we see it most commonly perpetrated, which is by people who are close, physically close to the
victim. Yeah. I think it's such an important observation
for you to make. And I'm not surprised to hear those sorts of things, but to think that these
are the effects that women are experiencing as a result of these technologies. And there's so
little accountability for the people creating these technologies, let alone the people using
them in order to create these images, it just makes you profoundly angry. Yes. Yes. And it makes you, I know that after looking at this for the past year,
and I know there are other reporters like Samantha Cole, who has been reporting on this from day one,
from the day of the first deepfake, you get this sense over time, you start to realize
that this giant pressing issue does not matter to the major
companies and the major people who are rushing ahead in this AI arms race. They're acting like
this doesn't even exist and that it's not even happening because if people were to really wrestle
with the actual harm that is currently being committed by this technology, then we would start to ask, hey, should Microsoft be producing this stuff at this rapid rate with zero guardrails?
Should OpenAI really have the prominence that it currently does in business and culture?
Shouldn't we be asking these tech CEOs these kinds of questions? They don't want that to happen.
They love how much money they're making from AI right now. They don't want to have to deal with this conversation. They would prefer that nobody talked
about it. And when they do talk about it, they constantly talk about it as if it's this thing
that's going to happen in the future and not something that is currently already happening now.
It just makes you so angry. I remember reading one of the stories that you wrote,
I believe it was after the Taylor Swift incident where the Microsoft CEO was asked about this stuff. And he was like, yeah, we definitely need, you know, kind of rules around this or whatnot. You know, we need to be paying more attention to it. But it's like, that is not nearly enough. Like that doesn't get to the scale of what is happening and how, in many cases, it's the tools that are created by these major companies
that are helping to enable these things to actually be done. Yes, that has been one of the
most shocking, and it shouldn't be shocking, but it's been one of the most staggering findings for
me personally. That's such a good case study, the Taylor Swift Microsoft response, because as he was
doing that interview, 404 Media was figuring out that the Taylor Swift
images were created with Microsoft's generative AI tools. And I remember Microsoft tried to say,
we don't think that's true. We don't think it was our tools. But eventually they relented and they
were like, yeah, it was our tools. And now we've strengthened these protections. But we cannot function in a society
where we're going to let the harm happen first and then we're going to respond to it.
We simply cannot. And that's what these tech companies have gotten away with for so long.
And that's their status quo is they're just going to create new technology and they're going to push
it out and they're just going to wait for people to abuse it. And then they'll have the conversation.
If that's the process, if it's not, let's consider the harmful effects before we push it out,
then people will lose their lives. They already are. And I feel as though they exist in this
echo chamber of plausible deniability. And at some point we have to puncture that and be like, no, this is your fault. You made this
technology. You did not think about this. Or if you did, you did not create guardrails around
these obvious problems and now people are suffering as a result.
Definitely. It's not just the people creating these images that are at fault here and need
to be held to account, though they absolutely do. It's the people who are enabling it in the first place, who are creating these
technologies, who are not thinking about the broader consequences or just ignoring the fact
that there will be broader consequences because that is much more beneficial and easier for them
to be able to roll out all this stuff and create all the hype around it and get the investors
excited rather than saying, oh, there are a lot of potential problems here that we actually need
to address. Can you talk a bit about, you mentioned Microsoft there, but obviously OpenAI has these
tools, Google has these tools, I believe Facebook or Meta is working on their own. Do we see similar
things from a lot of these major companies when it comes to people being able to use them to create
images like this? Yes. And I think that part of what makes this complicated is that a lot of this
technology is open source. And a lot of it is then able to be taken from code repositories,
like GitHub, for example, which is owned by Microsoft. And if you go
to GitHub right now, you will see hundreds, if not thousands of AI models that are on there
for anyone to use that are created just for this purpose. They're not even under the pretense of
being created for other purposes. There are just like bounty networks popping up everywhere. Like,
please someone AI this woman, or please someone make an AI tool that can do this to women.
It's just happening right out in the open. And in addition to that, one of the really
fundamental issues that I have with the current AI space is that they're putting these products
out into the consumer market. And they're not even creating detection technology. There is no technology
that can reliably detect if something
is AI generated. And in a lot of cases, like OpenAI is a good example of this. When they first
launched ChatGPT, they were like, here's a side program where you can put the text into it and
we'll tell you if it came from ChatGPT. That didn't work. They pulled that program like a year later
and they were like, this doesn't work.
The accuracy rate is so low. So now there's just nothing. And they're very transparent about that.
They write on their website, there's nothing that exists to reliably detect AI generated text.
There's nothing that exists to reliably detect AI generated video. So they're throwing the rest
of us to the wolves. And I think that people need
to have some solidarity, but I understand why people don't, because this is all moving so quickly
that I don't think people realize the sheer impact and magnitude of everything.
But it's like these companies do not exist in our interest. When they talk about strategies
to combat the harms of AI, they're not talking about you and me. They're only talking about themselves. And the sooner that everybody wakes up to that, I'm just like, AI is not even for us. It's not for us. It's to take our jobs away. It's to make our creative work less valuable. It's to make us more productive for their bottom line. And very little of what AI
is projected to do will impact the average person in a meaningful way. It's all about creating,
inflating these artificial stock prices and values for the people at the top to benefit the most.
It's so well said. And reading your work, I was struck how the lack of responsibility isn't just
in the creation of these tools and what people can use
them for, but also just in the search engines and the way that many people access information,
right? If you go onto a Google search engine, you know, just the other day, I was, for example,
trying to look up images of like Elon Musk and Mars for a piece I was writing. And so many
of, you know, the images that Google served me up were AI-generated stuff.
And obviously it wasn't labeled as that,
but it reminds me in the past
when the image results used to be filled
with like Pinterest images,
and now it's just like stuffed with AI generated garbage.
And if you can't clearly tell,
in many cases you would think
that they're just normal images
or something that someone had
created. But reading your reporting, this is also a serious problem with explicit images where
these deep fakes show up in Google images. Even when you search for the names of certain
celebrities, for example, it will show up in their kind of results. Like what are we seeing
on the search engine side of things? And are these companies even properly responding to that? Yes. So the search engine component is huge
because a lot of the ways that people get exposed to this material, whether they want to or not,
is through the search engines. When you look at those major deepfake websites that are hosting
the majority of this material, the way that people are getting to that website is they're
Googling so-and-so deepfakes, and then Google is giving them the links to go to this website.
This is a Google problem. Yes, the website is ultimately at fault for hosting this material,
but Google has a lot of responsibility for powering the existence of this website.
People wouldn't be going to this website if Google wasn't sending them there. And Google, the stance that it takes is like, we're going to wait
and see how the cultural conversation and specifically the legal conversation around this
develops. They're basically like, we're going to wait and see if this becomes illegal and then we'll react.
Like our policies are shaped by local legal requirements.
And so they'll just tell you like they're not looking at this from a moral perspective.
They're not listening to the people who use their platform.
What they're listening to is the only thing that has the power to keep them accountable.
And that system of accountability is
not doing enough to combat this issue. So Google has removed itself from having responsibility.
The only way that it's going to take responsibility is if people demand Google takes responsibility
for this. And something that is so insidious about the way that this happens is like Google
in its own defense will say, we're not showing you deepfakes.
We're not even showing you sexually explicit material if you just type in the words
Jennifer Lawrence; you have to type in Jennifer Lawrence deepfakes to get there.
And that's half true because what some researchers found is that if you are a news consumer who wants
to hear about Jennifer Lawrence's thoughts on deepfakes, or if you want
to hear about Jenna Ortega's thoughts on deepfakes, or Scarlett Johansson, what has she said about
deepfakes? Because Scarlett Johansson is one of the most victimized women in the deepfake space.
And so Scarlett Johansson has been talking about this for a long time. And so when you go to try
to find an article about Scarlett Johansson's thoughts on deepfakes, Google's not just giving you that in the results. Google is giving you links to deepfakes when you're explicitly looking for things that aren't that. And so Google's defense is like, we only give people what they ask for. But we have to ask Google, is that right? Should we give people this type of material that's harmful just because they want to see
it?
That's one of the biggest questions, I think, in the regulatory space is like, does Google
have a responsibility to actually prevent people from accessing material that's harmful?
And in other cases, the answer is yes.
When it comes to things like child sexual abuse material, Google isn't
just going to give you that because you want to see it. But with deep fakes, they haven't quite
reached that point yet and they need to be pushed into that point. Yeah. And I guess with the child
sexual abuse material, that's because that is explicitly illegal. And so they have to act on it,
right? Yes. And even then, and this is something I did an article about, even with child sexual abuse material, I talked to a bunch of legal experts, and I went back to the statutes. It's written into those statutes in the US code that computer-generated child pornography is illegal. So they
already had this kind of protection set up because AI, it's new in some senses, but it is not new in
other senses. People have been talking about artificial intelligence since like the 1970s
and computer generation. Hello, the Shrek movie was computer
generated. So it's been something that has existed for a long time. And what I found by just searching
like pretty general terms related to deepfakes, I found deepfake examples in the top Bing and
Google search results. And what they were was pictures of
celebrities taken before the age of 18. The one in my article that I really focused on is like
this picture of Miley Cyrus. And the picture of Miley Cyrus at the oldest, she was 15 in the photo,
and they had taken her face and they had pasted it over adult nude bodies. And this was coming up in
the top search results for like Miley Cyrus deep fakes.
So that image should not be there. That is technically not allowed. And when I show that
to Google, they take it down because they're like, yeah, we recognize that that's not right. That's
prohibited. But it just goes to show that even though this is prohibited, they're not necessarily
going to catch it. And I think that's another part of this is they have to actually efficiently be able to
detect and remove this material. And when it comes to deep fakes, we're not seeing that,
not even with deep fakes that depict children. Yeah. Once again, I'm shocked and not shocked
at what you're saying. And it really strikes me that when you talk about responsibility
and you think about how little Google is doing on this, and so many of these tech companies are
doing on this. Meanwhile, we saw just a few weeks ago, there was this rapid backlash to their Gemini
AI tool because it dared to show some racial diversity in historical events when it was
prompted. And something like that,
it seemed like it immediately had a response and immediately had something to say. And the CEO had
something to say about it. But when we see these issues with deepfakes, or when we see other issues
that have come of their AI tools, they're much less likely to actually say something or actually
take some degree of action. What do you make of the difference in the way that they respond to these different issues?
That's a really good question.
And I think that a lot of it depends on who is raising these complaints.
With the Gemini stuff, you are seeing not only giant conservative voices speak out about this,
but there's this alliance currently in the big tech
space between certain venture capitalists and certain billionaires and certain tech platform
owners have become very close and very friendly with these extremely conservative voices.
And so when you see someone like Elon Musk or Bill Ackman start to speak out on an issue,
well, they're in the same room as the
people who run Google. So now it's your colleagues who are calling this out. And I truly think that
if someone like Elon were going to make a big deal out of the question of deepfakes,
maybe we would see a response. But Elon can't do that because his own platform is part of the
problem. And I don't think
Elon is interested in women's rights, more broadly speaking. Yeah, I would tend to agree with that.
We've been talking quite a bit about the responsibility of the companies and the
responses that they have had to this and, you know, how those responses have been truly ineffective
and not nearly meeting the bar that I think most people would expect of them. But on the kind of legal side of things, when we
look at lawmakers on the federal level in the United States, but also on the state level,
and I don't know if you have any insight about internationally, what are we seeing,
you know, from our politicians when it comes to trying to address this issue? And does it seem
like there are any attempts that would actually make some real difference here?
There have been some positive strides, both internationally and within the United States.
Australia was one of the first countries to actually form a task force dedicated to this
issue. Europe in general has way better and tighter regulations around this type of stuff, although not necessarily with deepfakes. Europe has better protections
around things like data privacy, and because of the way they've legislated
online issues in the past, they have a clear pathway to legislating something
like deepfakes. In the United States, federally speaking, when it comes to
regulating the internet, we're a mess. We have very poor
regulations. And the process to actually getting anything passed federally is super convoluted and
messy and difficult to do. On the state level, it's much easier to pass things like this.
And so we've seen legislation here and there. We've seen a bunch of states and more and more
with every passing week, introducing legislation, passing legislation, getting things on the books related
to deep fakes. The problem is then that like, for example, California has some decent laws around
this issue, but then there are high-profile victims like celebrities and influencers based in California. The problem then becomes identifying and having jurisdiction over the perpetrators. And the problem also becomes who bears the legal responsibility here. The image may be illegal, but in the act of carrying out that law, the enforcement of that law,
that's where you begin to run into a lot of questions and technicalities. And a lot of times,
unfortunately, for victims, there's so much involved in the process of trying to get justice.
And this is something that applies to victims of all kinds of crimes. And it's an issue that
is very frequently overlooked, which is that you have
to have resources. You have to be able to afford and access a lawyer. You have to have the time
and the money to dedicate to fighting your case. These are things that are not available to the
vast majority of victims. And so we're not going to see the vast majority of victims, even under
these laws, get that kind of justice. And so, you know, approaching the issue of deep fakes then takes
on a much more multifaceted sort of approach, because we also have to look at social factors,
and you have to disincentivize what these young boys are doing in these middle schools and high
schools, we have to sort of make it clear that this type of behavior isn't going to be tolerated
on various levels. And again, we've seen strides there. We've seen some good things happen, I would say. We've seen some recent
middle schools really crack down on this in a way where they're removing perpetrators out of
the school system. They're separating perpetrators from victims. They're showing victims that you
matter and that this is a problem and that it was wrong. And even something that simple can make a huge
difference in actually combating this issue on a cultural level.
Speaking of the cultural level, do we see a shift on that as well? Because I remember
kind of in the past when people would talk about kind of nude images circulating, it was a very
sort of shameful thing. And, you know, it could have kind of severe consequences for people, especially people in the public eye.
You know, my question is not to kind of dismiss how important this is and dismiss the need for action on this.
But do we also see a change kind of in the social norms where it's like, if this happens to you, you know, you shouldn't feel shamed or, you know, people aren't going to think worse of you.
Do you see changes there?
I see a lot of different things happening at once. And so it really depends case to case,
looking at the various influences within the community of the person who's being affected.
So in some ways, we've seen some progress. So a case that I reported on a couple weeks ago involved a middle school in Beverly Hills.
And what I saw in that case was some really progressive kind of action that I don't always
see.
But I think part of why I was seeing that is because Beverly Hills is a community that
is unlike most other communities.
It's an extremely wealthy, high-profile community that is kind of
incomparable to most communities in the United States and globally. And so with those vast
resources and that vast spotlight, they did what seemed to be the right thing. Elsewhere,
you're not always going to get people reacting in the same way. And I think culturally, for example, not just in the United States, but also in other countries, in conservative communities, you might have an approach to this issue that blames the victim.
And we've seen that.
We've seen, especially in the early days of deepfakes, women who were targeted in really culturally conservative areas
faced a lot of backlash from their communities. And they were blamed in a lot of cases,
and they faced violence as a result of being violated. So something that concerns me outside
of the deepfake issue, but that intersects with it, is we have this really radical anti-feminist ideology that is growing in lots of
different areas. We see it with young boys and young men in various communities around the world.
They're influenced by people like Andrew Tate and people who are telling them, actually,
you should do stuff like this. You should assert your power and your dominance as men by violating sexually the women and girls around you.
So when you have boys getting that message, that's going to influence how these sorts of incidents play out.
And I think right now we're seeing kind of like a growing gap where some communities are becoming more progressive and some people are becoming more and more progressive.
But you also have people becoming more and more regressive.
So I think that for some victims, there will be cultural things that help them. Like, I think that
you're still seeing the Me Too movement sort of reverberate and make women more confident in
coming forward about being the victim of sexual crime. But you're also seeing a backlash to the
Me Too movement that is then trying to make women
not do that. So we have all of these competing cultural influences that are going to shift the
environment for anyone who's a victim of this. Yeah, that's a really good point. And unfortunately,
those kind of Andrew Tate ideas and the people like him who promote this are far more influential
than they should be. And as you say, even if norms are
changing to a certain degree, there's still that kind of visceral reaction of seeing these images
of yourself and how, as you were talking about earlier, this can lead to people wanting to
kind of, you know, move out of public life and try to avoid, you know, situations or careers or
sectors where they might face a higher risk of having these images being made of them and being
put in these
situations. And that is completely unacceptable. And so I think my final question would be,
what do you see in the activism around this? We talked about before we started recording the
My Image, My Choice movement that's been put together by the people who created the Another
Body documentary. Obviously, I think that this issue is becoming something that more people are aware of and that lawmakers are feeling more pressure to do
something about. What do you see on that angle and where do you see this issue going over the
next year or so? Like you just said, I have seen some really heartening activism work and advocacy
work popping up in this space. A lot of people have been committed to this issue for a long time, because before deepfakes reached this point,
they were working on these same issues in response to the revenge porn crisis of the 2010s.
The same people who developed concepts like intimate privacy, which is a fairly modern concept
in itself, and who have been
leading the charge on that issue, are also responding to this one. And so you see
advocacy organizations, as well as the organizations that exist for survivors of all
types of gender-based and sexual violence. Something that you end up seeing happen a lot
as you're looking at how this plays out, is people who were already
abusive, they just expand their toolkit. So something like deepfakes just becomes a new
tool in the abusive toolkit that a lot of individuals weaponize against their victims.
So helplines and resources for victims of all these types of crimes are seeing this
happen more and more. In the
same way that they saw a loss of intimate privacy with real images begin to impact their
clients 10 years ago, they're now seeing these fake images impact their clients
as well. So we're seeing a lot of response on a lot of different fronts that I think is really
important. And in terms of how this issue is going to develop over the next couple of years,
I think we're just starting to hit that stride. I think we're going to see so much more movement
come out of this. Because a lot of times when you look at the field of victims' rights,
it takes people time to process what has happened to them before
they're in a place where they can do something about it. So I think that a lot of people who are
tragically being victimized right now and over the past couple of years, in the coming years,
they're going to reach a place where they're like, I have now processed what I went through,
and I want to do something about it. And so we're going to start hearing these voices and
these testimonies, and they're going to get bigger and bigger and bigger.
And I think that bringing it back to what we talked about at the very beginning,
after the Taylor Swift deepfake incident, I personally saw more legislative action and more
attention, interest, and support than I had seen at any previous point. It was like a wave,
a tidal wave of just attention being paid to this issue. And so having celebrities be involved in
this, their advocacy can be important in the same way that Jennifer Lawrence saying, "What happened
to me was a sex crime, and anyone who viewed those pictures is a sexual offender." When she said that,
it reverberated. It said to so many women, I'm not alone, Jennifer Lawrence is
speaking for me. And it said to people, like, you should reconsider what you consider to be okay.
And I think that we're going to see those norms shift. And it's going to take place
with a lot of big conversations, as well as a lot of smaller conversations. Yeah. And that's so important. And it's something that absolutely needs to happen.
And this accountability needs to start being something that we see a lot more of,
both on the level of the people who make these images, but as we were talking about
the companies that are making the tools that allow them to do it in the first place as well.
Kat, this is such an important issue and you've given us so much insight into understanding the broader ramifications of it. Thank you so much for taking the time.
Thank you for having me and for giving a platform to this issue.
Kat Tenbarge is a tech and culture reporter at NBC News. Tech Won't Save Us is made in
partnership with The Nation magazine and is hosted by me, Paris Marx. Production is by
Eric Wickham and transcripts are by Brigitte Pawliw-Fry. Tech Won't Save Us relies on the support of listeners like you to keep providing
critical perspectives on the tech industry. You can join hundreds of other supporters by going
to patreon.com slash tech won't save us and making a pledge of your own. Thanks for listening and
make sure to come back next week. Thank you.