Today, Explained - The Deep Fake
Episode Date: March 1, 2018
There's a new kind of algorithm that allows you to take a video of one person and map the face of another person onto his or her body. Not surprisingly, it's being used to map celebrities' faces... onto the bodies of porn stars having sex. Vox's Aja Romano tells Sean Rameswaram how "deepfakes" are spreading across the internet. Plus, computer scientist Peter Eckersley explores how the same technology could tear our society apart in bigger ways.
Transcript
Hey, happy March! You've heard of March Madness. How about a March mattress? The weather's
turning. Your bed is yearning. Check out Mattress Firm. They're open all month, except at
nighttime because they're big on sleep. But their website never closes at all. Head to
mattressfirm.com slash podcast to learn how you can improve your sleep.
This is Today Explained.
I'm Sean Rameswaram.
There's this crazy thing happening online right now that could make you question whether what you're hearing, whether what you're seeing, is real.
It could affect who you trust, it could affect elections, but right now, it's mostly affecting porn.
It kind of goes back to this long-standing rule of the internet. If it exists, there's porn of it.
Now we know that even if it doesn't exist, there's porn of it.
This is Aja Romano.
I am an internet culture reporter for Vox.
Aja's been writing about algorithms lately.
The ability to build your own predictive algorithm has been there for years.
Predictive algorithms take some stuff, and with that stuff, they can imagine totally new stuff.
Recently, people have been using these predictive algorithms to make fake videos of real people.
The algorithm would learn what you look like based on all of the photos that I give it, and then predict what your face would look like doing something else that I feed it. So if I give it a video of, I don't know, someone chopping wood, it would then be able to say, this is what he would look like chopping wood. Does that make sense?
No, because why would anyone want to watch a fake video of a podcast host chopping wood? But because there are a lot of pervy dudes on the internet, take one guess what they've used these algorithms for.
Porn?
Porn.
Porn?
Yeah, porn.
So then, on September 30th of last year,
a Reddit user named, or using the handle deepfakes,
posted two celeb fakes.
And he basically posted this algorithmically generated image of Maisie Williams' face spliced onto the body of a porn star.
Maisie Williams, for those who don't know, is an actress from Game of Thrones.
A girl is Arya Stark of Winterfell.
And I'm going home.
This was a computer neural network
essentially learning what Maisie Williams'
face looks like and then
using what it learned
quote unquote to predict what her face
would look like mapped onto the features
of a real porn star.
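[Editor's note, for the technically inclined: what Aja describes is, roughly, the shared-encoder, two-decoder autoencoder idea that face-swapping tools of this kind are generally built on. The sketch below is not the code the Reddit user released; it's a minimal, illustrative PyTorch version of the concept, with every layer size and name chosen purely for the example.]

# A minimal sketch of the face-swap autoencoder idea: one shared encoder
# learns general facial structure from many photos of two people, and a
# separate decoder per person reconstructs each specific face. Swapping a
# face means encoding person A's photo and decoding it with B's decoder.
# Illustrative only; assumes PyTorch, with made-up layer widths.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(128, 256, 5, stride=2, padding=2), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a 64x64 face image from the shared latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # trained only on person A's face crops
decoder_b = Decoder()  # trained only on person B's face crops

def face_swap(face_of_a):
    """Encode a photo of person A, then render it with person B's decoder."""
    with torch.no_grad():
        return decoder_b(encoder(face_of_a))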
Enter Deepfakes.
That's the name of a Reddit user, but also the name of a new genre of porn with famous faces realistically mapped onto the bodies of porn stars.
So the thing about Reddit, if you're not
familiar with Reddit, is that A, a lot of dudes are on Reddit, and B, it has a history of serving
as a distribution center for a lot of this stuff that's really dicey. Like in 2014, it was
essentially the main distribution center for
the famous Jennifer Lawrence leak of nudes. Which were real. Yes, which were real, which were
basically actual leaks of actual nudes, not just of her, but of a lot of famous women. Reddit
eventually stepped in and shut down all these distribution centers. No more stolen photos of
naked celebrities. But late last year, DeepFakes, this anonymous Reddit
user, came up with something Reddit
hadn't banned yet, and it
was original work in a
totally creepy way.
What he did was he went
ahead and started his own sub
forum on Reddit also called
DeepFakes. Named after himself.
Named after himself. Yes.
So then there was a whole subforum devoted to this, and you had tons of people trying to do this on their own. And eventually he released the code, and when he released the code, he basically gave everyone the magic tools to do this themselves without having to fumble and do their own guesswork, because he'd already done the guesswork.
Interestingly, when he made the post releasing the code, the image that he used as an example was Nick Cage's face being mapped onto Donald Trump's face.
Interesting.
Yes. And like showed them meeting in the middle, essentially, in this terrible like Trump-Cage hybrid.
And Nick Cage because he was the star of Face/Off.
You want to take his face?
Yes.
His face.
Or Nick Cage because...
I think just because he's a perpetual internet meme.
Because the internet loves Nick Cage.
Yes, exactly.
Got it.
So between October and January, when Vice noticed it,
you had all of these people doing these incredibly, increasingly creepy things with,
again, all women, all celebrities, all of it without consent. That
should go without saying. So what is the legality here? Is there an existing law that applies to
this where this manipulation is obviously illegal? Or is this something new altogether because it's
being made by computers and not even people? Well, it's new to us, but it's not necessarily
new to the courts because, again, algorithms have been around for a while.
Questions about the legality of using algorithms to remix existing samples and so forth have been around for a while.
Okay.
In America, under U.S. copyright law, you can pretty much remix anything as long as it's, quote unquote, transformative and as long as it's not essentially infringing directly on the profit of the original source material.
When I told you about the Nick Cage and Donald Trump thing, that would be considered a remix
because they are both celebrities, they're both public figures, and this could be parody
under US copyright.
Okay.
But when you factor in the porn, what's happening is that you have a situation where, A, someone's
consent is being violated.
Okay.
Two people's consent is being violated. But also, the porn market is directly being infringed upon, because this is essentially not transforming the original work. It's meant to replace the original work.
You're saying this is still porn, and it's like a real violation of the porn, in a way?
Right, exactly. Like, you didn't change it substantially enough to make it a new thing.
It's sort of incomprehensibly not legally a violation of the celebrity, but it is a violation of the porn you stole to make the deepfake.
Right, exactly. Because it's really not her image that is getting violated.
Because what you're doing is you're actually taking thousands or hundreds of images of her and putting them all together to teach this computer.
So when the computer actually generates her face, it's generating something new,
but it's generating something new onto an existing source,
and it's that source that's getting replaced.
This sounds a lot like remixes. Like, I don't know, taking M.I.A.'s "Paper Planes" and chopping it up and making it sound new again.
But even beyond the infringement is the ethical quandary, which is that this is totally non-consensual and awful.
Reddit has finally stepped in and said, OK, we're going to ban the deepfakes subreddit.
Yeah.
The user is still there.
He can do whatever he wants.
They basically updated their content policy to put these videos and these photos under the category of involuntary porn.
I think it's a category that's kind of unique to Reddit, but it covers a couple of facets, including revenge porn and including the leak of the nudes and so forth.
I mean, it's safe to assume that any celebrity would feel deeply violated by this.
Right. And I think the use of the phrase involuntary is really crucial here because it covers a range of sins that all have to do with people taking your image out of your own hands, whether it was originally created by you or not.
I want to play you just another piece of music right now.
This song is called Total Entertainment Forever, and it's by Father John Misty.
Bedding Taylor Swift every night inside the Oculus Rift, after Mr. and the missus finished dinner and the dishes...
All right, so he said, bedding Taylor Swift every night inside the Oculus Rift. Father John Misty making, uh, maybe a provocative point about where our entertainment culture is going, right? About having sex with celebrities in virtual reality, which I guess is maybe like the next place this kind of thing is going to take us. This was shocking for a second, until it's not, and then the next shocking thing might be having sex with a celebrity in virtual reality, until we get used to that. Is there a line, or do we always just get used to the next technological development of perversion and violation of women's bodies and images?
Well, if we keep saying the line is here, and then crossing the line, and this of course gets into all kinds of political ramifications and so forth, and then saying, oh, we're used to this now, let's get used to the next thing.
Really, we sort of leave ourselves open to the possibility that the only thing that can really provide some sort of moral absolute is the technology itself. When you have the ability to
change someone's face and make it look as though someone is doing and saying something that they're
not, you open yourself up to all kinds of new waves of fake news and the spread of fake information.
Coming up, harnessing the exact same technology to make the president say whatever you want.
This is Today Explained. Expert Mattress Firm. All different kinds of mattress.
Waiting there for you. That was a mattress haiku about how Mattress Firm wants to help you.
The experts at Mattress Firm got you covered with mattresses, obviously, but maybe less obviously.
They've got you covered with headboards, adjustable bases, sheets, and bedroom decor.
Get to know your local Mattress Firm. Here's a conversation starter. They consider themselves
America's neighborhood mattress store.
Mattress Firm can help you stretch your budget a little further when you're looking for ways to improve your sleep.
Go to mattressfirm.com slash podcast to see their latest deals.
Mattress Firm offers a 120-night sleep trial to ensure perfection and a 120-night low price guarantee,
so, you know, you paid the perfect price. Again, go to mattressfirm.com slash podcast to learn how your sleeping could be improved.
This is Today Explained. I'm Sean Rameswaram.
North Korea is a rogue nation which has become a great threat
and embarrassment to China,
which is trying to help, but with little success.
The president never said that.
It sounds like him. He tweeted it, but it's not his voice.
It's a recording generated by an algorithm.
Here's another one.
North Korea has conducted a major nuclear test. Their words and actions continue to be very hostile and dangerous to the United States. Now, as a radio person, I can sort of tell it's fake,
but I'm not sure if my uncle who always forwards me garbage news from trashy sites could.
And what if the technology got a little better? Could I even still tell the difference?
It's sort of scary.
So my name is Peter Eckersley.
I'm the chief computer scientist at the Electronic Frontier Foundation.
Peter says yes, scary, but also maybe not.
This is a technology that could be either good or bad, depending on how it's used.
Peter and his computer science network just came out with a report this week
on the malicious use of artificial intelligence,
like deepfakes.
His deep take?
We need the researchers who are making these things
to do a better job of thinking carefully
about how to put their thumbs on the scale
as they design things
to ensure the beneficial applications
outweigh the problems.
So how do they do that?
This guy who made the deepfakes, he just said, hey, look what I made up.
And then he just shared the algorithm and now people are still making them everywhere.
So what could he have done differently?
And what would you have other creators and coders and researchers do?
Well, I think one thing he thought he was doing is warning everyone that this technology was out there. And in a case like that, that actually may not be totally crazy
because, of course, there are some people who are going to do this
and tell everyone, which promotes a little bit of chaos,
but at least we get to have this conversation.
There are other people who might get a hold of this technology
and then wait to use it to intervene in a
political campaign with no warning. And then we'd be having the conversation in retrospect,
which is a much worse place to be. Like the Russians, right? The Russians could do that.
Exactly. So I think telling people isn't necessarily a bad action. As a researcher,
you should always think carefully before you do that. And we're calling for a culture of people doing that a little more. I think the other thing that's important is if
you're releasing something like this, it's better to release something where you can tell it's a
fake. And I think with these initial examples, at least, when you listen to that audio,
you watch that video, you can tell it's not quite the real thing.
Yeah. But I mean, you and I can, but not everyone can, right?
That's right.
And one thing we're learning about the current US media landscape is that having experts
be able to tell the difference isn't working all of the time for us to be able to tell
the difference between truth and fabrication.
And I think that's a deeper political problem that people on both sides of American politics
need to find ways to fix.
How do we have productive conversations where we might not agree on what needs to be done, but we at least
have some path to agreeing on some facts and some evidence from both sides of the aisle?
What can outlets like Facebook and Twitter and Google and Reddit and Instagram do to ensure that fake videos and fake audio,
especially potentially politically damaging stuff,
doesn't spread as fast or at all?
Well, a few ideas.
I mean, the task that those companies are going to have is going to be complicated.
And the first and most important thing is that they avoid censorship
in response to these problems.
Because I think a lot of people have the instinct to say,
oh, we need to take all the fake stuff down, censor it.
And the problem there is, of course, it's sometimes hard to tell when things are fake.
And you can wind up doing more harm than good in some cases
if you dive in there with censorship.
So instead, what I think would be really constructive
is new user interfaces for people using those platforms
where you have a bunch of sliders that say,
look, how much are we going to weight the credibility of the checking that's gone into a news story? When pieces are essentially opinion pieces, can we review them to see how manipulative, essentially, they are in their style?
And then, instead of trying to censor things based on that, just give users an option.
How much of different types of content do they want to see?
There's a slider for those things.
There can be some defaults, but we really want everyone to take ownership.
In the same way that maybe you want to choose
how many of your friends' baby photos you see
or how much news you want to see at all,
you'd have some opportunity to get your own perspective double-checked
and the other side's perspective double-checked.
I think that's the big, hard design problem
that the technology companies face here.
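[Editor's note: a minimal sketch of the "sliders" idea Eckersley describes, where nothing is censored outright and each user weights how much different signals count in their own feed ranking. All field names, scores, and weights here are hypothetical; this is not any platform's actual API.]

# User-adjustable sliders re-rank a feed instead of removing content.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    fact_check_score: float   # 0..1, how well sourced/verified the story is
    opinion_score: float      # 0..1, how much the piece is opinion vs. reporting
    manipulativeness: float   # 0..1, how manipulative the style is judged to be

@dataclass
class SliderSettings:
    """Each slider is a user-controlled weight between 0 and 1."""
    weight_fact_checking: float = 0.7
    weight_opinion: float = 0.3
    weight_manipulation_penalty: float = 0.8

def rank_score(post: Post, sliders: SliderSettings) -> float:
    """Higher score = shown more prominently; nothing is deleted outright."""
    return (
        sliders.weight_fact_checking * post.fact_check_score
        - sliders.weight_opinion * post.opinion_score
        - sliders.weight_manipulation_penalty * post.manipulativeness
    )

feed = [
    Post("Well-sourced investigation", 0.9, 0.1, 0.1),
    Post("Anonymous viral claim", 0.1, 0.6, 0.9),
]
sliders = SliderSettings()
for post in sorted(feed, key=lambda p: rank_score(p, sliders), reverse=True):
    print(f"{rank_score(post, sliders):+.2f}  {post.title}")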
That sounds pretty reasonable, but I feel like so far so bad
with Facebook fixing this stuff themselves.
Is there a way predictive algorithms could be used to help the cause?
I don't know, like AI fact-checking or AI policing of fake audio and video?
I think a lot of people have that fantasy,
and it's a long-term vision that you can imagine AI being able to do this predictively.
The reality at the moment is that AI reads at about a second-grade level.
So it's pretty hard to get current machine learning algorithms
to do really critical reading of subtle propaganda.
So it can help a little bit if used cleverly,
but there isn't going to be any shortcut around
having to have huge numbers of humans with, essentially, journalists' training to double-check sources and check facts.
And a lot of this is going to have to be continuous.
You know, it could be done using new technology in very creative ways.
We could have APIs for fact checking, scalable databases where people can look at things initially and give them a quick read
and then keep updating as more information comes in about stories.
I think trying to build that technology would actually be very good for us
as a society and a civilization in an era where we're struggling to tell truth from fabrication.
We need new types of institutions to do a better job of that task for us,
transparently, without censorship.
But that's what they were saying over at Skynet too,
and it did not work out for them, Peter.
Fortunately, I don't think we're going to exactly be living in a Skynet world.
The future is going to be far stranger,
both more beautiful and potentially
more dangerous than we expect.
But if we plan in advance, think carefully about what we're building, I think we can
actually make a more awesome and excellent future with AI.
Peter Eckersley is the Chief Computer Scientist for the Electronic Frontier Foundation.
I'm Sean Rameswaram. This is Today Explained.
Folks, let me be clear. This is Barack Obama.
Follow Today Explained on Twitter at today underscore explained.
You know what? I just went to mattressfirm.com and saw that they're having a big price drop right now,
which, I mean, as a consumer is exactly what you want to hear from America's neighborhood mattress store, right?
Don't sleep on a good deal, friends.
You can go to mattressfirm.com slash podcast right now to learn how you can improve your sleep.