Front Burner - The Deepfaking of Anthony Bourdain
Episode Date: July 23, 2021
Deepfake technology — the use of algorithms to create realistic copies of people in video, audio, or photography — is once again in the spotlight. That's after Morgan Neville's documentary Roadrunner used the technology to copy the voice of the late Anthony Bourdain. MIT Technology Review's senior A.I. editor, Karen Hao, breaks down the risks for how we perceive our reality.
Transcript
In the Dragon's Den, a simple pitch can lead to a life-changing connection.
Watch new episodes of Dragon's Den free on CBC Gem. Brought to you in part by National Angel
Capital Organization, empowering Canada's entrepreneurs through angel investment and
industry connections. This is a CBC Podcast.
Hi, I'm Jamie Poisson.
When Anthony Bourdain fans heard the food rock star would be back in our lives for a few moments,
with his story being shared in this new documentary, Roadrunner, it was pretty exciting.
So many people were shocked when he died in 2018, and this was a chance to see him again.
Some of you might ask, how is this food related? F*** if I know.
But that was quickly overshadowed with questions over whether the filmmaker,
Morgan Neville, went too far here. Roadrunner was released last week and has faced a lot of criticism since, specifically for a moment in the film where
Anthony Bourdain reads his own email. You were successful and I'm successful and I'm wondering,
are you happy? That's because he didn't read the email, at least not out loud. That moment and two
others in the documentary were the product of an AI technology known as a deepfake, which uses algorithms to make a synthetic copy of someone.
And if you just watched the movie, you might not have even noticed.
The audience only discovered that because he gave an interview to Helen Rosner at The New Yorker,
and Helen asked him, how did you get this audio of this email?
And then he revealed it in the Q&A.
That's Karen Hao. She's a senior editor at the MIT Technology Review, where she specializes in AI and deepfakes.
In today's episode, she's joining me to talk about the ethics of this technology and the risks for how we perceive our own reality.
Hey, Karen. Thanks so much for taking the time today.
Hi, Jamie. Thank you so much for having me.
What was your reaction when you first heard, like, essentially that his voice was being faked in this documentary?
And then I guess the documentary itself wasn't very transparent about it.
My first reaction was honestly, oh, wow, we now finally have an example of this technology in the mainstream,
because I've been writing about it for a while. But a lot of people don't know that it exists,
and it isn't widely used. So I was like, wow, the thing that I've been writing about has finally made it into pop culture. But then my immediate second thought was, oh no, what has Neville done? This technology is just inherently controversial, and experts who've been studying it and people who have been writing about it for a long time have been talking about the ethics of it for quite a while. And whether or not he knew that, it was effectively ignored in the decisions that he made.
And so it sort of ended up introducing a really large chunk of the public to this technology in a pretty negative way.
Clearly, Bourdain didn't get a chance to actually consent to this.
Later on, Neville clarified that he had gotten consent from the Bourdain estate.
So you could argue maybe that that quiets things.
But I think people in general felt really put off by that.
And then the second thing is that he didn't disclose it.
He wasn't up front with it.
We would have never learned about it had Helen Rosner not asked the question.
So it makes people start to doubt: what are other things that I've heard that I thought were real, but were actually fake, and no one thought to tell me?
Where did deepfake technology actually originate? Like, do we know the first instance of it?
So we know that deepfakes came from a set of algorithms that are known as generative adversarial networks.
And that category of algorithms was actually invented by this guy named Ian Goodfellow.
And at the time, he was actually just trying to figure out how to make computers better at generating images.
And the folktale around him coming up with this idea was that he was at a bar, sketching out some ideas on a napkin about how to create this algorithm, and then it ended up being this really successful technology, at least for his particular use case. But then what happened was, the artificial intelligence community open-sources most of its research. And so someone took that open source code and started using it in pornography, and started posting that pornography on Reddit. And that's actually why we call them deepfakes today: because that Reddit user's username was deepfakes.
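For context on how that algorithm family works: a generative adversarial network trains two models against each other, a generator that fabricates samples and a discriminator that tries to tell real from fake. Here is a minimal, illustrative sketch in PyTorch, using a toy one-dimensional Gaussian as the "real" data; all layer sizes and hyperparameters are invented for the example, not taken from any actual deepfake system.

```python
# Minimal GAN sketch: the generator learns to mimic samples from
# N(3, 0.5); the discriminator learns to tell real samples from fakes.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                  nn.Linear(32, 1), nn.Sigmoid())                 # sample -> P(real)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data
    fake = G(torch.randn(64, 8))            # generated data

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call fakes real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Deepfake tools apply this same adversarial tug-of-war to faces and voices at a much larger scale.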
Oh, interesting.
So for a while, deepfake technology had a pretty bad reputation. But over time, it's gotten a little bit more professionalized, at least with deepfake video, which is a bit more advanced than deepfake audio. One of the more exciting uses has been
this documentary that came out last year called Welcome to Chechnya.
I wanted to ask you about the alleged abduction and torture of gay men in the Republic.
Where deepfake faces were used to conceal the identities of the whistleblowers portrayed in the film, because they were trying to tell their story, but at great risk in telling it.
And the director made the decision to use this technology
to allow their authenticity and their expressions
to be retained, but to have them portrayed
with a different face.
What I learned from them that I did not yet understand
was that they knew that they were being hunted
around the globe.
And if it were known that they were alive,
that even by members of their own family,
that that would put their lives in risk no matter where they landed,
even if they landed in Toronto or in Paris or Berlin.
And so I promised them that I would find some way to cover them up.
So in that case, the documentary filmmaker chose to disclose it to the audience
and explain why he had made that choice.
And then with every character that had the deepfaked face, he actually left a little bit of a halo or shimmering effect around their face, as a visual cue to people that, hey, what you're seeing of this person is true.
It's just not their actual face.
You know, I was thinking about some other examples where this has been used recently and there's that show, Sassy Justice.
Sassy Justice.
Made by the South Park creators.
I don't know if you've seen it.
The lead character looks like Donald Trump and he has his wig on.
And Al Gore is on there and Ivanka Trump's on there.
Mr. Gore, let me start by asking,
why are politicians in all of Washington so concerned about deep fakes?
Well, what has people in the government really scared
is that deep fakes can put words in people's mouths.
Although, I mean, it's very obvious when you're watching it
that, like, this isn't really them.
Right. I think there are different ways to...
I mean, the technology is so new
that there aren't really norms yet around disclosure,
but there are clearly different ways that you could disclose things without just flashing
words on a screen being like, this is fake.
Like Sassy Justice, there have also been lots of music videos where people will spoof a music video and swap someone's face in. And it's just clearly, like, a man's face on a woman, or it's Donald Trump's face not on his body. So there are very clear visual cues telling the audience: suspend your disbelief, this is sort of artistic liberty. The norms are still being figured out, but there have certainly been projects that have already succeeded in striking the right balance.
Yeah. And just to talk about just one more example before we move on.
And I guess on your note about striking the right balance, you know,
I know that there was also a ton of controversy the other year around James Dean essentially being posthumously resurrected to be cast in this sort of Vietnam-era drama. And people did not like that either.
You're tearing me apart. What?
You say one thing, he says another and everybody changes back again.
Yeah, it's interesting. Now that the film industry actually knows this technology can be utilized, there are actors that are, while they're living, getting the chance to opt in to being deepfaked once they pass away. So it is interesting in that sense that the questions around consent might start to change as people have more opportunities to consent. And it's actually not just posthumous anymore. There are some actors that are starting to consider licensing out their identity to be in more movies than they can physically be in. But this is at very, very early stages.
I'm not actually sure if anyone's doing it yet, but there's certainly a lot of talk
about that possibility.
Wow.
Just like imagine like The Rock just in like even more movies.
If you smell what The Rock is cooking!
Watch new episodes of Dragon's Den free on CBC Gem. Brought to you in part by National Angel Capital Organization.
Empowering Canada's entrepreneurs through angel investment and industry connections.
Hi, it's Ramit Sethi here.
You may have seen my money show on Netflix.
I've been talking about money for 20 years.
I've talked to millions of people and I have some startling numbers to share with you.
Did you know that of the people I speak to, 50% of them do not know
their own household income? That's not a typo, 50%. That's because money is confusing. In my
new book and podcast, Money for Couples, I help you and your partner create a financial vision
together. To listen to this podcast, just search for Money for Couples.
Okay, so I want to talk to you about some of the other consequences of this.
So we've seen moments where elements of this technology have been politicized in the U.S.
So these examples may not be deepfakes exactly,
but I remember when Democratic House Speaker Nancy Pelosi came off as having slurred speech when some people slowed down a video of her.
And, you know, the implication was that she was drunk.
We want to give this president the opportunity to do something historic for our country.
And I also remember when reporter Jim Acosta from CNN got his press pass revoked at the White House, and then this video was manipulated to make it look like he hit an intern, when he did not in fact do that.
That's not an invasion.
That's enough.
I was going to ask one of the other folks.
That's enough.
Pardon me, ma'am.
Excuse me.
That's enough.
And these were like not very sophisticated.
And so how worried are you that this will keep happening and will get worse and ultimately be like way more sophisticated?
Yeah. So I think this is the biggest concern is
will we enter a future where we can no longer distinguish real from fake? And then how will
that end up affecting our trust in media and in institutions? In the Pelosi and Jim Acosta case,
both of those are what experts call cheap fakes, kind of riffing off the deepfakes name, where it's not actually artificial intelligence. With the Nancy Pelosi video, someone literally just slowed it down in a video editor to half the speed. And that has been around for a very long time. Video manipulation, image manipulation, and audio manipulation are all things we've seen before, sometimes in professionalized settings, like with Photoshop or Adobe Premiere, whatever it is.
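To underline how low the bar is for a cheap fake: the Pelosi-style slowdown needs no AI at all. A hypothetical sketch with OpenCV (filenames are placeholders); writing the original frames out at half the frame rate plays the clip back at half speed.

```python
# Re-time a clip to half speed by halving the output frame rate.
# No machine learning involved; this is the whole "cheap fake".
# Note: OpenCV handles video frames only; audio is not carried over.
import cv2

src = cv2.VideoCapture("speech.mp4")
fps = src.get(cv2.CAP_PROP_FPS)
size = (int(src.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(src.get(cv2.CAP_PROP_FRAME_HEIGHT)))

out = cv2.VideoWriter("speech_slowed.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps / 2, size)
while True:
    ok, frame = src.read()
    if not ok:
        break
    out.write(frame)  # same frames, half the playback rate
src.release()
out.release()
```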
And so in some ways, I don't think people should be thinking of the worry around deepfakes as a fundamentally new thing that we've never faced before in society. We have faced some kind of precursor to deepfakes before, and we've been able to handle that. But there are legitimate concerns with deepfakes just being a lot more persuasive. And there are two main ones that experts talk about a lot, especially in the context of political deepfakes.
One is that people will just consume a lot more misinformation, because fake things will be perceived as real. And the other one is that people will just completely stop trusting anything: even with real things, they will start to wonder if they're fake.
I think there's a really good example of that, right? In Central Africa, from a few years ago. Just the looming threat, the possibility that something could be deepfaked, led to this massive instability. And so can you tell me that story?
Yeah, so this is the country of Gabon. And at the time,
the citizens of Gabon were already very upset and distrusting of their government.
And there was a period of time when the president disappeared.
And supposedly, we now know that he actually had suffered a stroke, and that's why he disappeared from the public eye.
But at the time, the public thought he had died and that the government was just trying to cover up his death in order to maintain power.
And so on New Year's, he released his annual address, where his face, visibly because of the stroke, looked odd.
It was his first public statement since falling ill, and he looked different. A neurologist told The Post his appearance appeared consistent with someone who had had a stroke or brain injury, or even cosmetic procedures.
And people immediately thought, oh my God, is this a deepfake? This is just confirming how far our government is willing to go to retain power by continuing to fake his life.
And it ultimately was one of the factors that led to a military coup.
On the morning of January 7th, 2019, gunshots can be heard outside the national radio station.
Government saying it has arrested four military officers
believed to be behind an attempted coup.
The government says two of the plotters have been killed
and several others are under arrest.
And in these kinds of environments,
in these countries that do not have robust institutions,
do not have a lot of trust between the media, the public and governments, this type of risk is really high.
Yeah, yeah.
It's scary, like the ability that these things have to just like completely undermine our
democracies.
I want to talk a little bit more about what you think could be
done here. But first, I just wonder if we could talk about just a few other examples of concerns
that you might have. You know, I would imagine one might be, you mentioned pornography earlier, like, what about revenge porn, right? Like a jilted ex could post something online of a woman who had not given her consent, or something like this.
Yeah, I'm so glad you brought that up, because so much of the conversation around deepfakes is focused on political deepfakes. But political deepfakes are actually quite a small fraction of deepfakes. To be honest, I don't know that there are even more than a dozen examples at the moment of political deepfakes that have actually caused real damage. But there are so many, so many more cases of deepfakes being used to abuse women. And I've spoken with both a woman who has suffered this herself, as well as social workers who work with domestic violence survivors, and they are deeply concerned that revenge porn will come to a whole new level. Because at this point, there doesn't even need to be existing material for an abusive partner to create pornographic material to shame and control their partner. And so it is a very huge concern that I'm hoping regulators will take seriously when they think about how to actually address the consequences around deepfakes.
What could regulators do?
What tools do you think are available to deal with this issue,
which you did say, like, we have been dealing with in some capacity for actually a pretty long
time?
Yeah, so to broaden the question a little bit, I think there are three players that are particularly important in this. One is the regulators. One is the technologists who are actually creating these methods for deepfaking things. And one is the platforms that are distributing the possibly manipulated images or audio. The goal in general that experts talk about is this: right now the incentives are really misaligned, such that it's super cheap to create deepfakes, there are zero consequences, and it's super expensive to then catch them. The incentives make it such that you can really rapidly proliferate a lot of deepfakes and nothing is really done about it. And so the goal is to use different levers from each of these three players to shift the incentives, so that there are much higher costs to making deepfakes and much lower rewards as well.
So technologists who are actually building the technology to deepfake things should also be putting just as much effort into making their algorithms detectable. That could mean that every time they release an algorithm that can make deepfakes, they also release a sort of antidote to that algorithm. It could also mean that their algorithm always leaves a disclosure on its output: you make a deepfake image with this tool, and the image comes out with a disclosure on it. Some kind of fingerprint that image forensics experts know to look out for, or that the average person can see and look out for.
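As one illustration of what a built-in disclosure could look like, here is a toy sketch, assuming a hypothetical generator that stamps every output image: a fixed marker written into the least significant bits of the blue channel, plus the matching detector. Real provenance schemes (robust watermarks, signed metadata) are far more resilient than this; LSB marks do not survive recompression.

```python
# Toy "fingerprint": hide a fixed marker in the low bits of an image
# and detect it later. Fragile and illustrative only.
import numpy as np
from PIL import Image

TAG = np.unpackbits(np.frombuffer(b"SYNTHETIC", dtype=np.uint8))  # 72 marker bits

def stamp(img: Image.Image) -> Image.Image:
    """Write the marker into the low bit of the first 72 blue-channel pixels."""
    px = np.array(img.convert("RGB"))
    blue = px[..., 2].ravel()                 # copy (non-contiguous slice)
    blue[: TAG.size] = (blue[: TAG.size] & 0xFE) | TAG
    px[..., 2] = blue.reshape(px.shape[:2])
    return Image.fromarray(px)

def is_stamped(img: Image.Image) -> bool:
    """Detector: are the marker bits present where the stamp put them?"""
    px = np.array(img.convert("RGB"))
    bits = px[..., 2].ravel()[: TAG.size] & 1
    return bool(np.array_equal(bits, TAG))
```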
For platforms, there are a certain set of really common deepfake and cheapfake methods. So can we build detectors for those into the distribution channels, so that every single image, video, and audio clip that's uploaded is automatically checked and then labeled?
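Sketching the platform side of that idea: a hypothetical upload handler that runs each incoming file through a bank of detectors and attaches a label before anything is distributed. The registry and detector names here are invented for illustration; a wrapper around is_stamped from the sketch above could be one entry.

```python
# Hypothetical upload-screening pipeline: run every upload through
# registered detectors and label it with the outcome.
from typing import Callable, Dict

Detector = Callable[[bytes], bool]

def screen_upload(media: bytes, detectors: Dict[str, Detector]) -> str:
    """Return a label describing what, if anything, flagged the upload."""
    for name, detect in detectors.items():
        if detect(media):
            return f"flagged by {name}"
    return "no known manipulation detected"

# Example registry (all entries illustrative):
# label = screen_upload(video_bytes, {
#     "lsb-fingerprint": has_lsb_stamp,
#     "face-swap-model": face_swap_detector,
# })
```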
And then for policymakers,
there should be consequences for the people who still go rogue and try to generate these deepfakes.
And all of these different things working together
can really hopefully help mitigate the issue.
Since we don't have these things right now, is there anything I can do as an individual, as I'm just sort of out there on the internet looking at stuff, to, I don't know, be literate about this?
If you ever encounter something where you're not quite sure if it's real, the best approach isn't necessarily to keep scrutinizing that thing and trying to find the wonkiness in the image or the wonkiness in the voice. The best thing to do is look at the other signals around it. Who posted it? Was it a reliable source? Are other reliable sources posting it too? Do a reverse image search: is this an image that's coming up elsewhere? Just in general, you know, when we read the news, we put on a critical reading lens and try to figure out, okay, what information seems trustworthy and what isn't. It's the same with deepfakes. You want to use all of those same techniques.
Okay. Karen, this was super interesting. Thank you so much for coming on.
Thank you so much for having me.
All right. So before we let you go today, Toronto's interim police chief is acknowledging some pretty big mistakes that the police force made when they investigated the death of transgender woman Alloura Wells.
Journalist Justin Ling has been investigating Alloura's murder for the CBC podcast Uncover the Village season two.
He learned that police failed to notify the homicide squad or file certain paperwork.
But perhaps the most shocking thing is that they failed to talk to a person of interest about the case, her boyfriend,
when he was sitting in a jail cell for an unrelated crime. Interim Chief James Ramer
said neglect of duty charges would have been filed against one officer had he not retired.
You can hear season two of Uncover the Village wherever you listen to podcasts.
That is all for this week. Front Burner is brought to you by CBC News and CBC Podcasts.
The show is produced this week by Elaine Chao, Imogen Burchard, Katie Toth,
Allie Janes, Simi Bassey, and Sundas Noor.
Our sound design was by Derek Vanderwyk and Mackenzie Cameron.
Our music is by Joseph Shabason of Boombox Sound.
The executive producer of Front Burner is Nick McCabe-Locos,
and I'm Jamie Poisson.
I'm actually off on vacation for the next few weeks, but you guys are in very, very good hands. Elamin Abdelmahmoud will be here, Jean-Montpetit, Anthony Narestan. So I hope that you have a wonderful, wonderful August, and I will see you in September.
For more CBC Podcasts, go to cbc.ca/podcasts.