The Current - How disinformation makes a natural disaster more dangerous
Episode Date: October 9, 2024
Hurricane season has collided with an election campaign in the U.S., and politicians like Donald Trump are using the moment to spread disinformation about relief efforts. The CBC's Nora Young helps us... disentangle the facts from the deepfakes.
Transcript
In 2017, it felt like drugs were everywhere in the news,
so I started a podcast called On Drugs.
We covered a lot of ground over two seasons,
but there are still so many more stories to tell.
I'm Geoff Turner, and I'm back with Season 3 of On Drugs.
And this time, it's going to get personal.
I don't know who Sober Jeff is.
I don't even know if I like that guy.
On Drugs is available now wherever you get your podcasts.
This is a CBC Podcast.
Hello, I'm Matt Galloway, and this is The Current Podcast.
Misinformation is swirling around Hurricane Helene and the response of the U.S. Federal Emergency Management Agency, or FEMA.
It has a tremendous impact on the comfort level of our own employees to be able to go out there,
but it's also demoralizing to all of the first responders that have been out there in their
communities helping people. We need to make sure we're getting help to the people who need it.
That's FEMA Administrator Deanne Criswell responding to former President Donald Trump's false claims
that FEMA is taking money from disaster relief and using it to house migrants.
Here is Donald Trump's daughter-in-law, Lara Trump, defending the former president's claims on CNN.
You have migrants being housed in luxury hotels in New York City.
We have paid so much money from our tax dollars
into the crisis that didn't need to happen. We could redirect money to help people immediately
on the ground in North Carolina or in Florida, but we're probably going to have a situation
coming up in the next several days. That's a separate tranche of money.
This narrative around migrants is just one example of disinformation spreading since
Hurricane Helene struck, and there is growing concern that these falsehoods could put people in further danger as
more storms head toward the Florida coast. Nora Young is the former host of Spark here on CBC
Radio 1, now with the CBC's Visual Investigations Unit. They have been following this and other
stories. Nora, good morning. Good morning, Matt. Walk us through this disinformation that we're seeing around Hurricane Helene and the response by FEMA to that natural disaster.
Yeah, it really runs the gamut.
I mean, there are things that have sort of a grain of truth.
For example, that the amount of money that people who've lost their homes can get from FEMA tops out at $750, when in fact that's the initial amount to deal with people's immediate needs. But it also includes conspiratorial misinformation like what you referred to,
that the government has deliberately created Helene specifically to target Republican-leaning areas
or that it's deliberately destroying communities in order to leave space for lithium mining.
PBS is reporting that there have been Russian and Chinese-linked disinformation campaigns
sowing distrust around the government's response to the disaster. And all three of those are pretty
classic, right? The urban myth with a grain of truth, the conspiracy theories, and then the
disinformation campaigns deliberately designed to sow discord. Why does that disinformation spread
so quickly after a disaster like this? I mean, partly it's the rapidly changing situation. It's
a highly emotional story. And of course, there can be a real shortage of actual concrete information in the early going in a disaster like this. And then, of course, nowadays, what are we doing? We're all looking on our phones for information, right?
Donald Trump, for example, has been openly stating some of these untruths. So they're making the leap into the actual political sphere, you know, like that the aid is specifically not
getting to Republican-leaning areas. Given that we're all looking at our phones when disaster
strikes, what are we supposed to do? How do we best verify the information that we're getting?
Yeah, I mean, in a fast-moving situation like this, the official outlets are going to be your
best bet, the closer you can get to the horse's mouth, as it were.
I mean, in this case, FEMA has specific information at their website about Helene, and they've also put together a rumour-debunking page.
And I think also remembering that in a case like this, you're probably going to find a lot of fake accounts on social media posing as official, and you're probably going to find a lot of misinformation.
We'll watch that.
In the meantime, you have been looking at another visual and auditory story. This is around how artificial intelligence and voice generation is leaping forward at a remarkable pace. I'm fascinated by this as somebody who talks for a living, but also as somebody who consumes a lot of sound. What's going on here?
Yeah, it turns out that much in the same way that we can use AI to generate fake video,
fake images, you can also do that for voice. So you can generate a realistic sounding voice. We're not talking about the old phone tree voice that you would hear that sounds very robotic; we're talking about very naturalistic sounding speech. So you can generate a realistic sounding voice, you know, just out of thin air, one that doesn't actually exist. You can use that to translate text into natural sounding speech, but the more worrying thing from a misinformation
point of view is the ability to use that same type of technology to reproduce
real people's voices. And, you know, as you said earlier, Matt, what could go wrong?
There's considerable concern that audio deepfakes could be a real problem in elections, for example.
Like back in July, there was a doctored campaign video where the deepfake voiceover appeared to be
Kamala Harris saying that Joe Biden was senile and that
she was the quote, ultimate diversity hire, all made up, all manufactured using audio deep fakes
and shared on X by Elon Musk. Can you explain, as you understand it, how this works? I have spent more time than I would care to admit over the past few weeks using this NotebookLM site, which is this thing put together by Google that allows you essentially to feed text into the service. And it comes out with what sounds like a podcast,
a conversation between two people talking about whatever it is that I said that they should talk
about. How does that work? Yeah, it's extraordinary. I have listened to the demo of NotebookLM,
and it is absolutely extraordinary how realistic it sounds.
I could not tell the difference between that and a real person's voice.
And you're not even giving the thing a script to read.
You're just giving it data, and it's turning it into a natural sounding conversation.
This is how quickly the technology is progressing.
I mean, deepfakes in general, whether we're talking about audio or visual deepfakes, they're based on data-driven machine learning. So a system's been trained on huge amounts of data, in this case, audio data. And then in the case of cloning the voice of a real person, it actually needs a sample of the person's voice to analyze and reproduce.
And the concerning thing, if you weren't already concerned, is that this technology is getting cheaper, it's getting faster, it's getting easier, and it's getting more realistic.
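To make that concrete, here is a minimal sketch of what cloning from a short sample can look like, assuming the open-source Coqui TTS library and its XTTS v2 model; this toolchain and the file names are illustrative assumptions on our part, not the specific systems discussed in this episode.

```python
# Minimal voice-cloning sketch, assuming the open-source Coqui TTS
# package (pip install TTS) and its XTTS v2 model. Illustrative only:
# this is not the specific system discussed in the episode, and the
# file paths are hypothetical.
from TTS.api import TTS

# Load a multilingual model that supports zero-shot voice cloning.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A short reference recording of the target speaker is all the model
# needs to imitate their timbre and cadence.
tts.tts_to_file(
    text="This sentence was never actually spoken by me.",
    speaker_wav="consenting_speaker_sample.wav",  # hypothetical path
    language="en",
    file_path="cloned_output.wav",
)
```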
I am concerned, by the way. You spoke with Hany Farid. He is a professor at the University of California, Berkeley,
one of the leading experts on digital forensics.
And here's what he said about the growth of this technology.
Maybe three years ago, a small group of researchers cloned Joe Rogan's voice.
To do that, first of all, they spent years and a lot of money,
and they needed eight hours of audio of Joe Rogan.
Then it was you needed a couple of minutes of audio.
Now it's about 20 to 30 seconds,
and you are hearing people talking about 5 to 10 seconds
is what you need to clone a voice,
which means if you have a voicemail saying,
you've reached Hany Farid's phone, please leave a message, done. You have my voice. Or if you've created a single TikTok video with your voice or a single YouTube video, or somebody simply records you in passing.
Five to 10 seconds of somebody's voice, Nora. What has happened to allow this technology? This is like Geoffrey Hinton's worst nightmare (he just won the Nobel Prize yesterday) seemingly come to real life. What has happened that allows this technology to progress so quickly? I mean, it's partly that it's in the nature of these kinds
of machine learning applications that they're iterative and they get better over time. I mean,
it's not just audio. We've seen extraordinary improvements in all kinds of synthetic media,
right? It's not that long ago that you could easily tell an image deepfake because, you know, the person had five hands
or whatever. That's not happening anymore. I also think that in the case of audio, it's,
you know, it's hard to make a deepfake video of someone's face that at least for now doesn't have
that sort of slightly uncanny valley, creepy feeling, but it seems to be a little bit easier with audio. And as Hany suggested, it's getting faster and easier all the time. Because it sounds close enough to somebody's voice that you get taken in. Yeah, exactly. Who do you think would be vulnerable to that sort of voice cloning? I mean, I think wherever there's a sufficient motivation for people to do it,
and politicians are an obvious example, making them sound like they're saying things that they
didn't say, as with that Kamala Harris voiceover that I mentioned, but also where there's a
sufficient strategic or profit reason to do so. I mean, Hany told me of one currently sitting U.S.
senator, chair of the Foreign Relations Committee, who was on a call with what he took to be a Ukrainian official, but was actually a video and audio deepfake.
And fortunately, that was discovered when the deepfake started behaving oddly.
But profit, too.
I mean, there's a confirmed case that I know of, of a U.K. engineering firm that was bilked out of $25 million based on a video call that used deepfake voices as well as video to impersonate
people from the company. There have been concerns that voice cloning could be used to pull off those
so-called grandparent scams, the premise being that someone gets a call impersonating their
grandchild saying they need money desperately. Our colleagues at Radio-Canada's Décrypteurs have
looked into cases where that was supposedly happening and not found evidence of it. They're
just using old-fashioned scam artist techniques. But the point is, it is getting so much easier
to do this sort of thing that with a sufficient motivation, it's easy to imagine anyone being
targeted. All you need is that fairly short sample of the person's voice.
So this goes back to what I said earlier, which is how good are we right now, do you think,
at separating real voices from voice clones?
I mean, this is not a good news story, Matt. I'm sorry. Nora! I don't make the rules, Matt. The detection tools are not very effective, as it turns out. And it does seem like a game of cat and mouse. Like, Hany told me that the goal, as he sees it,
isn't a sort of magic bullet that can tell you, yes, no, fake or real, but in making detection
tools that are good enough that it becomes more difficult and more expensive to make convincing audio deepfakes.
But Matt, I want to play you a clip of Sarah Barrington. She works with Hany Farid,
and here's her talking about the specific challenges of detecting these new deepfakes.
We used to see in our methods that we could detect a fake based on things like a cough,
which is really hard to simulate in a computer, or something really explainable like the pauses between the words
as someone is talking and their cadence. And now these things have seen so much data that they're
able to replicate that even with just a few seconds of your voice. So there really are very
few protections we can put in place to try and stop this from happening. And that to me is really it, it's that natural quality. It's not so much the pitch, it's the ability to put in those naturalistic sounding pauses, the coughs, all that stuff that makes this current round of deepfakes so convincing.
There are some interesting initiatives though in trying to combat this.
For instance, Sarah and Hany talked about techniques
to analyze audio to see if it came out of an actual physical body in an actual physical space
by things like reverberation, for example. That way you would know that it's not coming out of a machine, but that it was actually spoken by a person in a room.
Yes, exactly. That's where we're at in this game.
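As a toy illustration of that kind of physical cue (an assumption for illustration, not the actual forensic method from Farid's lab), the Schroeder backward integral gives a rough estimate of how quickly sound energy decays in a recording, which relates to room reverberation:

```python
# Toy reverberation cue: estimate how quickly energy decays in a clip
# using Schroeder backward integration. Illustrative only; not the
# forensic method described in the episode. Requires numpy and the
# soundfile package (pip install numpy soundfile).
import numpy as np
import soundfile as sf

def decay_time_seconds(path: str, drop_db: float = 20.0) -> float:
    """Seconds until remaining energy falls drop_db below its start."""
    signal, rate = sf.read(path)
    if signal.ndim > 1:
        signal = signal.mean(axis=1)  # mix down to mono
    energy = signal.astype(np.float64) ** 2
    # Schroeder integration: energy remaining from time t to the end.
    remaining = np.cumsum(energy[::-1])[::-1]
    level_db = 10.0 * np.log10(remaining / remaining[0] + 1e-12)
    below = np.nonzero(level_db <= -drop_db)[0]
    return below[0] / rate if below.size else len(signal) / rate

# A bone-dry synthetic voice with no room around it tends to show a
# different decay profile than speech captured by a real microphone
# in a real space.
print(decay_time_seconds("suspect_clip.wav"))  # hypothetical file
```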
There is, as you've said, a scammy element to this.
People are always going to try to figure out a way to make somebody part with their money or information.
There are other ways that we're seeing this technology used.
Have a listen to this.
That is not Justin Bieber.
It is an artist who goes by the handle FlowGPT.
A manipulation.
You can hear Bieber's voice, I guess, moving back and forth between English and Spanish.
Might be interesting if you wanted to, I suppose, expand your repertoire or the reach of your music.
But what are some of the other ways that we're seeing this kind of audio manipulation start to pop up? Yeah, I think it's important to remember that
there are creative applications for this type of technology. As we know, humans respond to voice,
right? We're storytellers. You talked about Google's NotebookLM, for example, that lets
you turn data into a podcast, because we know from the popularity of radio and podcasts that it's an effective way to absorb information.
We like that vocal interaction.
But there are also medical applications for this type of technology.
Hany and others have talked about its use for people
who are losing their ability to speak, for example.
You can actually get a sample of the person's voice
and it'll allow them to continue to speak.
So there's potentially very positive aspects to this type of technology.
I was thinking about education as I was playing with NotebookLM, that yes, it might put me out of a job, but it might also help somebody learn. If you absorb information better by listening, you could take a large chunk of information, put it into this, and it might come back in a way that is easier for you to absorb aurally rather than by reading it. Yeah, I think that's absolutely right.
And, you know, there are examples of this. I think it was a New York Times reporter who turned the text of articles about Waymo, the route-finding application, into a podcast.
And, you know, that's the thing, maybe you're not likely to read like an instruction manual, but you might listen to it if it's turned into an engaging bit of banter for you.
What should people who are listening to this keep in mind? I mean, you are real. I am real at this moment, at this particular moment.
What should people keep in mind if they're listening to audio and they aren't entirely sure that it's legitimate?
Yeah, I mean, I think for now, the main thing is considering media that you encounter,
like audio or video with audio that you see shared on social media, being aware that it might be manipulated, particular for something tied to partisan politics or tied to separating
you from your money. But down the road, we're looking at the potential, which is apparently feasible, of somebody hijacking a conversation live. So you and I are talking, and some security experts
are saying that there's a possibility of actually hijacking that audio live in real time. So,
I mean, one thing that people have talked about, even with respect to old-fashioned
so-called grandparent scams, is having a code word that you use with the people
close to you so that
you have a way of identifying, yes, it's really Matt. Yes, it's really Nora. Nora, thank you for
this. I think this is really unnerving, but it's fascinating at the same time. Thank you very
much. My pleasure. Thank you. Nora Young is with the CBC's Visual Investigations Unit.