The Journal. - The Deepfake Election Has Arrived
Episode Date: February 26, 2024
Days before the presidential primary in New Hampshire, thousands of people received a call from someone who sounded like President Joe Biden, telling them not to vote. The call was a deepfake, and as WSJ's Bob McMillan reports, the rapid advancement of AI technology will likely have profound implications for elections around the world.
Further Reading:
- New Era of AI Deepfakes Complicates 2024 Elections
Further Listening:
- The Company Behind ChatGPT
- The Hidden Workforce That Helped Filter Violence and Abuse Out of ChatGPT
- OpenAI's Weekend of Absolute Chaos
Transcript
Just two days before the New Hampshire primary last month,
thousands of voters around the state got a phone call.
What a bunch of malarkey.
We know the value of voting Democratic when our votes count.
It sounded like President Joe Biden.
But he had an unusual message.
He was telling people not to vote.
It's important that you save your vote for the November election.
We'll need your help in electing Democrats up and down the ticket.
Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.
Your vote makes a difference in November, not this Tuesday.
If you would like to be removed from future calls, please press 2 now.
But it wasn't Joe Biden.
It was a deepfake.
That's our colleague Bob McMillan.
AI has come a long way since the last presidential election,
and everybody is wondering if all these amazing advances are going to be completely misused during 2024.
This was one of the first examples of a deepfake that we've seen in the 2024 presidential election cycle.
And if I was going to be a betting person on this one, I'd say it's probably not the last.
Welcome to The Journal, our show about money, business, and power. I'm Ryan Knutson.
It's Monday, February 26th. Coming up on the show, how a new era of AI deepfakes
could complicate the 2024 elections.
Is there a sense of whether or not these calls worked?
Like, did they actually influence people to not go out and vote?
I, you know, I don't know if they worked or not.
I mean, I don't know anyone who really listens to robocalls, like, no matter who it is.
But, you know, it doesn't, sometimes it doesn't take much to move the needle.
It was a small number.
The estimate is between 5,000 and 25,000 calls went out, most of them to people in New Hampshire.
It's not a totally insignificant number, but it almost, to me, felt like it was a test, right?
Like, does this technology work?
Can we get away with it?
You know, what will happen if we try it out here? So this robocall goes out, and how did anybody figure out that it wasn't real,
that it was actually a fake? I think people knew very quickly, just as soon as they heard the call,
like Biden isn't going to call people and tell them not to vote. Like, that's just not something
that happens. So I think it pretty quickly went up the chain in the Democratic Party, and they were like, this is not Joe Biden.
Who was behind it? Where did it come from?
Yeah, at the end of the day, the New Hampshire Attorney General's Office investigated.
They traced it to a company in Texas that is known for doing robocalls,
that has a couple of actions against it for robocalls in a few states.
The company is called Life Corporation. It didn't respond to requests for comment.
The New Hampshire Attorney General's office was on this. Like, it was within a few days,
they said that they were investigating this as an act of voter suppression.
Here's the New Hampshire Attorney General in a press conference.
First, at the New Hampshire Department of Justice,
we have issued a cease and desist letter to Life Corporation that orders the company to immediately cease violating New Hampshire election laws. After the fake Biden call,
the Federal Communications Commission also issued a new rule that banned unsolicited
robocalls using AI-generated voices. But Life Corporation didn't create the fake phone call itself.
According to a third-party analysis of the audio,
it was created with technology from a company called Eleven Labs.
Eleven Labs is a relatively new company.
It was founded in 2022.
On its website, the company has a library of artificial voices,
and users can easily clone their own voice,
or anyone else's, by uploading
audio of them speaking. So you upload a bunch of samples, you give your consent for it to be used,
and then at that point, you can just type something and it will sound like you said it.
I decided to clone myself. And it was really easy. All I had to do was sign up for a plan,
which normally costs five bucks a month,
but right now there's a promo price of $1.
I recorded two minutes of myself talking,
and voila, there was a fake version of me.
So how believable is your clone?
Well, you tell me.
I'm going to play you two clips.
One clip is going to be me talking,
and the other clip is going to be my clone talking. And I want you to tell me which one you think is real. So this is the first one.
I used Eleven Labs to clone my voice by uploading just two minutes of myself reading from George Orwell's 1984. Do you think this is the real me or a clone?
Yeah. Okay. Okay. And then
this is another recording that gives you an option for comparison.
I used Eleven Labs to clone my voice by uploading just two minutes of myself reading from George Orwell's 1984. Do you think this is the real me or a clone?
I think the second one was really you, and I think the first one was the clone.
That's right. The first one was the clone. There's sort of a warmth that was missing from it.
But it was pretty good, right?
Like, I know, I've just been talking to you for, what, 25 minutes.
Like, if you played John Wayne's voice, you know,
or somebody who I don't talk to regularly,
I wouldn't be able to tell.
I'm going to play you one more.
I want you to tell me if this is real or a clone.
Bob McMillan is the greatest reporter of all time.
That's a clone.
Yes, that is a clone.
But it is the truth, right?
See, there's still room for common sense with this, right?
When I created this clone of my voice,
all I had to do was check a box that said,
I hereby confirm that I have all necessary rights
or consents to upload and clone these voice samples
and that I will not use the platform-generated content
for any illegal, fraudulent, or harmful purpose.
I reaffirm my obligation to abide by Eleven Labs'
Terms of Service
and Privacy Policy. That's it. And then I can upload any voice I want and create a clone out
of it in seconds. Yeah. And here's a newsflash. Criminals often lie. So Eleven Labs, if you look at
their process, it raises some questions, right? Like, clearly Joe Biden is not going to just like open a free account there and upload his voicemail.
You know, that's just not going to happen.
So it's really unclear what kind of controls on abuse of the AI technologies exist, right?
Because these are often technologies that are run by startups.
They're trying to encourage everyone to use them.
They're trying to get maximum exposure
for their technologies.
And when they're being misused,
they have to put guardrails on that.
They have to slow down the adoption.
They have to check things.
Eleven Labs declined to comment
on the fake Joe Biden audio.
But the company said, quote,
We are dedicated to preventing the misuse
of audio AI tools
and take any incidents of misuse extremely seriously.
The company also says it built a safeguard
that's designed to detect and prevent people from creating voices
that mimic politicians who are running for president in the U.S.
or prime minister in the U.K.
What is the point of having a technology like this? Oh, well, I think it's fun,
right? Yeah, I don't know what Eleven Labs' business model is, but I would imagine if,
say, you had a company and you wanted to create an answering machine message
that was somewhat personalized or adjusted for the date or something like that,
and you didn't want to have to record a bunch of messages.
You just wanted to delegate that.
The technology does allow you to do some pretty interesting things.
Like, it can make it sound like I'm reading entire articles or books
without ever actually opening a page.
And it can also translate my voice into other languages,
like Mandarin.
[Speaking Mandarin] Hello everyone, my friends in China, thank you for listening to the podcast.
Or Hindi.
[The clone speaks in Hindi.]
And if you speak either of these languages,
please email us and let us know how the clone did.
The technology has potential upsides, but what can be done to stop it from being misused?
That's after the break.
It's not just fake audio that could impact an election.
There's also fake images and video, too.
There's the audio deepfakes, then there's the video stuff.
But the state of moving video generated by AI is constantly being advanced.
We just saw this product called Sora released recently that creates cinematic movies based on just a few word prompts.
Sora is made by OpenAI, the same company that makes ChatGPT.
Using only a few sentences, its technology can generate highly realistic video clips.
This is the biggest breakthrough that I think AI has ever had. That's not a real cat, bro.
That's a fake cat. This is all AI. Wow. Look at that. We're done.
I mean, look at this wide shot of a Western town. It looks better than CGI. And some shots, especially ones without people, like this art gallery,
I wouldn't be able to tell that it was AI generated.
So the technology is getting really good.
OpenAI says that Sora, which is not yet available to the general public,
will reject prompts that ask the software to mimic a celebrity.
So what is this all going to mean for the 2024 election? I think it's going to mean there's more garbage than ever before,
right? And it's more believable garbage than we've ever seen. And not just that there's more garbage
out there, but that also politicians would now be able to say that a real video or a real audio clip was actually a fake one generated by AI.
Yeah, absolutely.
We've seen the beginnings of that with the Hamas attacks in October.
People were claiming that legitimately shot video of those attacks was fake.
At the end of last year, former President Donald Trump said on social media that a recent ad used AI to make him look bad.
You keep confusing things.
And we did with Obama. We won an election.
Getting the facts wrong.
We just left pleasure.
Paradise.
But the clips in the ad were actually real.
Trump has been the victim of AI-generated content as well.
An ad supporting one of his opponents showed Trump attacking Iowa's governor, but his voice turned out to be generated by AI.
I opened up the governor position for Kim Reynolds, and when she fell behind,
I endorsed her, did big rallies, and she won.
It's not just politics where this technology is causing problems. There are also scams.
I've heard from the people who investigate cybercrime that this type of technology is extremely popular with scammers. Their game is
going up. People literally will use text-to-voice to try and pretend to be people they're not.
And call up companies and say, I'm the CEO, I want you to transfer a bunch
of money. They might create a voicemail that they leave for an employee that sounds like it's the
CEO asking for something to happen. There's a case in Hong Kong just a couple of weeks ago where
an employee transferred more than $20 million in response to a voice deepfake.
Are there things that AI software companies can do to stop the misuse of their products?
The AI companies, right after our story ran,
they announced that they were going to do some stuff
to prevent AI from being misused in the election.
But it's pretty vague about what they're actually going to be doing.
And it's a little unclear what they really can do.
I mean, the AI companies like Eleven Labs put a watermark in their audio
so we were able to identify it definitively as a deepfake.
An audio watermark is like a digital signature
that can make it clear which company generated the audio file.
That's how that third party was able to confirm that Eleven Labs was the company that created the fake Biden audio.
Do all AI companies include watermarks and things like that so that their generated stuff is detectable?
No, they don't all do that.
And it's very easy to strip the watermarks.
So I think some companies are working on robust systems for identifying the content that's created with their tools. But it just seems like we haven't seen anybody solve that problem yet.
Eleven Labs says it only allows paying customers with verifiable bank details to use its voice cloning features,
which makes it easier to go after people who are caught misusing it.
The company also says that for its more advanced cloning service,
which can make the clone sound even better than the version I made,
it requires users to prove they have permission by recording a voice verification.
What about the social media companies like Facebook or X or TikTok?
Have those companies said anything about
what they'll do to try to prevent the spread of this stuff?
In 2020, they all had pretty robust programs
to prevent the spread of disinformation.
And they have systems for identifying them.
But the thing that's happened since then is
Meta has kind of disbanded the team that was really focused on this.
In the fall of 2022, Meta dissolved a team that was in part
focused on addressing disinformation and safety issues related to AI.
At the time, a Meta spokesman
said the company remained committed to the team's goals and that most of its former members would
continue similar work elsewhere in the company. But then you look at Twitter. I mean, they have
changed a lot since 2020. So ever since Elon Musk has taken over, he's said, you know, this is going to be a free speech platform.
They've allowed a lot of speech that was not allowed previously.
The U.S. government is also wrestling with how much it should police misinformation online.
That's become a very sort of a lightning rod of a political discussion here in the United States.
Is disinformation
something that should be allowed? Who gets to decide what is true and what is not true?
Should the government be doing that? We have congressional inquiries into what the tech
companies have been doing with the feds. We have civil lawsuits against disinformation researchers.
It's a whole different environment than it was in 2020.
And right now, it seems certain that the efforts to counter disinformation
in 2024 are not going to be as robust as they were in 2020.
I mean, you have this like perfect storm of advances in AI, of a politicization of disinformation,
and a federal government that's not going to be as involved as they were.
And then you also have the nations that have traditionally engaged
in sort of nation-state-backed disinformation campaigns
are becoming more aggressive.
You know, people that study this stuff say that Iran and China are kind of emerging.
And then also you have Russia basically involved in this conflict in Ukraine.
So they have like extreme motivation to push the United States to not support Ukraine.
It just feels like a moment where there's a lot of incentive for the use of
disinformation from nation states. Is this like the end of reality, almost?
Oh, this is the, yeah, it's the postmodern election, right? We've finally gotten a
complete postmodernism, choose-your-own-adventure reality.
That's all for today.
Monday, February 26th.
I'm going to let my clone take it from here.
The Journal is a co-production of Spotify and The Wall Street Journal.
Additional reporting in this episode by Alexa Corse and Dustin Volz.
Thanks for listening. See you tomorrow.