The Dispatch Podcast - The A.I. Arms Race in Political Ads
Episode Date: August 21, 2023
Artificial intelligence has become a common weapon in political information warfare. The Morning Dispatch reporter Grayson Logue is joined by Darrell M. West, a senior fellow at the Brookings Institution and the co-editor-in-chief of TechTank, to explain the unique threat that A.I. poses.
- West's profile at Brookings
- TechTank
Transcript
Welcome to the dispatch podcast.
I'm Grayson Logue, a reporter with the Morning Dispatch Team.
Today we're talking with Darrell West, a senior fellow at the Brookings Institution's Center for Technology Innovation.
And we're discussing AI, deepfakes, and the 2024 election.
Hope you enjoy the podcast.
Well, Darrell, welcome to the show.
Thank you.
It's nice to be with you.
There's been a lot of discussion of the disruption that AI is going to have, or will have, on the news environment, on the information environment, trust in the news, and obviously the political ramifications of that for our elections and for the 2024 election in particular.
But before we get into talking about some of those themes, what exactly are the types of AI tools that we're talking about here that are going to have these disruptive
effects? I mean, the interesting development has been that we have democratized technology.
I mean, it used to be if you wanted to use AI tools, you needed an understanding of technical
aspects of AI. It required some pretty advanced skills. But with the new generative AI tools,
which are prompt-driven and template-driven, essentially we have brought these very powerful
AI technologies to the ordinary person. Like literally anybody can use these things. So if you want to
develop fake videos or fake audio tapes, it's very easy to do that. There are other types of
applications as well. But the key thing is they really are accessible to anyone around the world.
And so, for example, this is something like a ChatGPT-like model where if you wanted to mimic the voice, the verbal, tonal voice of a politician, you could plug that into a machine and it would spit back out a relatively believable version.
Absolutely.
That's exactly what can happen, either on the audio side or on the video side.
I mean, you could have an image of Trump or Biden that's going to look exactly like
them.
You can put them in a compromising situation, even though it didn't actually happen.
You could make an ad that sounds just like one of the candidates, but it's completely
manufactured. So one can easily imagine some very bad scenarios coming out of these types of
technologies. Yeah. And to provide listeners with some examples, we've seen some early forays,
I guess you could say, into this space already in the 2024 cycle. So I believe this was in June, the DeSantis campaign released a video going after Trump for his refusal to fire Dr. Fauci. And it included a mix of both real images of him interacting with Dr. Fauci, but also fake images of him giving him a hug or even, I think, a kiss on the cheek was one of the images as well.
And then the RNC, when Biden announced his re-election campaign, issued this entirely AI-generated video kind of depicting these quasi-apocalyptic scenarios should Biden be reelected.
Are these the types of, are those the types of ads that we should be most concerned about,
or are they kind of the tip of the iceberg?
Could things get worse than what we've seen so far, basically?
That is just the tip of the iceberg.
So, for example, that alleged hug between Trump and Dr. Fauci never actually happened.
This is a classic advertising principle of guilt by association.
You know, what candidates often do in their ads is try and associate a candidate with an unpopular figure.
And Dr. Fauci is radioactive in many conservative circles.
So by having an image of the two of them hugging, it attempted, on DeSantis's part, to undermine Trump. We've seen other examples of, for example, President Zelensky of Ukraine
surrendering. Obviously, that has not happened, but there are images that look very authentic.
I've seen images of Trump being dragged away by the police as part of the indictments that
have taken place. That obviously didn't happen. That's not how the police treated him.
So these are all examples of how the technology already is being used, but one can imagine even more nefarious examples exactly of the sort that we already have seen.
And there's a whole range of different permutations that could go from just a really
abusive use of a deep fake to something that's a little bit more questionable.
So again, I can't remember if this was the Ron DeSantis campaign or a super PAC, released an AI audio-generated ad where it used Trump's voice to say something that he didn't say, but that he wrote. It was an ad about what Trump was saying about the governor of Iowa, Kim Reynolds.
So it's interesting that they're across the whole spectrum of kind of political communication,
that this could be deployed, both at particularly egregious levels, but also in gray areas.
In terms of some of the things that we've seen so far, you mentioned the political,
the geopolitical consequences of a fake surrender from someone like Zelensky.
Have we seen really a lot of people be fooled by these yet? Has this actually registered an impact? Obviously, I think the potential is very much there for some nightmare scenarios
that we don't have to think too hard to come up with when we're thinking about deep fakes.
But how successful have these been at convincing or misleading large numbers of people so far?
I mean, that is probably the most important question about this.
Like, what is the actual impact of these fake videos? And unfortunately, we don't really know the
answer to that. And also, just from a research standpoint, it's hard to know what the answer is
because we're really asking, you know, how often do people believe fake videos?
We know that the answer is not zero, but we also don't know is it 10%, 30% or 50% of people who could be fooled.
What we do know from communications research is the nature of the message as well as the nature of the messenger matters a lot in terms of how believable something is.
And the problem is the messengers look very authentic.
Like you could have the candidate him or herself saying things that look nefarious that that individual did not say, but it looks completely authentic.
So that will be hard for voters to actually distinguish.
We also know that you could have candidates saying or doing things that would be very unpopular, but it's hard for people just in looking at an ad to distinguish the real from the fake.
So we don't know exactly what the impact is going to be, but we know it's likely to be non-zero,
that there will be people who are influenced by this.
I think everybody expects the 2024 election to be very close.
And so it doesn't take much of an impact on voters to actually alter the outcome.
You know, there are probably four or five states that are in play in the presidential election.
It may be 25,000, 50,000, or 100,000 voters in each of those states that will decide the difference between winning and losing. And so even if 95% of the people
are not affected by a fake video, the fact that the last 5% could be could end up being decisive.
I think that's an underappreciated point because the political effect of these types of
manipulated ads isn't necessarily reliant on an RNC ad showing San Francisco being shut down
under a Biden second term. That's obviously manipulated through AI. That's not going to necessarily convince your Democratic voter that, oh, I shouldn't vote for Biden
now. But these edge cases of either increasing enthusiasm or turnout among voters who were already
kind of sorted into their camp or more micro-targeted ads that are not even this broad-based
appeal type videos that you would see from national campaigns, whether they are more tailored
to specific communities that might not get as much attention as some of these national ads.
Could you talk a little bit about the way in which AI could really heighten the ability of campaigns to micro-target some of their content to those constituencies when, as you said, elections in different parts of the country are on a knife's edge for gaining the majority of the vote?
I think those are both important points.
We are likely to see AI engaging in micro-targeting,
so really focusing on very precise and small numbers of voters
with a message targeted on that particular demographic group.
Also, we are likely to see turnout effects.
We know that turnout is going to be absolutely vital
to both sides in the upcoming election.
If you can use an AI-generated ad
either to boost turnout or to discourage people from voting, that actually accomplishes the same thing as if you persuaded them to accept a
particular view. So the thing about AI is the micro-targeting is likely to be very easy in this
campaign. You know, it used to be to run advertising campaigns. It took a lot of money. You needed
access to PR firms who had the expertise to develop the ads. Now any ordinary person can develop
an ad. It could be fake or real. And so the number and types of fake messages are likely to really
proliferate in the coming year, just because it's easy and cheap. You can actually put together
an ad in a matter of minutes. And if you want to target a particular group, the technology
will allow you to do so with a very high degree of precision. Could you talk a little bit more
about, I think that that paints a great picture of specific strategies that campaigns could
use to target individual groups of voters or try and turn out more of their base.
But let's talk a little bit about the broader just information environment and the effects
that this could have on people's trusts and people's engagement levels with news
and with electoral outcomes.
Obviously, the famous line from Steve Bannon in the 2016 campaign was flooding the zone.
Do you think that this is going to kind of be that on steroids in terms of just the quantity of information out there is going to lead people to just kind of throw up their hands and say, I don't know what to think?
We will be flooding the zone.
I mean, people were concerned about misinformation in 2016 and 2020.
They haven't seen anything.
Like, 2024, it's going to be much more prevalent just because the tools are easier and cheaper to deploy and they're more accessible to a broader range of individuals. And I think that we're in a situation where tools can be used for good or ill,
and there certainly are positive ways in which this technology can be used. Because the technology
is cheap and easy to use, it means you don't have to be rich to affect the 2024 election. We've
essentially made these tools accessible to a wide range of people. And so individuals or groups that
previously might have been marginalized and not in a position to really contest the election,
they will be able to do so. That actually could be a good aspect for democracy. But the thing
that I think people worry about are the inaccurate or nefarious uses of the technology and putting
out information that is either misleading or completely erroneous. Like that is something that I
think everybody should be concerned about, regardless of your political perspective.
And to that point, that could also be a benefit not just from the voter perspective, but from the campaign perspective, for not-well-capitalized campaigns or newer candidates trying to get their operation off the ground. That could be a positive effect
that this technology could lower costs for them. Yes, absolutely. And of course, you know,
a lot of us are focused on races at the presidential level or a senatorial campaign or a
gubernatorial election. I mean, those races often are well-funded and, you know, there's a lot of
money to use these types of things. But people should also think about how these tools can be
used at the state and local level, like a treasurer's race in a particular state or a city council
race or a school board race, like races that typically don't have a lot of money involved with them.
But now these very powerful advertising techniques are going to be accessible to a wide
range of people. That could be a good aspect in the sense of allowing more people to contest
state and local elections, but it also opens up those races, which often don't get a lot of
media attention. So if there are bad things happening, there aren't reporters who are in
position to actually alert voters that something fake or inaccurate is taking place. So that actually
creates some additional risk for those state and local races. I think that's a really good
point. This was a fairly well-covered race, but the Chicago mayoral campaign that we had earlier this year between Paul Vallas and Brandon Johnson. I think it was a couple of days before the actual election day for those two candidates, there was an AI-manipulated audio recording of Paul Vallas at an event saying something along the lines of endorsing police brutality. It was essentially like, back in my day, you could beat somebody up and that was fine, and we need to back our cops. And it was a totally manipulated piece of audio, and it got some coverage.
But I think it's a great example of your point that when we don't have as much watchdog attention
at the local and state level, that opens up a window for this content to go kind of unchecked.
Yeah, there are a lot of cities that are basically news deserts now in the sense that local newspapers have folded or been dramatically downsized. And so, you know, there used to be reporters that would cover local races and, if something nefarious was taking place, would at least run stories and help educate voters that something bad was taking place. In today's world, a lot of
those gatekeepers are gone or they're completely mistrusted. And so we don't really have a situation
where somebody can alert voters when bad things are taking place. So that does create some
additional risks for those types of campaigns.
In terms of strategies or policies to
mitigate the harm of manipulated and fake AI content online, what's the playbook? Do we even have a
playbook yet from a policy perspective, from a platform perspective, if you're a social media company,
What is the toolbox that we have right now to try and mitigate the spread of these types of harmful content?
I mean, one minimum approach, which we've used in other areas, is just simple disclosure,
like requiring those who use AI to generate fake ads, fake videos, or fake audio tapes to disclose the fact
that they have used AI in that campaign messaging.
I mean, disclosure is an important part of television advertising; candidates are required to say who paid for that ad. We have disclosure in the campaign finance world as well.
People who make campaign contributions above $200 per candidate have to disclose that information.
And so disclosure is something that to me makes a lot of sense because it's already part of
the campaign environment, the idea that, you know, you can use these tactics and these
technologies, but you should disclose the fact so that people at least are aware of it.
The other thing that is starting to be discussed is basically should we think about remedies
in cases of overt mischief in campaign advertising.
Now, the difficulty here is judges long have ruled that campaign speech is protected speech.
And so candidates are really allowed to say anything they want on the campaign trail,
including statements that are completely false, like judges have said,
that election discourse is so important to democracy, like we're not going to regulate it at all.
So as of right now, there are no guardrails in place for the mischievous use of AI.
And so there are some proposals that are starting to circulate, trying to think about, you know, should we look at that?
But that's a very difficult area to legislate just because, you know, we can't really limit what candidates say on the campaign trail. So any legislation that does that is likely going to be ruled unconstitutional.
I think that's a good point talking about how the state of our current election rules and what
you can and cannot say as a candidate or a campaign, the FEC took some initial steps on Thursday
towards considering whether or not the current FEC rules against fraudulently misrepresenting a
candidate apply to AI-generated content. There was a petition last month that tried to get the commission to consider it. They shot it down, but there was kind of a re-upped version of that petition that they're now considering. A lot of the commissioners think that this is a very important
issue that they should explore the rulemaking, but that they don't necessarily have the current
authority to actually do this underneath their current rules. It sounds like you'd be in favor
of some type of FEC ruling or Congress giving FEC more authority to regulate what candidates are
able to do in terms of that fraudulent misrepresentation.
I mean, I was amazed by the Federal Election Commission vote.
It was actually a unanimous vote among the Democratic and Republican members of that commission.
The FEC hardly agrees on anything these days.
So the fact that there is unanimity indicates the paranoia that exists on both sides.
Like, Republicans are worried Democrats are going to use this stuff against them.
Democrats are worried Republicans will use it against them.
And in fact, they're both right.
Like, wherever one is on the political spectrum, there is a risk that your opponents will use these tools against you, and possibly in nefarious ways.
So I actually applaud the FEC for starting the process of trying to think about this.
I don't know exactly what the right solution is going to be,
but we definitely need to have conversations about this and try and work out some common-sense guardrails.
So at a minimum, people have to disclose the use of AI-generated content in their ads.
And perhaps if there are fraudulent depictions of candidates,
that there could be some recourse there as well.
Yeah, and just so listeners know,
even if the FEC decides to apply their current rules
that they have on the books to AI-generated misrepresentations of candidates,
their current rules don't apply to super PACs.
They don't apply to like aligned groups
that aren't officially tied to the campaigns.
And we've already seen that in this cycle, where videos that might even be made by campaign staff are funneled to friendly Facebook and Twitter pages and kind of released. And that's a gray area that, without expanded FEC authority, the current rules can't deal with.
Candidates often employ a good cop, bad cop routine where the candidate organization
basically does the less nefarious versions of this, but then the outside groups that are
technically separate from the candidate organization do the tougher stuff.
I mean, we've seen that with television ads and digital ads in the past.
That is likely to be part of the 2024 election.
I kind of worry less about what the candidates are doing,
but worry a lot more about the super PACs,
the independent groups, and the outside organizations
because historically those have been the ones
that have really pushed the envelope
and sometimes stepped over the line with ads that are not very factual or accurate.
The disclosure requirement approach strikes me
as particularly interesting,
just because with a technology as new and evolving as AI, having a heavy-handed regulatory regime, not only is it difficult, it just takes a while to get it sorted out. And we're months away from a very important presidential election, and this type of content is already heating up online. Is there movement among the tech developers, the AI developers themselves, in trying to develop essentially counter-AI technology to identify what's generated through AI, either through a digital watermark or something of that sort? Yes. There are a variety of things that, on a voluntary basis, some of the platforms
either have done or are likely to start doing. The digital watermarking is a way to increase
accountability in the sense of having ads or other digital products have a watermark
so at least people know who put that together and who is sponsoring it. So that's something that
could help. There also are situations where platforms are basically trying to use AI to combat
AI, meaning if there are AI generated ads that are like completely beyond the pale, using AI to
help spot those things, refer them to humans who can actually make a decision on whether
that is a completely inaccurate type of ad.
So there are a variety of voluntary measures
kind of short of overt regulation
that either have been used in the past
or may end up being used in the upcoming election.
A lot of generative AI, in particular,
the large language models that campaigns could use
to develop specific messages
or to even write speeches for their candidate,
those are only as good as the data that they're based on, essentially.
As these tools become more prevalent, could this raise the stakes for data protections for voters or people in general? As campaigns or other groups look to help these tools and systems learn better and become more realistic, those data sets become more valuable. I'm thinking of an AI Cambridge Analytica scenario sometime in the next year or so, recognizing that there's a lot of work to be done on data protections and privacy protections.
I mean, a lot of the AI generated content now is pretty generic in the sense that, you know,
these large language models are kind of surveying what's on the web and then taking the
least common denominator of that content.
So a lot of the stuff that I've seen kind of has a Wikipedia style approach to it in the
sense that it's an aggregation of a bunch of stuff out there, but it's pretty generic in
nature. But what we're starting to see is kind of the next generation models that are actually
much better, more specific, that are fine-tuned for particular areas. I mean, there are thousands
of different generative AI tools that are being released all the time now. And those are often
targeted on particular applications. And so therefore, they're going to be better. They're going to
be more persuasive. They're going to sound more authoritative. That's a great point. And I guess going
back to the trust issue, it's one potential dynamic that we could see happening in campaigns
is not just the overt misinformation or disinformation being put out there through AI tools,
but it reducing trust and information just in general so much so that candidates can start
denying or saying things just didn't happen that did. A lot of folks have been discussing
the Access Hollywood tape with President Trump in his 2016 campaign. If that incident occurred today, he could plausibly just say, oh, that's fake, that's AI-generated audio. So I think there's lots of different angles for this that can change candidate behavior
and campaign behavior that we're still just now getting a handle on. But thanks again for taking
the time to chat with us, Darrell. You've given us certainly a lot to think about. And we'll both be keeping an eye on this as 2024 gets closer.
Thank you very much.