The Daily Signal - ‘Fog of War’ Intensified by AI and Social Media, Tech Policy Expert Says
Episode Date: November 2, 2023

As the war between Israel and Hamas rages on, Jake Denton, research associate for The Heritage Foundation’s Tech Policy Center, breaks down what the role of artificial intelligence has been in the... conflict. (The Daily Signal is the news outlet of The Heritage Foundation.)

“I think the one that everyone jumps to is the [artificial intelligence]-generated content, deepfakes, things of that nature,” Denton says. “There’s a few stories of synthetic audio recordings of a general saying that an attack’s coming or those types of things that we’ve even seen in the Russia-Ukraine conflict,” Denton says. “They go around on Telegram or WhatsApp.”

“They’re taken as fact because they sound really convincing. You add some background noise, and suddenly it’s like a whole production in an audio file,” Denton adds. “And typically, what you’re picking up on in a video is like the lips don’t sync, and so you know the audio is fake. When it’s an audio file, how do you even tell?”

Denton also highlights social media platforms such as the Chinese-owned app TikTok.

“And so, what you’re seeing right now, especially on platforms like TikTok, is they’re just promoting things that are either fake or actual real synthetic media, like a true deepfake from the ground up and altered video, altered audio, all these things are getting promoted,” Denton says, adding: And kids, at no fault of their own, are consuming this stuff, and they’re taking it as fact. It’s what you do. You see a person with a million followers that has 12 million likes and you’re like, “This is a real video.” You don’t really expect these kids to fact-check.

Denton joins today’s episode of “The Daily Signal Podcast” to also discuss President Joe Biden’s executive order on artificial intelligence, what he views as social media companies’ roles in monitoring artificial intelligence and combating fake images and videos, and how people can equip themselves to identify fake images and videos online.
Transcript
Joining today's episode of The Daily Signal Podcast is Jake Denton, Research Associate in the Heritage Foundation's Tech Policy Center.
Jake, welcome back to the show.
Yeah, thanks for having me back.
Of course.
Thanks for joining us.
On Monday, President Joe Biden issued an executive order on artificial intelligence.
According to a White House fact sheet, the executive order establishes new standards for AI safety and security, protects Americans' privacy, advances equity and civil rights,
stands up for consumers and workers, promotes innovation and competition, advances American
leadership around the world, and more.
Jake, before we get too far into today's conversation talking about this executive order,
first and foremost, what is artificial intelligence?
It's a topic of debate that we're still having here in Washington when it comes to, you know,
formulating a legislative approach, a regulatory approach. But just generally speaking,
artificial intelligence is intelligence that simulates human intelligence. That's like the simplest,
dumbest version you could possibly have. But people are taking it in very different directions.
And every piece of legislation we're seeing has a different definition. I don't think there's a
unified view of what it should be here in Washington. But that's a point of contention all the way down
to the most simple aspect of this whole policy debate. We still haven't defined the tech in a
meaningful way. Thanks for that explanation. I wanted to now talk about this executive order.
If you could just break it down for us a little bit more.
I know it was rather long.
The document's huge, you know, depending on your font size, it can be upwards of 110 pages.
And there's a lot in there, as you kind of mentioned at the beginning here, everything from
national security to AI labeling.
So that kind of gets into the, you know, synthetic media, deep fakes, having a watermark
potentially.
And then also, as you mentioned, this kind of diversity, equity inclusion agenda.
I think across the board, the framework that's outlined in the fact sheet
is strong. It's something that most of us wouldn't really object to, but when you read the actual
order, it's not really what's presented in the fact sheet, which is kind of typical, right? I mean,
they butter it up, they make it sound nicer. And really what you find when you start to get into the
details is that it's really pushing that diversity, equity, inclusion agenda throughout all those
various pillars and puts a little bit less priority on the actual kind of intent of the pillar.
So like a national security pillar, for instance, includes red teaming, which is,
intended to find vulnerabilities and flaws within the AI model. But when you read the definition
of red teaming that is laid out in the executive order, it includes looking for discriminatory
or harmful outputs of an AI model. And so, you know, if your red team, which is supposed to basically
find vulnerabilities that could implicate national security is focused on finding, you know,
outputs that might hurt someone's feelings or that are discriminatory, is our national security
really safer? You know, what are we doing? And I think it all boils back down to something that we
encounter all the time of they just use these blanket terms that have really been left undefined
like diversity, equity, inclusion, harm. And it just gives them broad authority to label
anything they want with that term and take action. And so we still, you know, we do this in the social
media realm. Now we're doing it in AI. We still don't know what any of these words mean to them.
They might not really even know what these words mean to them. And, you know, who's to say how
it ends? But, you know, they're going to use this as a wedge in for these kind of policy debates.
And just speaking of using it, how are you expecting the administration to actually enforce this executive order?
Yeah, well, it's really the elephant in the room. You read through it and when you consider the disclosures they're asking for, there's still a lot of autonomy on the company side, which isn't necessarily a bad thing.
Like, they shouldn't be forced to give away everything.
But it's going to be really hard for the government side to say if they got what they should have in that disclosure.
And then furthermore, there is just kind of a competency shortage here in Washington.
There isn't an AI workforce occupying the halls of Congress or even these regulatory agencies.
And what this executive order does is it breaks off particular jurisdiction for each agency or regulatory body so that, you know, consumer issues are under a consumer regulatory agency like the Commerce Department and, you know, maybe nuclear related things are under Department of Energy, which, you know, probably a good thing.
Separate expertise.
but those agencies don't have this robust AI team that can understand really what's going on.
And so there's going to be a huge hiring push.
The Biden White House rolled out AI.gov, which is like a jobs portal.
But, you know, think about it as the AI developer: you're in demand.
You currently work at a Silicon Valley company making seven figures.
Are you really going to just throw that away to come move to crime-ridden Washington, D.C.,
and take a government job?
Like, probably not.
So who's filling these roles to interpret the disclosures?
Like enforceability just kind of starts to crumble when you consider you don't have the talent there.
They used overly ambiguous words, which means even if you do enforce, it's going to be very selective.
It's like across the board, it's really hard to see what this looks like in practice.
And I think that's what's really like the big struggle right now for all these people is, you know, everyone's asking,
what does it look like in 10 years because of this executive order?
I don't think anyone knows because there isn't a really clear path presented.
And they're calling this like the foundation of further AI policy.
Well, the foundation is seemingly non-existent.
Like we just kind of threw stuff out there.
So hopefully we actually get a real foundation later on.
But it's really tough to say what AI policy is going to look like in 10 years.
And just speaking of policy, obviously we saw the president and the White House take this step on Monday.
What would you like to see from Congress in terms of legislation?
Well, I think the core focus right now is all on national security, and, you know, rightfully so.
These systems are going to pretty quickly
integrate into very sensitive areas, critical systems, that we can't afford to, you know,
just be vulnerable to, you know, adversaries or bad actors here in the States. And so something like
explainability legislation, which I believe we talked about before on the show, is critical. And yet
it's really like non-existent in any of these proposed bills. No one's really talking about it on the
hill. We go over there. People still aren't getting it. And it's pretty simple. All it is is we want to be
able to audit the decision-making process of this model. Everyone would think that if you ask it a
question, you'd be able to figure out why it drew that conclusion, but not even the engineers
in most of these companies can tell you why the model came up with that answer.
And so we want to essentially lift that black box.
That's kind of what that phenomenon's called.
So we can go through and figure out what data that they scraped across the internet contributed
to that answer.
Maybe it was a fake news story.
Maybe it was a statistically flawed and disproven academic study.
You think about all the different things on the internet that are disproven, and there's
no way to tell if that's not the basis, like the foundation, for a given decision. So for the
critical systems, like, you know, the ones targeted in this executive order, that we're really worried
about causing real-world harm, you would think that there would be a way to audit them.
There is nothing in that order for it. And so it's almost like we're just kind of skipping
steps here, trying to check every box, please the public, but we're missing kind of the
unsexy computer science 101 type stuff that's going to make this, you know, either
work or fail. And so we need to almost just go back when it comes to Congress and start from
the ground up, which is explainability. Now, of course, we are having this conversation against the
backdrop of the ongoing war between Israel and Hamas. What has been the role of AI in this war that
you've seen? Yeah, I think the one that everyone kind of jumps to is the AI generated content,
deep fakes, things of that nature. There's a few stories of synthetic audio recordings of like a general
saying that an attack's coming or those types of things that we've even seen in the Russia-Ukraine
conflict. They go around on Telegram or WhatsApp. They're taken as fact because they sound
really convincing. You know, you add some background noise, and suddenly it's like a whole production
in an audio file. Typically, what you're picking up on in a video is like the lips don't sync.
And so you know the audio is fake. When it's an audio file, it's like, how do you even tell?
And so you've seen a little bit of that, the synthetic audio. It's been pretty well documented.
And then on social media, you're starting to see, you know, synthetic images or even like
Arma 3, the video game, clips being taken as like real-world military footage, and it's
shifting the news cycle and where people are paying attention.
A lot of that has to do with the algorithmic recommendations, which we've had for a while,
but at its very foundation it's still AI recommending you this content.
You know, there's an element of AI in that, of, you know,
what it's recommending to you, what it's prioritizing, what news stories it's deciding you're
going to see on your feed.
What you're seeing right now, especially on platforms like TikTok, is they're just
promoting things that are either fake or actual real synthetic media, you know, like a true deep
fake from the ground up, an altered video, altered audio, all these things are getting promoted.
And, you know, kids, at no fault of their own, are consuming this stuff and they're taking
it as fact.
Just, it's what you do.
You see a person with a million followers that has 12 million likes and you're like, this is a real video.
You don't really expect these kids to fact check.
And even then, how are you going to fact check a video of a conflict in the desert, right?
Like, who do you know that's going to tell you if that building's real?
And so everyone's running around with all sorts of ideas in their head.
That scene might not have ever happened.
That building might not have existed.
And it's all from the recommended content on the feed.
So the fog of war is being essentially intensified through social media and these AI systems.
It's scary.
It certainly is.
It's really scary.
And just speaking of social media companies, what do you view as their role in monitoring
artificial intelligence and combating these fake images, these fake videos?
Yeah, it's tough because the generation side is rapidly outpacing the detection side.
And so it's like you want a platform like Twitter and you expect them to detect everything.
But it's just simply not possible.
The tech isn't there yet.
I think we all saw this with the Ben Shapiro image of the dead baby.
And then there ensued this crazy debate of, like, was it real, was it fake?
And still there's honestly like not a great answer on if it was real or if it was fake.
each side backtracked a little bit.
An AI image checker said that it was fake, and then it said it was real.
There's a huge error variable.
So it just kind of presents this point of like even if we, you know, require Twitter or
Facebook or whoever to verify the authenticity of the image, they're not going to be able to do it
100% of the time.
And so I think the mechanisms like Community Notes on X are probably the way forward.
It's like a band-aid until we can figure out how to have a reliable detection system.
But Community Notes are flawed.
I mean, anyone can make a Community Notes account and then just troll people.
And you saw it with that Ben Shapiro case.
I was on Twitter myself, or X, rather, I guess.
First post I see of the photo says this has been verified to be true for these three reasons.
And then literally the post right below it was the same image.
This has been verified to be false for these three reasons.
You're saying like, okay, like what is real?
You know, this is all just a smoke and mirrors game.
And so I think it's going to get worse, unfortunately.
I think there's going to have to be a straw that breaks the camel's back for like a real overhaul of how media is handled on these platforms.
I think we're inching closer with stuff like TikTok and the way that they're promoting content.
But we're a long ways from like a clean information environment again, the kind we may have seen pre-generative-AI boom or even pre-TikTok.
It's probably further away, or maybe even more out of reach, than people realize.
And where do you even draw the line from your perspective between actual news being censored, which might be harder to verify, versus just letting fake news out on the internet?
It's a real struggle because, I mean, particularly within the generative AI lens, you know, deep fake media is getting a lot harder to detect.
There basically at some point has to be a human who verifies the check of the automated system that, you know, flags it.
And then it's that person's choice.
And so there is a world in which we give increased authority to these platforms to flag, you know, AI generated media.
And it results in real stories being censored on the seemingly innocent basis that it was possibly like AI generated.
But we're just trusting a person again, this fact checker who, you know, maybe worked the 2020 election and was flagging stuff for a whole other reason.
And so really, you know, you can't untangle the political kind of will of the Silicon Valley worker.
they're obviously going to exert their authority.
And so you want to figure out a way forward here where, you know, independent journalists still are allowed and platformed.
And just because you don't have a mainstream backer that's verifying this is true doesn't mean the story's AI generated, right?
But there's like a very real world in which every independent journalist just gets de-boosted or shadow-banned because they don't have this verification enterprise behind them.
And, you know, these legacy media outlets are getting sucked up into this stuff just as much as
anyone else, you know, retweeting or posting fake media.
But they're just kind of going to skate free.
So it's really tough.
I don't really see a clear path forward.
I think we're just going to have to play around with the correct level of automated detection,
human review.
There's got to be a way speedier appeals process, right?
If your thing was wrongfully flagged at a critical time, you need to be able to get that up as quick as
possible. And, you know, maybe the platform's not incentivized to hear that appeal out. I think about,
like, critical election moments, like the Iowa caucus, right? Let's say, like, a candidate had a stroke
and someone takes the photo and posted it. It's flagged as fake. That story doesn't get out.
30 minutes before voting starts, it's confirmed as true, but every video has been flagged as fake.
You know, maybe you have a strike on your account now for putting the image up, right?
And then someone screenshots the initial takedown.
It's like, this was verified as false.
And suddenly that voter at the ballot box is like, is the candidate I voted for even alive?
Like you don't have any idea.
And we're just like months away from those scenarios.
Not a single barrier has been erected to prevent that from happening.
And if anything, just fuel has been added to the fire.
People are like getting reps in with the Ukraine stuff, with the Israel stuff.
And then just recreationally playing around.
So we're refining our ability, on like an independent level, to cause chaos on the platforms.
And these sites are doing nothing to, you know, prevent it.
So I think, really, there's going to be a big boom.
Something crazy is going to happen and it will require us to think things through a little bit deeper.
Well, before we go, I wanted to ask you how people can equip themselves to identify fake images and videos online.
Yeah, it's tough.
I would say the best thing, and it sounds crazy, like a tinfoil-hat kind of ask, is to just assume that everything is fake. It's the best way to be safe.
I mean, the reality is people you trust are going to retweet and amplify things that are fake.
They might very well know it's fake and have viewed it as satire, but you're just not expecting it, so you think it's real.
And that just forces you to be this, like, independent fact-checker.
Well, we're going to pretty quickly hit a point where the fake media outweighs, or
is more prominent than, real media.
And so it's better to just change your mentality now and be skeptical of everything.
Even the highest-production video, even if it's coming from a legacy news outlet, just question it.
Think back to the best deepfake you've ever seen and compare, and be like,
that could actually be a deepfake, and just kind of be skeptical.
I would say the best way, if you're really trying to figure it out, is you look at, you know,
things like the eyes, the fingers, the mouth.
Like, these are areas where the details don't
really come through very well in AI-generated images, but skilled people can fix it.
And so it's like you just hit this point where it's better to just kind of have a high guard,
make a way higher barrier for you to, you know, trust an image.
You know, we grew up just thinking everything we saw is real.
The camera is an accurate representation of reality.
Today that's not really true.
And so it's going to be really hard to deprogram that and build up this new defense system.
Start now because it's going to get a lot worse.
Well, Jake, thanks so much for joining us today.
Any final thoughts before we go?
You know, just keep an eye on this stuff.
The conflict takes your attention away from Silicon Valley, but Silicon Valley keeps chugging
along.
And so every day there's a new leap in the AI world.
We're kind of on that part of the curve where we're making giant leaps every day.
Not every single one gets attention, but they're just about as important.
Try and keep up with it because you'll get taken advantage of if you lose track
of what's going on. Well, Jake Denton, thanks so much for joining us. For sure. Thanks for having me.
