Life Kit - How to spot AI-generated and other fake content
Episode Date: June 13, 2023
It's easy to be fooled by AI-generated images and other content. We talk about how to identify them, how media literacy can help, plus how to use these tools responsibly.
Learn more about sponsor message choices: podcastchoices.com/adchoices
NPR Privacy Policy
Transcript
You're listening to Life Kit from NPR.
Hey, everybody.
It's Marielle.
There's this image that went viral earlier this year of the pope looking, honestly, pretty
stylish.
He was wearing this white, puffy designer coat and rocking an enormous crucifix necklace
outside of it.
And he looks like he's just, you know, out for his
morning coffee run. So I know Pope Francis is supposed to be like the relatable Pope,
but I regret to inform you that this image is fake. It was created using AI, artificial
intelligence software. Shannon Bond is a correspondent at NPR. She covers misinformation.
And she says, if you look closely, you can see the clues.
You know, one of the sort of classic tells that people talk a lot about is that these
image generators can really struggle with creating realistic hands. Hands are, for some
reason, particularly tricky. Also things like teeth and accessories like glasses and jewelry.
And so, you know, that was an example we saw with the image
of the Pope wearing the coat. He seemed to be holding a coffee cup in his hand, but his fingers
weren't actually like holding on to the coffee cup. And if you looked at the side of his eyeglasses,
it sort of disappeared into his face like there wasn't a rim.
The guy who made this image, by the way, is quoted in BuzzFeed saying,
I just thought it was funny to see the Pope in a
funny jacket. And it may seem like, okay, what's the big deal? This picture wasn't hurting anybody.
But it does show that AI tools can make fake images that are convincing enough at first glance.
And obviously that can be abused and it can be a way to spread lies and misinformation.
On this episode of Life Kit, Shannon and I are going to talk about
what you can do to spot AI-generated images,
audio, and video.
We'll also talk about how to use these tools responsibly
and to talk to the kids and teens in your life about them. So one way AI can be abused is politically,
to make people think a politician or a government official said or did something they didn't.
Recently, Shannon reported on a video posted by the presidential campaign
for Florida Governor Ron DeSantis.
It included images of former President Trump
hugging Anthony Fauci.
Those images were apparently fake, AI-generated.
This general topic came up at a recent Senate hearing
with the company that makes the software programs
ChatGPT and DALL-E.
Senator Richard Blumenthal from Connecticut,
he played a
synthetic version of his voice. And then when you heard it compared right to his real voice,
you know, that sounded similar, but maybe not exact. We can take a listen.
We have seen how algorithmic biases can perpetuate discrimination and prejudice,
and how the lack of transparency can undermine public trust.
This is not the future we want.
If you were listening from home, you might have thought that voice was mine and the words
from me. But in fact, that voice was not mine. The words were not mine.
Yeah, wow. I could not tell the difference between those two. It doesn't seem obvious hearing them back to back.
Yeah, I mean, I think it sounded like he might have just been playing a recording of himself, maybe, right? I mean, it can be pretty uncanny. And if you're missing any sort of other context clues, it is really hard to tell this apart. And the people that I've been talking to about
these questions about detecting AI and how to spot fakes, there's a real issue here,
which is there's sort of an arms race, right? The software is rapidly improving. So newer versions
of some of these image generators are actually much better at making hands. And as we've heard,
the voice technology is getting increasingly accurate and, you know, videos are also getting better and better. And so there's a real danger
in relying too heavily on these sort of these tells that might be disappearing.
Are there any tools that you can use to help you figure out when something is AI? Like you could
paste the photo, for instance, into some kind of
software and it'll tell you? Yes, there is detector software. And it can be accurate to a certain
degree. But first of all, it won't catch everything. And again, the software is sort of constantly
improving. And in some ways, it'll improve because of the things that can get caught by detectors,
right? And so again, then the software gets better and it can't be caught by the detector,
so the detector needs to get better. And it ends up, you know, I think that can be really difficult for people to sort of keep up. And I also think in some cases, these sort of tools
can be difficult for people who aren't really versed in digital forensics to understand and use.
And so, you know, there's this issue where we don't want to encourage people to be too skeptical in a way as well about this content because that itself can backfire.
You know, if you're telling everybody that they need to be sort of doing pixel by pixel analysis of every photo they see.
I mean, first of all, I just don't think that's realistic.
I mean, that's not the way we interact with the Internet, right?
That's not like what you're thinking when you're scrolling through Twitter or Instagram.
There's also this idea that it could give bad actors the opportunity to discredit real images and video as fake, right?
If there's this idea that you can say anything is fake, then that's something that can actually be weaponized against us.
And so we can't rely too much on these technological
interventions. Yeah. I wonder if you can't rely on looking at tells or running photos or videos
through some kind of detection tool, is there anything else you could do? I mean, it seems like
context is important, right? Yeah. A lot of this comes back to sort of some real basics of media literacy, right?
Or one of the researchers I spoke with, Irene Solaiman from the AI company Hugging Face,
she calls this people literacy.
And it's the idea that, right, you don't need like sophisticated technological analysis.
You need to do things like think about context.
You need to slow down.
And this can sound like pretty dry, but it's actually really
important advice, not just for thinking about AI generated content, but pretty much, you know,
anything that you're encountering on the internet. I mean, I cover dis and misinformation. You know,
these are the kinds of tools we talk a lot about in just terms of helping people like navigate
what they're seeing. I mean, think about when you see something that is really appealing to you online,
like what is it doing? It's probably triggering your emotions. Right. How do you actually verify
if something in a photo is true? If you're looking at it, let's say it triggers an emotional reaction
and then you want to see, did this really happen? So there's a method that's been developed by a
research scientist named Mike Caulfield that's called the SIFT method.
There's a pretty good framework for this.
And SIFT stands for stop, investigate the source, find better coverage, and trace the original context.
And so, like, one really basic thing is, you know, say it's something about a public figure like the pope.
There's probably not going to be just one photo of this, right?
There's probably going to be additional photos from multiple sources, you know, if this is a public event, you know, so that's just kind of
one of the real basics. There is technology like Google reverse image search, where you can click
on a photo and basically Google will look and see if it's appeared elsewhere on the internet. And
Google has just recently announced some improvements to this reverse image search to actually make it
a lot easier to see, you know, has a photo appeared online before, you know, in what context. And, you know, that can be,
like, this is, again, not just about AI-generated photos. That can actually be really helpful about
kind of any images that are shared with misleading or false context. You might have seen, you know,
during hurricanes, sometimes people will share, like, there's this viral image of a shark swimming on a flooded highway.
Yeah.
Those things get shared over and over again.
And that's kind of a good way to say, like, hey, wait, this actually is not, like, this is a really old photo.
Right.
It might be a real photo that wasn't altered, but it says it's from a certain place and it actually is from a different place, different time.
Exactly.
So that can be a really important step to do. We were just talking
about audio and just like how accurate that Blumenthal audio sounded. That is a real challenge.
And we've seen already scammers have been using these kind of spoofed AI generated audio to call
up people and impersonate their relatives or a friend in distress asking for money. You know,
this has raised enough alarm that the Federal Trade Commission actually has
put out a warning about this.
And, like, that's really basic advice there.
Like, if you get that kind of call, like, don't immediately, you know, open up your
– grab your credit card and give your credit card number or, you know, start sending something
on Venmo.
Call them back at a known number that you know is theirs to confirm, like, did you really
call me, or, you know, is this really you?
There are some fakes that don't seem malicious on the face of them, and so they've tricked me before. Like, sometimes if I see something that's accusing a public figure of doing a certain thing, I'm usually going to give that more scrutiny because, like you say, it triggers a certain emotion in me, right? But I don't usually do that when I see videos that trigger
like a positive emotion. I saw this thing that I sent to a friend the other day. It was like
just a woman opening up, I guess, clams or oysters, like giant clams or oysters, in some part of the world.
And she found gold pearls inside.
But I sent it to him.
I just thought it was cool.
And then he was like, that's definitely a fake.
And I was super embarrassed.
But also I was like, why?
Why would somebody fake this?
I mean, I think we saw, like, so remember back to last summer when DALL-E, the OpenAI image generator, first launched, and then we quickly had a couple others. There's one called Midjourney. There's one called Stable Diffusion. And I think people love playing around with these. I mean, that's how we got this Pope photo. That was not meant to be some sort of, like, misleading, like, I don't know, there was no sort of, you know, deep intention behind showing the Pope in the Balenciaga puffy coat. You know, the guy who made it apparently was just like,
he was just playing around. And he thought it would be funny. Yeah. And that's what's really
cool about this technology. Like, you know, you can have a lot of fun, you can use it for satire,
you can use it with your friends. I think the problem is that, you know, we haven't really
grappled with is this stuff gets decontextualized.
Right. So it's one thing if you know you're going to create that photo and post it and say, hey, I made this.
Isn't it so cool what Midjourney can do right now?
It can make this amazing image.
But then that might get shared elsewhere without the context, without any kind of disclaimer that it was created by AI.
And what do we do with that? And whose responsibility is it?
This is something that has not been at all settled.
We're just in this real kind of wild west right now
where there aren't any norms around this.
And actually, we're sort of,
we're in the process of developing norms around this.
It will not be the last time, you know,
you fall for what looks like a really cool video, right?
Yeah, and I imagine it could also be a way for an account, say, on Instagram
to get a lot of followers if they have like cool nature content, even if a lot of it's fake,
you know? Right. Yeah. There's good ways to monetize this. You know, it's causing huge
disturbances in the art world. The way this technology, especially the image technology,
works is, you know, it's trained on actual images out in the world.
And so you have artists who are very concerned that basically, you know, DALL-E can create a painting or a photograph in the style of a known artist.
And what does that mean, you know, for their livelihoods?
There's all sorts of ways in which people can be using these things, not again, not maliciously, but in ways that are really going to disrupt the way we think about, you know, creating and interacting with content
online. Right. What about the language-based AI like ChatGPT, which seems to be everywhere
these days? Those chatbots can basically, you can ask it a question and then it'll spit out an answer or it'll maybe do computer programming for you
or write you a poem or whatever you ask it to do? Yeah, I mean, they're incredibly,
these systems are incredibly good at producing all different kinds of text. They can sound like
a person wrote it. It can sound, they can imitate Shakespeare. As you said, they can do computer
programming. ChatGPT recently released an iPhone app. And I was using it the other night. You know,
I have a seven-year-old. He has 10 million questions about everything in the world.
And we were using it to like ask questions and see what it came up with.
And it is pretty striking how much it sounds very persuasive, right? You can ask it a question and
it'll give you an answer that does sound plausible.
It's really important to understand just because they sound realistic, that doesn't mean that they're true or accurate.
Here's how Gary Marcus, a cognitive scientist and professor emeritus at New York University, put it.
They don't have models of the world.
They don't reason.
They don't know what facts are.
They're not built for that.
They're basically autocomplete on steroids. They predict what words would be plausible
in some context. And plausible is not the same as true.
So they make mistakes.
Yeah, they make mistakes. Yeah. And they make things up. I think there's a couple of things
to think about. These chatbots are producing text that sounds really authoritative, but they
can be wrong. So they can just like insert errors. They can do
things like make up quotations. They can make up research papers. Our colleague, Jeff Brumfield,
who reports about science for NPR, he was able to get ChatGPT to just like fully invent a news
story that he never wrote. And some of this stuff is, you know, it is less serious. But then you
some of it's much more serious. There was a case where ChatGPT
fabricated an allegation of sexual harassment against a law professor. And so, again, even
though they have this format that makes it really seem like you are talking, A, to a person and that,
B, that the person is giving you authoritative information, that is not always the case. And so
it's really important to like
double check anything that you hear from a chatbot, even if it comes with a link to a source,
like does that source say what the chatbot says it says?
Right. So how should you, if you're going to use something like chatbot, like
how should, what role would it play in your work or your life?
I mean, I have found playing around with these things, you know, I think they can be really, first of all, just to be really fun, right?
They can be, it's fun to see what they can do, how they answer a question.
One of the questions my son, I was asking ChatGPT the other night when we were asking questions was, you know, why is Shaquille O'Neal so tall?
He's, like, obsessed with basketball right now, right? And, you know, the answer started off, like, pretty plausible. It was like, well, you know, these characteristics come from genetics.
But then it kind of went in this weird direction where it said, and also he became very famous
playing basketball. So the reason that Shaquille O'Neal is tall is because of his genetics and
also because he likes playing basketball.
Oh, right. That doesn't make any sense.
Right. It doesn't make any sense. But I think that it was really instructive actually to say,
like, oh, look, it got that wrong. And that was really obviously wrong. But if you were looking for information, you might not know that the thing it's telling you is wrong. So I think some
of the advice here is to just double check if it's, I mean, if you're actually getting concrete
pieces of information from a chatbot or some other kind of, you know, language producing software, actually just
check out the facts. Like, does this seem true? Right. You know, one of the folks I've been
talking to about using AI is a professor at the Wharton School, the University of Pennsylvania
named Ethan Mollick. And he teaches graduate students about entrepreneurship. And he's
really interesting. He has really embraced using AI in the classroom. And he even requires his
students to use AI as part of their work. But he's also wary about the ways that it can get
things wrong. And so this is how he described what he sees as the best way to use it.
You can think of it as like an infinitely helpful intern
with access to all of human knowledge
who makes stuff up every once in a while.
You know, I'm wondering if ChatGPT makes stuff up
because it's embarrassed if it doesn't know the answer.
Think about how we're even talking about it,
like saying that it knows or that it's making things up.
Like it's really hard to talk about these things without even like attributing some kind of intentionality or like kind of like personhood to this.
But it's really important to remember they're not thinking and they're not – they don't understand, right?
They can play chess against you, but that doesn't mean they have a conception of like what chess is or what a chessboard is. Like it's broken everything down into sort of this system of statistical
relationships. And that's where things can get really weird. You mentioned that you're
going through these exercises with your son with ChatGPT, and it sounds like you're both kind of
learning about its limitations at the same time. Is that something you suggest that parents do, that they kind of, that they talk to their kids about AI and its uses, but also many limitations?
I think it's incredibly important because, like I said, we're sort of in this moment where we're trying to develop norms around this stuff.
And I think a lot of parents are trying to figure this out. Like, you want to kind of be there with your kid and kind of walk them through and talk about
it and like have a conversation. And so, you know, some of this stuff, you know, is, you know,
you have to think about the age you're at, you know, my kid's seven, like he doesn't have
unfettered access to the internet or to any of these tools. And so we were just sort of talking
about like, you know, how this works, does that sound right? That doesn't, you know, maybe that's kind of weird, like how, or how,
how better should we ask this question to try to get it to give us the answers that we're looking
for, like to keep it focused. I would think, you know, I think for older kids, you know,
a really important thing to talk about and that adults should be aware about too is like,
if you're interacting with these systems, like think about the personal information you might be sharing. Probably don't put your personal
health information into it. I think there's also, you know, conversations to be had about
the ethics, like of why you're using this. What are you going to use this to create? I mean,
certainly with kids, that's going to come up in the context of school, right? I mean,
you know, what's happening in high schools and universities,
professors are trying and teachers are trying to deal with, you know,
what appears to be, you know, a lot of kids using ChatGPT to do their homework.
And so thinking about, you know, what you're using it for
and how you're disclosing if you have created something with AI,
something that you create might be shared, you know, out of context.
And, you know, what's your responsibility to label it and to say this was made with
DALL-E or this was, you know, I asked ChatGPT this and this is the answer that I got.
Shannon, thank you so much for being here.
This has just been super informative.
I feel like I learned a lot.
Thanks for having me.
And this is really me. This is not my AI voice. Don't worry.
Oh, my God. That terrifies me, by the way.
For more Life Kit, check out our other episodes.
We have one on how to find balance when you're spending too much time looking at screens and another on what to do when you're anxious.
You can find those at npr.org slash
life kit. And if you love Life Kit and want even more, subscribe to our newsletter at npr.org slash
life kit newsletter. Also, have you signed up for life kit plus yet? Becoming a subscriber to life
kit plus means you're supporting the work we do here at NPR. Subscribers also get to listen to the show without any sponsor breaks.
To find out more, head over to plus.npr.org slash LifeKit. And to everyone who's already subscribed,
thank you.
This episode of Life Kit was produced by Thomas Liu. Our visuals editor is Beck Harlan,
and our visual producer is Kaz Fantoni.
Our digital editors are Malika Gharib and Danielle Nett.
Megan Cain is the supervising editor, and Beth Donovan is our executive producer.
Our production team also includes Andy Tegel, Audrey Nguyen, Claire Marie Schneider, Margaret Serino, and Sylvie Douglas.
Engineering support comes from Josh Newell, Stu Rushfield, and Stacey Abbott. Thanks for listening.