PBS News Hour - Full Show - How to fight AI slop, according to Hany Farid
Episode Date: March 3, 2026. Pioneering digital forensic expert Hany Farid talks with Amna Nawaz about the dangers of artificial intelligence being deployed irresponsibly, why social media companies prioritize engagement over authenticity and how to sift through what's real and what's not. PBS News is supported by - https://www.pbs.org/newshour/about/funders. Hosted on Acast. See acast.com/privacy
Transcript
Hi everyone, it's Amna Nawaz.
Welcome to another episode of Settle In.
Well, now that anyone in the world who has a smartphone or a laptop
can produce a hyper-realistic deep fake,
how can you trust anything you see or hear online?
We're going to get into all of that today
with an expert named Hany Farid.
He's a professor at UC Berkeley.
He focuses on digital forensics and misinformation.
He's also the chief science officer at Get Real Security.
That's a company that focuses on detecting
deep fakes. We talked about how fast the technology has moved just over the last 20 years, how hard
it is to spot something fake online, and also what people in their everyday lives can be doing
to protect themselves and also get better at seeing what's real and what's not online.
So settle in and enjoy my conversation with Hany Farid.
Hany Farid, welcome to Settle In. Thank you so much for making the time.
It's great to be with you, Amna.
So we're going to talk about AI-generated images.
We're going to talk about digital forensics, about deep fakes.
You are, when people talk about you, they attach words like renowned expert and pioneer.
So I'm really excited to make sense of a very confusing world with you.
But before we get into all that, just tell me a little bit about you.
What led you into this line of work in the first place?
What's amazing about the work I do today, here in 2026, in the world of AI and deep fakes
and social media is we didn't really see this coming 25 years ago when I started my academic
career. So I like to tell students I started my academic career in the previous millennium,
which is in fact true. I started my career in 1999. And it's hard to remember 1999, but we
largely lived in an analog world. Digital cameras were still on the rise. Social media was
non-existent. Internet was burgeoning. And I started to sort of see a digital world in our future and
started to think about how we can authenticate digital media.
Because at least with analog, we had some sense that when we recorded film or a photo or a
voice recording, we could reasonably trust it.
And so what started 25 years ago as a bespoke, narrow field has really changed dramatically
through a few waves.
So the first wave was going from an analog to a digital world.
That's the early in mid-2000s where the internet became part of our lives, social media
became part of our lives, mobile became part of our lives. Things like Photoshop and Adobe products
for manipulating images and video started to become very sophisticated. But then, of course,
starting in around 2015-16, we saw a whole new wave of this with the power of AI. And suddenly we went
from a world where only sophisticated state-sponsored actors could manipulate reality to anybody with a keyboard and an
internet connection. And suddenly, we are now looking at a landscape where anybody can create images,
video, and audio of anybody doing or saying anything.
And so my academic career, you know, sort of has followed this trajectory from the
analog to the early digital to the now AI.
And what we focus on in my lab here at UC Berkeley and previously at Dartmouth College
was developing computational and mathematical techniques that can analyze images,
audio, and video and tell you whether they are real or not and if they've been manipulated
or altered in some way.
And way back in the 1900s, when you started
this work, I mean, when you say you were kind of focusing in on authenticating, this idea of
figuring out what is real and what is not, was that you worrying about what was coming down the
pike? I mean, why focus on that part of it? Yeah, yeah, it's a great question. I started worrying
about it because of the courts of law. That was really all I was worried about. I wasn't worried
about disinformation. I wasn't worried about state-sponsored actors. I wasn't worried about deep
fakes. I was thinking about courts of law, which is when we introduce evidence into courts of law,
how do we authenticate it?
And it was really quite narrow and quite focused.
And then what happened over the years and now decades is that we started to see citizen journalism.
We started to see reporting around the world not from people like you, but from people with a mobile device.
And then, of course, we started to see disinformation campaigns.
And then we started to see state-sponsored actors.
And so it just kept growing and growing. Our, as we call it, convex hull kept opening and opening, and more and more things started to become relevant.
But the first days of it really were courts of law. We were worried about evidence in courts
of law. And I long for those days, because say what you will about the courts, one of
the benefits is that they move slowly. So we had time. We could think. We could analyze. We could
create. But now something happens and, you know, the stopwatch starts. And you've got seconds or minutes
before these things go supernova viral. And suddenly, you know, our world got very, very big and very,
very complex. I mean, the speed with which things move now and the speed also with which the
technology has developed. You mentioned sort of the democratization of that ability, that technical
ability across millions and millions of people, too. How would you characterize that? Like,
where were we back then versus where we are now? So two speed dimensions. So one is the speed with
which technology is advancing. And we used to measure these things in 12 to 18 months.
the cadence of upgrades to your latest laptop, the latest device, the latest version of Photoshop
was roughly 12 to 18 months.
Yeah, that was like every new iPhone release, right?
Like when new phones were coming out.
We're lined up outside stores.
Yeah, exactly.
But now it's 12 to 18 weeks.
Wow.
Sometimes we measure it in 12 to 18 days.
The cadence is so much faster.
Now, in fairness, we are in the very steep part of this AI revolution, but I don't think this
is going to change in the coming year.
So that's speed vector number one.
Our adversary, if you will, is moving at a much, much faster pace.
But, of course, the other speed is the speed with which information moves online.
We went from the 24-hour news cycle to social media.
And now the half-life of a social media post is about 90 to 120 seconds.
Half of all views happen in about a minute and a half.
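(The half-life figure quoted here can be turned into a quick back-of-the-envelope calculation. Below is a rough illustrative sketch, assuming views fall off exponentially with a 90-second half-life; the `views_fraction` helper is an invented name, not anything from the interview.)

```python
def views_fraction(t_seconds: float, half_life_seconds: float) -> float:
    """Fraction of a post's eventual views accrued by time t, assuming
    the viewing rate decays exponentially with the given half-life."""
    return 1 - 0.5 ** (t_seconds / half_life_seconds)

# With a 90-second half-life, half of all views arrive in the first 90 seconds...
print(views_fraction(90, 90))             # 0.5
# ...and roughly 94% within the first six minutes.
print(round(views_fraction(360, 90), 2))  # 0.94
```

In other words, on this model a correction posted even five minutes later reaches an audience most of which has already moved on.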
And that is a speed that we are not used to, with, by the way,
zero editorial control, right? There's nobody policing these networks anymore. And so these things
spread like wildfire. And the thing you have to understand about social media is that not only
does it not care about real or fake, truth or lies, it actually prefers, algorithmically, the spread
of mis- and disinformation because that's what leads to user engagement. So the algorithms have
learned how to spread the most salacious, outrageous, conspiratorial content because that's what
the billions of people online click on. And so in many ways, we've known this for a long time,
that lies spread much, much faster than the truth, which of course adds a whole other
complexity to the speed with which we have to respond and also the consequences for getting it
wrong. I mean, you just said something fascinating, though. It's not just that people are spreading
these things because they do. It's not a coincidence. Rage bait was the word of the year last year,
right? But it's that the algorithms actually prefer them over real information or real images. Is that
right? That's 100% right. And the reason, of course, is because the business model of social media,
think X, Facebook, Instagram, TikTok, et cetera, is user engagement. The more you click, the more ads they
deliver, the more money they make. And so the algorithms, they didn't set out to burn the place to the
ground. They didn't set out to do that. It was learned. And, you know, you could blame the social media
giants for this, and I think we should. But at the end of the day, we're the ones clicking on those posts.
We are the ones teaching the machines that this is what we will engage with.
And so, yeah, it's learned that, well, when the user clicks on this, give them more of this.
And rage bait works.
Clickbait works.
We click on it.
We're going to talk about all of this in a lot more detail.
But let's just back it up for a second for people coming to this without your level of
expertise, which will be everyone who's listening, including me, kind of walk me through
definitionally here.
What are we talking about?
When we talk about a deep fake, what are we talking about and what does it take to
create one? Deepfake is an umbrella term, and I've never actually liked the term very much.
Generative AI is maybe the more benign term. So deep fake or generative AI is a broad term
for using modern AI machine learning tools to generate images, audio, and video. And they come in many
different flavors. So let's enumerate a few of them. So with images, the one probably people are
most familiar with is you type a caption. By the way, I still say please when I prompt AI chatbots.
I don't understand it. I think it's weird, but I do.
You put please into your prompts always?
I do, I swear to God.
Does that affect the outcome?
I'm curious.
I don't think it does, but one of my students told me, this has haunted me, is be nice
to the chat bots because when the AI overlords come, they're going to remember that you
were polite to them, and it sort of haunts me when I prompt them.
So I'm just going to take her advice right now.
I'm going to start doing that now.
Thank you for that.
I know.
You can't be too careful.
So, an image of a penguin wearing a baseball cap,
walking down Times Square.
And it will generate an image exactly of that.
So this is what we call text to image generation.
And that has gotten just incredibly good.
And there's many, many different AI models for doing that.
And you are only bounded by your imagination.
Type what you want, and it will generate an image.
With audio, it goes something like this.
I can take about 15 seconds of your voice.
I can upload it to a service and say, clone this voice.
and then I can type and I can get you in your voice to say anything I want you to say.
It's called voice cloning.
It's also a type of a deep fake.
Now, with video, there's many, many different incarnations of this.
So one is the same as text to image.
There is text to video.
A video of a penguin wearing a baseball cap walking down the streets of New York.
And it will generate a video of that.
So think Sora, Veo, Seedance, Kling. There's a whole bunch of those.
Now, there's another type of video which is what I call a talking head video where I can now take your voice that I just described, a single image of you.
I can take a screenshot right now.
And then I can drive an avatar of you talking.
So I will sit in front of the camera the way I'm right now in real time.
It will modulate my voice to sound like you.
And as I move, I will animate an avatar of you.
And so now I can get on a call, both recorded or in real time.
and I can sound and look just like you, or a president of the United States or the CEO of a company
or whatever I want.
Wait, wait.
I'm going to stop you right there for a second.
You need a single image of me and some version of my voice, a few seconds of my voice,
and then you could impersonate me in video form.
Is that right?
Saying whatever you want me to say.
Yeah.
Yeah.
And the question you've got to be asking yourself right now is, am I recording this?
And I mean, I could do this right now.
In fact, I don't even need this.
I just go online.
And you have enough of a digital presence where I could do this with just what you have online of yourself.
And how easy is that?
We were talking about the sort of historic trend of deep fakes.
So it used to be 10 years ago that the people who were vulnerable to this type of attack were, well, people like you who have a big digital footprint or the Scarlet Johansons of the world with a huge digital footprint.
But now we have gone from needing hours of audio and hundreds of images to 10 to 15 seconds of voice
audio and one image, which means that the threat vector has gone from people like you and
Scarlett Johansson to, well, anybody, anybody who has any type of social media presence
or online presence. And that's a very different threat vector for us as a public. Who is
vulnerable to this? I mean, I went immediately to the that's-terrifying reaction, because it really
alarms me in terms of the stories we cover and the kind of things we focus on. But I'm just going to
ask, just to put it out there: is there a good use case here?
Like, is there mostly fear built around this idea? Or do you see the potential for good in this technology as well?
First, I love this question because I think it's an important one, and we shouldn't be fearmongering.
So a couple of things.
One is, when I describe non-consensual intimate imagery, child sexual abuse, small-scale fraud, large-scale fraud, disinformation, these are not hypothetical.
These are happening every day to individuals, organizations, societies, and democracy.
So that threat is not, oh, this may happen to us at some point in the future.
This is happening today, and it will continue to happen, number one.
Number two is, to your question now, are there interesting and positive applications of the technology? Sure, of course there are. It's incredibly creative. I can imagine, by the way, in the coming years that the next blockbuster movie will not come out of Hollywood, but it will come out of the mind of a highly talented 15-year-old in their bedroom working with generative AI who can now create entire scripts and videos and music and sound, and it's going to be amazing. There are really, really exciting applications. But now the question you want to ask is, the
counterbalance. Okay, let's factor in the creative side of things and then the threats to individuals,
organizations, societies, and democracies. And I would say the threat is right now outweighing the
social benefits. But that's okay, because now our job is to start putting in guardrails,
technological, policy, regulatory, and liability, to counterbalance.
Everything in our world has positive use cases and bad use cases. And our job is to figure out
how we counterbalance. And every once in a while, we say, okay, look,
the bad use cases simply outweigh the good use cases.
We're not going to allow this technology until we can figure out how to counterbalance.
So I am not, and nobody is, saying get rid of generative AI, get rid of AI.
But what we are saying is let's not repeat the mistake of the last 25 years with how we have allowed technology to be weaponized.
Let's put in thoughtful guardrails.
By the way, we do this in the physical world: when we release products, when we release medicines and we sell food and we build airplanes and we sell
electronics, there are guidelines, there are safety guidelines that say, yes, this stuff is great.
But if we're not careful, it's going to kill us. Okay, let's do the same thing with the internet.
The internet has had its time of move fast and break things. We now recognize that that policy
simply doesn't work. I want to talk a little bit more about the guardrails and what you see
as solutions, bright spots ahead, but also to make it real for people,
because sometimes we do focus on the very high-profile instances in which we see deep fakes,
generative AI having some sort of negative impact, right?
I mean, I haven't shared this story with you before, but I was the target of an AI image
and video online because someone took my voice and made it sound like I was saying something
I was not and put it over a fake video.
Now, thankfully, the video was so fake you knew it wasn't real.
It was like recreating news events that didn't actually happen, but it was my voice.
And it was clearly enough my voice that people flag me because I have a recognizable voice.
And by the time it was flagged to me, it had been viewed more than 17 million times.
And the efforts to try and take it down were incredible.
But in terms of what people may see in their everyday lives, not high profile people or
politicians or anything, where can they be targeted?
When you talk about the threat vector for everyday Americans, where is that?
Yeah, I'm glad you asked that.
And look, maybe we accept as a public figure there's some amount of risk to that for people like you.
I don't necessarily think that's an okay thing.
But, okay, I'm going to concede that point for a moment.
So here are the things that we are seeing for people not like you, the average citizen.
So we are seeing things like parents and grandparents getting phone calls in their kids' voices saying, Mom, Dad, Grandma, Grandpa, I've been hurt.
You need to send $2,000 or, you know, I won't be able to get my car out of the impound or I'm going to jail.
And we have seen these types of scams
particularly hitting senior citizens, who are already vulnerable to these types of cyber attacks.
That's number one.
We've seen children, horrific things happening to children where people will take a photo of a child
that is posted online, make an explicit version of that where they are, for example, clothing
has been removed.
They send it to that kid and say, if you don't do this, we will do that.
And horrific things are happening to those children.
So the small scale fraud to adults, horrific things that are happening to children is happening today.
Think also all the phishing and scams that you've been seeing on your text messages and on your email are now being supercharged by generative AI.
Where you're not getting a text message from what seems to be your boss, but you're getting a voicemail or a video call from them.
And it is not who it purports to be.
So a lot of the cyber threats that we have been dealing with over the last 10, 20, 30 years are now being amplified,
simplified, and accelerated, both by AI bots that are driving them, so you don't need a human
in the loop, and also AI-powered in terms of the deep fakes.
And those, these are not, by the way, hypothetical, these are happening.
And the last one I will say, particularly since we have a midterm election coming up here
in the United States, is disinformation around what's happening in the world.
And that very much affects all of us as individuals and how we think, how we reason, how we vote,
and what we know about the world.
And that is absolutely impacting every citizen, particularly those who get the majority of their news on social media.
So you run an expert-led, sophisticated lab and team to work on the digital forensics piece of this, the authentication piece of this.
For everyday Americans, if someone gets a call from someone who sounds like their grandchild, crying and asking for help and $2,000, or they see an image of someone who seems to be someone in their family in a compromising position,
how can everyday people understand what's real and what's not?
So there's bad news and more bad news.
So the first bad news is in addition to developing computational mathematical techniques to detect these things, we also run perceptual studies.
We show images and audio and video to the average person on the internet.
And we ask them, can you tell the real from the fake?
And this is us setting you up for success.
We're putting you in a lab environment.
We are telling you exactly what to do.
We're telling you that some of the things you're seeing are real and some of them are fake, and we simply
try to see how you do, versus, call it, two in the morning, rage-scrolling on social media, where you
are not set up for success. And what I can tell you is that when it comes to images, audio,
and video, the average person is not much better than flipping a coin. They're basically at chance.
It's over. And even if it wasn't over, and even if I could tell you, look for this, listen for
this, that advice will simply not work three months from now when the next generation of
technologies emerges. And so using your senses, you have to understand, is at best limited. Okay, so that's the
bad news on top of the bad news. The slightly better news is this conversation right here, raising
awareness, letting people know that when they get a phone call from what sounds like their loved one,
it may not be them, knowing that when you get a video call or what you see online may not be real,
I think is really important to public awareness. Now, I will tell you what my wife and I do: we have
a code word. So if one of us gets a call from the other
and they're saying something that is out of the ordinary,
we have a code word.
And it's only the two of us.
I actually have code words with other people I work with in sensitive cases.
And I love this because it's an analog solution to a very modern high-tech problem.
And so you can have a code word with your family.
In fact, you should do this.
And it's super easy to do.
You can also hang up.
I know that sounds harsh, but call back.
Because a lot of the time, what is happening is that the phone numbers are being spoofed.
And when you call back, it is not what it purports to be.
So awareness that this can be going on is really, really important.
And by the way, that's true with spam and phishing and malware and ransomware.
A lot of this is, don't click on a link from an email that you don't know.
A lot of this is just that we have to keep talking about this as a PSA.
But then starting to put some guardrails in place.
You know, code words are super, super easy to do.
Call back is super easy to do.
Code words with your family.
So meaning if you don't hear this word, you know this is not someone that you trust.
Is that right?
Yeah, yeah, exactly. Yeah, super easy. It doesn't take anything. Do I remember it? I have to check with my wife every few weeks. I'm like, what's the code word?
It's almost like in the movie Inception, like the totem, right? It's like the one thing you can turn to to figure out if you're in the dream or if you're in reality, something you come to again and again.
I don't love the analogy, but yes, because it's, it's, yes.
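(Farid's code word is deliberately analog, but the underlying idea is a standard challenge-response check. The toy sketch below makes that explicit; `FAMILY_CODE_WORD` and `caller_is_trusted` are invented names for illustration, not anything from the interview, and the point of the analog version is precisely that no device needs to be involved.)

```python
import hmac

# Illustrative placeholder; a real family would agree on their own word offline
# and never post or store it anywhere an attacker could scrape.
FAMILY_CODE_WORD = "persimmon"

def caller_is_trusted(spoken_word: str) -> bool:
    """Return True only if the caller produced the pre-agreed code word.
    hmac.compare_digest does a constant-time comparison, a good habit if
    a check like this were ever automated."""
    return hmac.compare_digest(spoken_word.strip().lower(), FAMILY_CODE_WORD)

print(caller_is_trusted("Persimmon"))   # True
print(caller_is_trusted("emergency"))   # False
```

The key property is the same one Farid describes: the secret is shared out of band, so a voice clone that has only your public audio cannot produce it.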
Try to make it relatable for the rest of us here, Hany. Let me ask you this, because the fact
that it's evolving so quickly, the technology is developing at such a speed. You told us a little bit
about what we can do as everyday citizens trying to figure all this out, but you guys are working at the
highest levels here at your lab. Tell me about what your lab is doing. How are you responding to
this in real time? This has always been a hard problem. Playing defense is always harder than playing
offense, because the way this works is, my adversary, if you will, gets to work in quiet.
They get to work for as long as they want, and then they release a product, and then my starter gun goes off.
And now we have to start playing catch up.
So we are fundamentally always behind.
Always.
That's just the nature of what it is.
So what we have been focusing on is we want to be able to detect fake images, fake audio, fake video,
to do it in real time, and to be able to do it for the types of things
that we find online.
But what we have decided to do both in my lab, but also I co-founded a company called Get Real Security.
So over at Get Real as well is to focus on images, audio, and video of people.
We're just going to narrow the scope a little bit for now because we are so overwhelmed by the complexity of the problem.
So when we get a video of a person behind a microphone talking, how do we authenticate her voice?
How do we authenticate her video?
And if that person says she is Amna, how can I make sure it is?
So can I use biometric facial identification?
Can I use biometric voice identification?
Are there things about your environment, for example, that I know about, that I've seen you in this environment before? Are there
things about that microphone that I know about? And we use that type of knowledge about an individual to
protect the Amnas of the world, the CEOs of the world, the members of the Senate Foreign Relations Committee
of the world. And so the good news and the bad news is, so the good news is we are getting better
at this and the defenses are getting better. The bad news is, and I think this is not going to shock
anybody, they're not perfect. Nothing is 100%, right? And so I think it's really, really important in this
space to not think of cybersecurity as elimination.
We don't eliminate threats.
We mitigate them.
And that's true in the physical world, too.
I lock my front door when I leave the house in the morning, but I'm also aware that
somebody can still break into my house, but I've made it more difficult.
And I think that's the way we have to think about this, is that this is one layer, technology.
That's what we do.
And then, of course, there is education.
That's what you and I are doing right now.
And then I think we need our government to wake up and start thinking about a regulatory
landscape to put some guardrails on this technology. I think we need the corporate world to start
taking more responsibility for how their technologies are weaponized. And the goal of those four
things is to come together and to take advantage of this amazing technology, but also to keep us
safe. One of the places people may have seen this in headlines and in news recently,
and it's something you've been raising the alarm about, was people using AI enhancement tools to try to
identify people in videos, right? We saw this in Minneapolis with people trying to use these tools
to identify the masked ICE agents who had shot U.S. citizens there. We saw it with people trying to
enhance video and identify the man who shot and killed Charlie Kirk. You've said that this is
a wholly inappropriate use of the technology. Tell me about that. Why? So I have to say,
I've been doing this for a long time and I'm pretty good at it, and I didn't see this coming. I
didn't see this application of AI and how it was going to be misused. So let me set the
stage a little bit, and I'm going to talk specifically about this. So, setting the stage, we are
seeing three types of applications of AI. People using AI to whole-cloth create events and people that
don't exist. So just outright lies. This is not who it purports to be. This is not a shark swimming down
Route 1 after a thunderstorm. We are also seeing AI being used to recreate events that actually
happened. So, for example, when the U.S. invaded Venezuela and took President Maduro, people were
creating fake images and video of that event. The event happened, but what was being shared was not.
And that one's a little bit more complicated, right? Is it false? No, that did happen, but that is not
an example of or an image of that happening. And then number three, as you were just describing,
is that now people on the internet have taken to using AI to quote-unquote enhance. Think of
the silliest stereotype from your favorite crime drama on television.
You hit a button and you go from four pixels to an HD image, right?
So what we have seen people do, for example,
is take an image of an ICE agent who killed Renee Good.
And he was wearing a mask.
So his face is covered from the base of the nose down. And people said,
we want to identify him,
and asked AI to do that.
AI doesn't know anything.
So it simply does it.
And it creates an image that looks like a person.
It just wasn't the ICE agent.
In fact, it led to a misidentification.
And that person started getting death threats.
Same thing happened with the Guthrie case, in fact, arguably worse because that was an image
where somebody was wearing a full-blown ski mask.
And they gave that image to an AI and said, what does the person look like underneath?
The AI gladly did that.
And the person responded, well, this is what I got. Better than nothing.
And my response was, no, this is worse than nothing because you are absolutely making stuff up.
This is the mother of all hallucinations.
AI doesn't know what somebody under a ski mask looks like.
This isn't magic.
Right?
I don't think the people doing this are trying to create
harm.
I think they simply don't understand the technology.
And because the technology is so enticing, you ask it a question, and it's very certain
of an answer, even when it doesn't know the answer.
And so you can see how this technology is being misused.
And we've seen multiple cases of this.
Can I stop you for a second there?
You used a word there that I want to make sure people understand hallucinations.
What are you talking about when you mentioned hallucinations and AI?
Yeah, so this is the term.
I don't actually love the term, but it is the term that we use for when you ask AI something,
like your favorite chatbot, ChatGPT or Gemini or Claude.
And you say, tell me about X.
And it says something very definitively, but it has made it up.
It's actually not true, simply untrue.
And that's why the term hallucination is overly generous.
It's just wrong.
It's just wrong.
Now, when it comes to images, it will gladly take an image of a person that is masked and generate an image with them unmasked.
I can tell you, we've done the study where when you do that, it is completely wrong.
It has no relationship to the biometric identity of the person you started with.
But, you know, when you look at the image, it's enticing.
It's accurate.
I mean, it's not accurate.
Sorry, it's high resolution.
And it looks like a person.
And so people are like, oh, this must be what they look like.
I don't think it comes from a bad place.
I think it comes from just not understanding the technology, and certainly not understanding the limits of the technology.
And I think this is a sort of a whole new level of danger that we are seeing.
And we saw it with Charlie Kirk.
We saw it with Minnesota.
We saw it with the Guthrie abduction.
And we're going to see more of this.
There's another one where somebody, it was coming out of the Epstein files where somebody took a profile photo of somebody like this and said, what do they look like straight on?
No, no, no, no, no.
You can't do this.
This doesn't work.
This isn't CSI, push a button, enhance.
It doesn't work this way.
So it's people trusting the AI more than they should
because they asked it a question.
It gave them an answer, and they think,
well, this is the technology, so it must be right.
Yeah, I think that's exactly the right way to put it.
They trust it, right?
And they trust it because they don't understand it.
There's also, we've seen instances
of very intentional image manipulation.
We see this in political circles often as well.
We've seen even at, you know, the level of this
administration, this government, using AI-generated and digitally altered images and videos on
their social media platforms. Some of them are very obvious, right? I know recently we saw the
very racist video featuring the Obamas and a bunch of other people as animals and characters
from the Lion King. But others have been photos that have been doctored and been altered,
and it takes time for the news and reporting around it to show that it was altered. And I'm
thinking about the woman Nekima Levy Armstrong, who appeared to be crying when she was detained. A photo in which
she was not crying was altered to make it look like she was crying. This is one of the women who
went in as a protester to the Minneapolis church. Is that level of intention and dissemination,
at that level, something we've seen before? Well, we have certainly seen state-sponsored
disinformation campaign from Russia, from China, from North Korea, from Iran, and from around the
world. We have not seen it from the United States. So in that regard, it is new. It is new to see
this level of deceit coming out of the White House. And look, I think it is wholly inappropriate for
the President of the United States and for the administration to be posting videos of the Obamas
the way they did, or videos of Donald Trump in a jet plane dumping brown substance onto protesters.
wholly inappropriate. And I think it demeans the White House. It demeans the office. And it makes you think,
why should I ever trust anything that you post anymore? And there's going to be good reason for us not to.
Well, walk me further down that for a moment. Because when you put it that way, it doesn't make sense, right?
If there's a body that wants to be trusted, that wants to deliver a message to people,
why continue to undermine it by putting out images and videos that call into question, whether they're real or not?
When you've seen that strategy in other places, what has it led to? Why is that done?
Yeah. I don't have a great answer for you. I think the generous interpretation is they're not
thinking ahead. That short term, we think this is funny. There's a political gain to be had and we simply
aren't thinking forward. I don't have a great answer for you. And honestly, when the White House was
asked, why did you do this? Their response was disheartening. The response was, well, why not? So I just don't
think they're thinking this through very carefully. And that is worrisome. And it's not just
worrisome. Look, this administration will come and go. We know this. But you are eroding trust in our
institutions that we need not just for the next three years, but potentially a long time out. And that
is really worrisome. Look, when one mainstream media outlet gets it wrong, it creates a shadow over
all of them, right? We're all in this together. We can disagree on lots of things. It's okay to have
political disagreements, but we have to be able to trust institutions and organizations,
and we all have a huge, huge reason to want to do that, regardless of our political affiliation.
We have seen, you mentioned, the spread of misinformation and disinformation and how much that's
changed just over the course of your academic career. And a lot of that was before
generative AI became as good as it is and as widespread and as easily accessible as it is.
I think anyone listening right now is just going to wonder, like, how can you trust anything
you see anymore? So first of all, 100% right. Look, even in the analog world, we had mis- and
disinformation, but it was reasonably controlled, right? Because we didn't have the internet.
We didn't have digital. We didn't have deep fakes and AI. And then, of course, as the digital
revolution came, that started getting amplified. Social media amplified it more. And deep fakes and
generative AI amplified it even more. And so everything is a multiplicative amplification factor. And here
we are where I'm going to get to your question. How do you believe and trust anything you see
anymore? And I think the short answer is on social media, you don't. Stop, for the love of God,
trying to get news and information from social media. It's not what it was designed for.
It's not what it is good at. It is not a place to become an informed citizen. If you want to be
entertained there, if you want to be rage baited there, I think that's a problem, but fine.
But that is not a place to figure out what is going on in the world.
I mean, how many years do you have to get lied to before you start getting mad and start saying,
okay, I give up.
So I think, and look, I'm not saying media outlets are perfect, but they get it right way, way,
way more often than they get it wrong.
And you can't say that about social media.
You can't even say that about the CEO of some of the social media companies.
They get it wrong way more than they get it right.
And so we have to return to our trusted sources.
We have to understand that people like you work really hard to figure out what is going on in the world.
They talk to people like me, to understand it and to bring that information to you.
And you don't have to figure out what's going on in the first 10 seconds of an event.
There is a speed-accuracy tradeoff.
The faster you get your information, the less accurate it is.
I would much, much rather wait until tomorrow morning and know what is going on than be lied to for the intervening 12 hours.
And we just have to change that culture that people have to understand that social media is not designed as a reliable source of information.
It's not. It never has been, and it never will be.
Can I ask to follow up on that, though, and thank you for the very kind things you say about me and my colleagues here.
But the majority of Americans do get their news and information from social media.
That's where we are right now.
What you're calling for is an enormous cultural shift, the likes of which we are definitely not
trending towards, right? So just culturally, short of even policy guidelines or companies and CEOs
completely changing how they do their work, how does that kind of thing start? Yeah. So first,
I'm not naive about this. This is a massive cultural shift, and not just here in the United States, but globally.
And arguably, what we are seeing outside the U.S. is even worse in some ways because of the echo chamber
of how social media has dominated other parts of the world. I am aware of that. But I'm also old enough to
remember when something like one-third of Americans smoked cigarettes. And we thought, well,
that's just the way it is. And we smoked in grocery stores. We smoked in restaurants. We
smoked in airplanes, for God's sakes. And it was the norm. But we changed because we realized that
this stuff is killing us. And I would argue social media is not that different. Now, I don't think
that change will happen quickly. I don't think it will come without pressure from Congress, from our
regulators, without liability from the courts holding these companies responsible for the harm.
The parallels to tobacco are not far off.
We have had things in our society that have been decidedly bad for us for a very long time,
and it took decades pushing against corporate interests, understanding what is happening here.
And I think this will take that amount of time.
Now, the good news is I think there's at least a glimmer of hope on the horizon.
So if you look, for example, today, there are massive litigations happening around social media
and its addictive properties and the impact on children in a way that, ten
years ago, I didn't think we would see. And so there is movement. Australia has
banned social media for kids under the age of 16. The EU and the UK and other parts of the
world are considering similar legislation. I think there is an awakening that while there are
positive aspects to these technologies, to social media, it is clear the harms are unambiguous.
And now we have to start responding. I think we're late to the game, but at least we're moving in
the positive direction. So I think there is some positive movement here, but it will take a lot of
conversations. It will take a lot of serious people thinking about this in a serious way. And it will
take fighting back against massive, massive global corporate interests. But I don't know what the
other option is. You mentioned that some of the court cases right now, we've been covering those.
And the landmark one right now really seems to be the one in which Meta CEO Mark Zuckerberg
just testified recently about the socially addictive nature of Instagram and other
platforms as well. And we've seen other social media companies settle, right? They said they don't want to go to
court. Facebook and Meta seem to be fighting this. They're facing thousands of other related lawsuits.
Do you feel like that and this moment could lead to a change in the incentive structure for CEOs,
for the people who've been driving this content? Yeah. And I do, and I'll tell you why. Because up until fairly
recently, these social media companies have been able to do whatever they want with very little financial
implications. Yeah, they pay a fine every once in a while, but it is nothing. It is nothing compared to
profits. You effect change when corporations have to internalize not million-dollar fines,
not $10 million fines, but $100 billion fines. When they start to internalize that type of financial
penalty is when the business model changes. But up until then, they're doing a calculation. Yeah,
we get bad PR. We've got PR people for PR problems. Yeah, we pay a $10 million fine.
We don't care. It's a rounding error. It's what we pay on paper clips every day. It doesn't matter. But when this starts to become 10% of global profits, then the calculus changes. They internalize that liability and they change the business model. We know this. We know this from product safety. When did companies start making better products that were safer? It's when the liability laws said: you create a product that you knew or should have known was going to create harm, we are going to sue you back to the dark ages. And products got safer. Medicines got safer. Food got
safer. Transportation got safer. We're going to have to do the same thing with social media.
We let them off the hook for 25 years. We've created these global monstrosities,
but I think there is an awakening. And I think it started outside the U.S., but now,
thanks to some of these lawsuits and other similar ones, I think there is going to be an awakening.
And it's, look, it's not a complicated calculation, right? The fines have to outweigh the profits.
That's it. That's it. And then change happens. It's not complicated.
Is there something generationally, too, that gives you hope? And I ask this because, so I'm in my mid-40s,
I'm part of the generation that grew up with an analog childhood and has lived most of its adult
life in the digital world, right? So my skepticism has been learned over time. My children are very
skeptical of what they see sometimes online. And I mentioned this because literally my 10-year-old
the other day saw a photo, an actual photo of a friend's wedding. And it was a beautiful photo.
It was so beautiful and perfect that she looked at it and said, oh, that's AI. That was her first reaction.
And so I wonder for the generation that's growing up fully digital, fully aware, fully familiar,
is there a better sense of skepticism?
Yeah, yeah. Two things. One is yes. But we have to be careful, too. We don't want to get over skeptical.
We don't want to say everything is fake, right? So we want to be careful. But here's what I can tell you.
I'm on a university campus. And one of the joys of being on a university campus is you get to see every generation as they are coming through.
And there is a new generation coming.
And they are thinking about this differently.
I also think they don't care about social media.
They're not on Facebook.
There's a little bit of Instagram, a little bit of TikTok.
But, you know, they're also, I see kids with Polaroid cameras, analog cameras doing this with their photos.
That makes me so happy.
Flip phones are back, right?
I think there is a realization that social media is toxic.
And I think this generation is starting to, you know, use technology in a very different way
than the previous generation. So I do think there's some hope here. I think we have to help them.
I think we have to enable them. One of the great things about the Australian ban on social media
is that it empowered parents. Because, you know, you know what happens:
all your friends are on social media, so now you are on social media,
and parents have to cave. And this is what governments have to do. They have to empower parents.
You can't let Mark Zuckerberg get away with saying, oh, we've got parental
controls. You can do what you need to. No, no, no. This has to come from the top down.
So I am hopeful that this generation, I think, is being more critical, is being more thoughtful, has learned from the lesson of the previous generation.
But I think we have to empower and enable them.
So social media bans are definitely another tool.
When you look at other potential policy steps that could be put into place, what else can leaders do right now?
So I like the ban for 16 under.
I think it should be 30, but I'll concede 16 is a fine age for now.
I mean, by the way, this isn't an outrageous thing.
I mean, all kinds of things we don't allow kids to do before they're of an adult age.
So this doesn't seem particularly outrageous.
And the studies show overwhelming evidence.
The former surgeon general has said the time for debate is over: there is overwhelming evidence that social media is negative for kids.
Are there positive aspects to it for certain marginalized groups within that age group?
Of course there are.
But you have to look at the broader issue here.
So in addition to putting bans or restrictions at 16,
18, whatever you want that age to be, I do think there are things we can do.
So one is these tech companies have to stop being let off the hook from a regulatory
perspective.
We have deregulated the tech industry in a way that we would never allow in the physical space.
And we need to start thinking about how these tech companies should be held responsible for
creating safer products.
Liability.
I think when the liability starts coming down hard from the courts, it's actually
going to start to effect change.
And then here's the big one.
The winners from social media, you know, the five giants of tech, are going to be the five winners in AI because those are inextricably linked.
We have virtual monopolies and we need to allow for competition in this space.
You can't allow these global monopolies to keep dominating.
We have to make room for better business models, better CEOs, better technology, safer technology.
And that can't be done when you have these massive 800-pound
gorillas that can squash any competition at will. So I think economically, we have to create a
lively ecosystem that allows small companies to compete with these massive giants. And by the way,
when Google tells you, no, no, no, you can't do that, that is bad for competition, you should
remind them that when they were a burgeoning startup, they went to the Department of Justice crying like
a bunch of babies because Microsoft, the then 800-pound gorilla, was beating them up. And the DOJ stepped in,
and it worked. We created Google. You don't get to then close the door and say, well, we don't want that kind of regulation anymore. That's why you exist. It's because the government stepped in and said, stop using your monopoly to squash competition. And it worked. So let's do it again. Let's protect the companies now from Google, which was protected from Microsoft.
What about more people like you and the people who work with you and the students that you teach? I mean, I'm old enough to remember when everyone thought you could future-proof your career by learning to code. And now
there's a program that can do that for you in seconds online. Are we doing enough to train the next
generation of digital forensic experts, people like you? Yeah. Yeah. No. And there's a couple of reasons for that.
So one is there has been over the last year a general assault on the academy, international students,
researchers like me who research deep fakes and disinformation. There has been assault on our funding,
on the type of work we do because it's not aligned with this administration. And that's been a
real problem, and that is a decades-long problem, because you have to feed the pipeline, right? It takes a long
time to educate these kids. But the other reason is that if you look at everything from where the
venture capital money is to where the young people want to go, it's on the generative side.
The young people and the money and the power and the fame are all on the generative side. We are
100-to-1, 1,000-to-1 outgunned. The vast majority of students want to be on the other side of the
aisle. And I, you know, part of that I think is financial. They see a windfall there and they don't
necessarily see a windfall in this type of work I do. Nobody wants to pay billions of dollars for
defense. They'll pay billions of dollars for the offensive tools, but not for the defensive.
Now, the good news, though, is since we were talking about young people, is I do think that this
generation coming up is more idealistic. I don't think they all want to go work at Facebook and Google and
sell ads to get people to buy things they don't need. I think they want to have a sense that the work
they are doing is meaningful, important. And so I'm hopeful for the generation that is making
its way through college now, as opposed to the previous generations that I think got bamboozled
by the tech industry. You've been doing this work for decades, right? I mean, you can speak
about it with such familiarity and some fear, but a lot of, some optimism as well in terms of what
needs to happen. For anyone listening to this conversation who hasn't really thought deeply about
it before, who's trying to find their footing right now and maybe a little worried because of
everything they just heard. What should they take away from this? I mean, what are, you mentioned
the code word earlier in terms of being able to figure out what's real and what's not if you
are specifically targeted. But what else should people be doing right now on an individual level
so they don't just feel like this is all happening to them in the moment? Yeah, yeah. Well, I'm
stockpiling food and water, so I don't know. No, sorry, that's not helping.
That's a terrible thing to say.
Not helpful, Hany.
Okay, a couple of things. One is you got to get smart fast.
You can't say, wow, this whole AI thing.
This is like saying, oh, the whole internet thing, the whole technology thing.
We've got to get smart fast.
And you can't ignore it, right?
Like it will impact you.
Yes, this is coming.
This is coming.
I don't know if it's going to be today or tomorrow or next year, but our lives, both
personal and professional lives and certainly the lives of our children and grandchildren are going to be impacted.
And I know it is unfair
to say, well, you've got to get smart about this stuff, but you do. You've got to start
understanding this. We have got to start voting for politicians who want to protect us and not
protect five people in Silicon Valley. We've got to start voting with our feet and start telling people,
look, this is not okay, right? If you don't like the way Mark Zuckerberg and Elon Musk and Jeff
Bezos do things, stop using their products. And I know this seems dumb. You're like, I'm one of a billion
customers. What does it matter? But it does matter. The same way voting matters, the same
way it matters that you recycle. Small actions matter, right? And start voting with your feet.
Start voting with your wallet. Start telling these companies, this is not okay. Let me give you an example
of that. When ABC canceled Jimmy Kimmel's show after pressure from the administration,
hundreds of thousands of people overnight canceled their Disney and Hulu subscriptions. And it effected
change. They reversed course. And two days later, he was back on the air. We have power,
even when it doesn't seem like that.
And so let's exercise it.
Let's demand more of our corporate overlords.
Let's demand more of our elected officials.
And I know it seems like, well, I'm just one voice.
But yes, you're one voice.
But talk to two people and they'll talk to two people and they'll talk to two people.
And that's how change happens.
Grassroots change can happen, but we've got to decide we are not going to be held victim by five multi-multi-billionaires in Silicon Valley.
I mean, you are in the thick of it every single day.
You see the worst of what's possible. You can see ahead in a way that many of us cannot in terms of
what more could be coming. And you know which way this is trending. Just on a personal level for you,
what keeps you going every day? How do you stay in this? There are days, there are days where you're like,
I'm in a fetal position under the bed. So look, I think we're at a fork in the road. And it's coming up
fast and we're going to have to make some decisions. And one leads to some weird dystopian future
that I don't think is going to be good for anybody except for maybe five people in the world.
And the other leads to something that I think can be exciting and powerful and enabling for
society. I think we're leaning in the wrong direction right now.
It seems like we're going down the path that I don't want to go down.
But here's my thing: what are the options, right? I mean, somebody's got to do
something, and I'm going to do my little part, and I'm hoping that other people are going to do their
little part. You're doing your part. And I think, you know, otherwise we give up, right? We say,
this problem is too big. But I don't think that's an option. And I think that's true of the things that
we're talking about now. I think it's true of global climate change. I think it's true of many of these
massive, massive problems. We have got to stop being defeatist about this and say, we're going to do
something about it in our own small way. Otherwise, it's the fetal position under the bed for the next
few years. And that's not an option. Well, I find great hope in the work that you do and the way you
talk about it, and your colleagues as well. I find great hope in my 10-year-old's skepticism as well.
As do I. Hopefully we can turn this around. Hany Farid, such a pleasure to speak with you.
I learned so much. I'm sure our listeners did too. Thank you very much for making
the time. Thanks, it was wonderful to talk to you.
