Your Undivided Attention - Stranger than Fiction — with Claire Wardle
Episode Date: March 31, 2020. How can tech companies help flatten the curve? First and foremost, they must address the lethal misinformation and disinformation circulating on their platforms. The problem goes much deeper than fake... news, according to Claire Wardle, co-founder and executive director of First Draft. She studies the gray zones of information warfare, where bad actors mix facts with falsehoods, news with gossip, and sincerity with satire. “Most of this stuff isn't fake and most of this stuff isn't news,” Claire argues. If these subtler forms of misinformation go unaddressed, tech companies may not only fail to flatten the curve — they could raise it higher.
Transcript
Whether it was the bushfires in Australia, whether it was the downing of the plane in Iran,
whether it's about the impeachment crisis, whether it's coronavirus.
Almost every story now has an element of mis- or disinformation connected to it.
That's Claire Wardle, the co-founder and US director of First Draft,
a non-profit that trains journalists how to combat mis- and disinformation worldwide.
Claire says the problems journalists are confronting overseas rarely get any press coverage here in the US.
Certainly, the conversations in America do not recognize what this looks like globally.
In fact, the conversation seems to be stuck in 2016.
While we've been talking about the four-year-old threat of fake news,
Claire has been watching whole new categories of threats go unacknowledged.
When people would use the phrase fake news,
I would say, well, most of this stuff isn't fake and most of this stuff isn't news.
If there's anything Claire can teach us,
it's that most of these threats are so new, we're at a loss for words.
How do we deal with a genuine photo that's three years old?
It's a genuine photo, it's three years old,
but the problem is the caption is placing it in a different context.
Lots of research shows that audiences can't even consider that that photo would be three years out of date
because that's not a problem that they've ever encountered.
So they're not prepared to fight it because they don't know it's a thing.
But Claire can describe the specific problem.
She calls it false context.
And that's just one of the seven distinct harms that she's identified in her eye-opening presentation,
the seven types of mis- and disinformation.
It's a presentation that takes us into the grey zones of information warfare,
where bad actors and their unwitting victims slip between facts and falsehoods, news and gossip, sincerity and satire.
They can even share the truth and nothing but the truth, but still mislead users through the power of narrative and repetition.
Bad actors, they understand that they are pushing a particular frame.
It could be about illegal immigration.
It could be about vaccines.
Whatever it is, it's about frames and narratives.
And those of us on this side who are trying to push quality information are playing whack-a-mole with these atoms of content.
I just think we're not going about this the right way at all. This idea that we're going to fact-check a thing, that's not how it works.
Today on the podcast, Claire Wardle tells us how dis- and misinformation really work.
Now keep in mind that we recorded this interview in late February before the coronavirus
upended life as we know it.
Now that we've all retreated to our homes in this collective attempt to flatten the curve,
this conversation about misinformation has never been more important because how we understand
what the coronavirus is, and how dangerous it is, and what we should do, and how we help each other, is all mediated on screens. All of us are living on screens. It is the new digital
habitat. And so everything we're about to explore with Claire about what needs fixing in our
information environment and how our minds really process that information are all the more important
to tackle now. One of the hopeful things here is that Corona is like a tracer bullet moving
through our information ecology. And it's a united threat, a united enemy that we can all finally
face. So this is the perfect time for platforms to get it under control. So I'm Tristan Harris
and I'm Aza Raskin. And this is Your Undivided Attention.
I actually have a PhD in communication, and I thought I was going to be a professor for the rest of my life. I was researching user-generated content. So how did newsrooms verify content
that came in from the audience over email? And so it was a very niche research topic and I thought
nothing of it. And then the plane landed in the Hudson River in January 2009 and the head of
News Gathering called me and said, not one of our journalists in the newsroom knows what
Twitter is. And somebody tweeted a picture of passengers on the wings. And we didn't know how to
find that picture, how to verify that picture or knew whether legally or ethically we could use
the picture. Can you leave academia and help train all of our staff around the world on how to do
this? So I called my mum and said, I'm going to leave academia and that nice pension. And so for the
last 10 years, I've been travelling the world training people. And then three years ago, the question
of how do you verify information online became a thing that a lot of people cared about.
I think what's interesting is, 10 years ago I started my career teaching journalists how to find content and how it was going to open up their black book, how they'd be able to get different voices, all the things that we loved about this concept of what social media would bring. And now I'm training journalists to say, stop, be really careful, those sources probably aren't who they say they are. I mean, in 10 years, it's been a 180-degree shift, and it's kind of astonishing.
And so what woke you up to this shift? You said three years ago, something shifted. What was that?
So First Draft was founded in 2015.
As a project of Google, actually, Google recognized that journalists were struggling to know how to verify, particularly images and videos online.
And so First Draft was founded with very little money to say, can you just build a website to help journalists?
We taught journalists how to do geolocation on a video.
How can you independently assess where something has been filmed?
How can you do digital footprinting?
How can you look at metadata to understand where a photo has been taken?
So we still use those same training materials now, but back then it was about how can you make sure that material during a breaking news event is authentic.
Now it's how do you know that that trending topic is authentic?
How do you know that this network of accounts pushing the same material is authentic?
So the shift, again, has been quick, but it's using the same tools.
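For readers who want to see what the metadata step looks like in practice, here is a minimal sketch in Python, assuming the Pillow library is installed and the photo still carries its EXIF data (most social platforms strip it on upload); the file name is a hypothetical example.

```python
# Minimal sketch: pull the date, camera model, and raw GPS tags out of a
# photo's EXIF data. Assumes Pillow is installed and the image still has
# EXIF attached (platforms usually strip it on upload).
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def exif_summary(path):
    exif = Image.open(path)._getexif() or {}
    named = {TAGS.get(tag, tag): value for tag, value in exif.items()}
    gps = {GPSTAGS.get(tag, tag): value
           for tag, value in named.get("GPSInfo", {}).items()}
    return {
        "taken": named.get("DateTimeOriginal"),  # when the shutter fired
        "camera": named.get("Model"),            # device that took it
        "gps": gps,                              # degrees/minutes/seconds, if present
    }

# Hypothetical file name, for illustration only.
print(exif_summary("viral_earthquake_photo.jpg"))
```

A three-year-old timestamp, or coordinates in the wrong country, is exactly the kind of false-context signal described later in this conversation.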
When I first met you, I think it was in 2017 right after the election.
It was at MIT, the Media Labs fake news conference.
Oh, yeah, MisinfoCon.
And one of the things I really appreciated in a world of simplicity in black and white thinking was your first desire to say, wait, hold on a second. How do we actually define an ontology, a framework for saying what's the difference between misinformation, disinformation, and these even new proliferating types of fake hashtags, fake trending topics, all these kinds of things? Do you want to just quickly take a moment and define some of these distinctions because the word fake news, as we all know, is really unhelpful and we want to have a dialogue about what the deeper stuff is?
Yeah, and I think if I'm being honest, this goes back to my academic roots.
I mean, I'm glad that I did my PhD, but it taught me to think about language and to think about terminology.
And so I remember in my bedroom, it was kind of like November 2016, just off the election.
I remember with Post-it notes, being like, well, here's an example, that's this, that's this.
And I kind of created a typology with post-it notes.
And at that conference, I remember putting up the seven types of mis- and disinformation as kind of like a testing ground.
And somebody tweeted it, as people do at conferences.
And Brian Stelter from CNN picked up the tweet and put it on Reliable Sources. And since then, this kind of typology has taken on a life of its own. And whilst I
don't think it's perfect, it can definitely be built on. What it did was make people recognize that
this isn't about truth and falsity. So the seven types starts with satire, which, interestingly,
lots of people said, oh, Claire, satire is a form of art. You can't include that. Well, now we see
satire used deliberately as a ploy to get around the fact checkers. So what's an example of that?
We saw this a couple of weeks ago with the Bloomberg clip, where he slowed down the debate clip, which made it look like, when he said, I'm the only person on stage that has run a business, they basically went from candidate to candidate with reaction shots where they look stumped.
And he added chirping crickets as a kind of way of saying,
look, nobody could answer my question.
And then when people push back and said,
that is a false video, that's disinformation.
He quite rightly said, it's not disinformation.
I was trying to make a point and I was using satire as a technique.
Got it. So that was satire.
Yeah.
Or we just see people label a piece of content as satire. It could be Islamophobic, for example. And when people push back, they're like, we're making fun of people who are Islamophobic.
So satire is now something that we see as a tactic.
Things like false connection.
If you have a clickbait headline, the idea is that you're taking somebody to a piece of content that doesn't deliver what was promised.
I argue that's a form of polluting the information environment.
You call that false connection.
Yeah.
And then we talk about misleading content.
We've seen this for years in tabloid newspapers.
They're using statistics in a way that's trying to slant something, you know, bias through omission.
and all the techniques that we've seen around misleading content.
We then talk about false context, which is genuine information but used out of context.
Say there's an earthquake tonight in Chile, and I go and Google it.
And the first thing that comes up is an image of an earthquake, but it was three years ago in Iran.
And I'm like, oh, my goodness, what a picture.
And I share that on Twitter.
And I'm like, I can't believe this earthquake.
It's genuine, but it's used out of context.
So it was true about a different thing.
Exactly.
And so the context has been collapsed.
Exactly.
We're seeing a lot of the coronavirus rumors are actually genuine photos of people with face masks from previous times.
It's the easiest form, but most effective.
Because as we know, the most effective disinformation is that which has a kernel of truth to it.
Why fabricate something when I've got something already that I can re-caption?
So that's false context.
We then talk about imposter content, which is when people use logos or names.
So maybe a journalist that they trust, that name gets used to sell soap or as part of a propaganda campaign.
So this is the Pope endorses Donald Trump?
Yeah, 100%.
But in that example, you'd probably need the Vatican logo.
Like you'd need something that like it's, you know that it's, or you're led to believe
that it's official.
Right.
And then we talk about manipulated content, which might be...
Would you also include impersonated content, like if someone starts a Twitter account called Tristan Harris 1?
And so it looks almost like my Twitter account or the real Tristan Harris, right?
Because they're using your credibility.
Right.
And again, as we're scrolling through Twitter, it's very easy to create...
I'd write Tristan with a 1 as opposed to an I, that's what I would do.
Yeah.
So, and then...
And that's what people do with these like phishing attack type things, is that the characters look almost identical, the 1 and the I.
When Donald Trump confirmed Gina Haspel, who's the CIA director,
the first response to his tweet was a fake account called Gina Haspel 1 or something like that.
And it was just, thank you, Mr. President.
I'm so excited for the job, but it was, I think, a Russian bot or something.
And the name was almost identical to Gina Haspel, but it just had one character off.
Yep.
and people, you know, wouldn't know.
No, exactly.
And that's, I mean, I'm sure we will talk later about deep fakes, but whenever I have a conversation about them, I'm like, why am I going to spend the money and the time to do that when look what I can do by putting a 1 as opposed to an I?
I mean, there are too many ways to game the system right now
with very little work.
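As a toy illustration of the 1-for-I trick, here is a sketch in Python that flags a handle reading the same as a trusted one once common look-alike characters are collapsed. The substitution map is a tiny, hand-picked subset (real detection uses the full Unicode confusables data) and the handles are hypothetical.

```python
# Toy sketch: flag handles that only differ from a trusted handle by common
# look-alike substitutions (1 for i/l, 0 for o, and so on). The map below is
# a small illustrative subset, not the full Unicode confusables table.
CONFUSABLES = {"1": "i", "l": "i", "0": "o", "rn": "m", "vv": "w"}

def normalize(handle: str) -> str:
    h = handle.lower().lstrip("@")
    for lookalike, canonical in CONFUSABLES.items():
        h = h.replace(lookalike, canonical)
    return h

def looks_like_impersonation(candidate: str, trusted: str) -> bool:
    # Different literal handle, but identical once look-alikes are collapsed.
    different = candidate.lower().lstrip("@") != trusted.lower().lstrip("@")
    return different and normalize(candidate) == normalize(trusted)

# Hypothetical handles, for illustration only.
print(looks_like_impersonation("@TristanHarr1s", "@TristanHarris"))  # True
print(looks_like_impersonation("@TristanHarris", "@TristanHarris"))  # False
```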
And so then the last two are manipulated content.
So imagine I have two genuine photos spliced together.
So we have an example, actually, that did very well just before the 2016 election. It looks like people waiting to vote at a polling station, and there is a guy being arrested by ICE, and he's wearing his ICE jacket.
And we use it in training all the time
because I often say, well, what do people think of this photo
and they look at it?
And they're like, oh, and then after about 30 seconds,
they're like, oh, I think it's false
because everybody's looking down at their phone.
I'm like, yes, because this is a genuine photo
from earlier in the year during the primaries.
Here is a genuine photo of somebody being arrested by ICE.
You put the two of them together with Photoshop,
it takes two seconds.
But if your worldview is believing that illegal immigrants are voting, why are you going to stop and think, well, of course, that doesn't quite fit?
Or the shadows don't quite go in the same direction.
So manipulated is taking something that's genuine and, you know, changing it or splicing it together.
Again, much easier than starting from scratch.
And then the final bucket is fabricated content.
So that's the 100% completely false.
Pope endorses Donald Trump, deep fake, something that's completely, you know, comes from nowhere.
If I'm a bad actor, that's my least favorite of the seven buckets. Because I haven't tested it. I don't know whether people are going to believe
that rumor. I'm going to have to spend money to fabricate a video, you know, why am I going to do it?
So of all those seven buckets or types, the thing that we see the most of is this false context,
which is genuine content used out of context. Right. And then what is the difference in your
official definition between misinformation and disinformation, just so people have that? Yeah. So
disinformation is false information that is created and shared to cause harm.
Misinformation is also false information, but the people sharing it don't realize it's false.
So we see a lot of that during breaking news events, where people see something, they don't
know it's false, and they share it trying to be helpful. But actually, we have become weaponised, and if we didn't share, if we were more thoughtful and slowed down, all the things that
you talk about, we wouldn't have so much of a problem. But in this whole space, the number of
bad actors who are really trying to cause harm is small. The problem is us. And because the
actors are very good at making us do this by taking advantage of our fears, emotions,
all the stuff we're probably going to talk more about, then it really works.
So one thing listeners should know is just, when I think of Claire,
I think of someone who is on an airplane flying from election to election,
like swinging in with a cape.
You kind of have this like SWAT team that you bring in to try to prepare news
organizations in different countries around certain information concerns.
What's an area of harm that you think is underappreciated, and what are some of the kind of alarmist concerns that have been over-appreciated, just to kind of calibrate the public debate about this topic? It focuses on fake news and then the Russian bots.
Yeah.
And so that's kind of like, if you were drawing out a continent map, you would think that's 80 percent or something of the problem, I don't know, just something like that. But then the actual map, if the map is not the territory, what is kind of, where should our concerns be?
So it's a great question, because we obviously work globally, and I feel very lucky to do that, because when you spend any time in the US, the focus is almost entirely on political disinformation on Facebook. The reality is, in the rest of the world, it's health and science misinformation on closed messaging
apps.
I think the coronavirus has made people recognize that this is much more complex. It impacts
so many different topics.
You said closed messaging apps, by which we mean WeChat, WhatsApp.
Yeah, Line, KakaoTalk, Telegram. And yes, there are some differences, country to country,
and slightly different technologies. But the biggest learning is, all of this is about human psychology. What works in Brazil works in America, and it's all about tapping into human fears, it's about tapping into in-groups and out-groups. And it doesn't matter whether it's Nigeria, and it's a country split between two religions, or India. I mean, and that's, to be honest, on one hand it makes it easier to understand, but on the other hand it's kind of depressing, because you realize that it's kind of technology agnostic, and it's about technology activating the worst.
Right. And so do you want to give people a little bit of a hint of some of the places you've been and how these things have showed up around the world?
So after November 2016, as you can imagine, there was much more concern about this globally.
And just after the election, actually, we held some partner meetings with newsrooms,
two in the US and one in London, actually, with European partners.
And French journalists said, you know, we're about to go into an election.
We're concerned, based on what we've just seen, that we're not ready in France.
And so we worked with French partners on a collaboration with over 30 newsrooms who said, we don't think we can do this alone. We want to work together.
So this is the election that Macron ultimately won.
Yeah, May 2017. And in that, we tested this new methodology, which we called cross-check.
Our belief is that no newsrooms should compete when it comes to disinformation.
Because really, journalists have never had to deal with falsity. That stuff ended up on the
cutting room floor. What now is happening is the audiences are saying, well, yeah, we also care
about the truth, but can you help us navigate what's false? I want people to understand
this innovative strategy that you came up with, which was in an environment where there's media
that is highly polarized where the public doesn't trust different newspapers. And now, let's say
there's this new false information story that comes down the pike. And if one of the, let's say,
the CNN of France says, that's not true. If CNN of France doesn't have credibility, people
aren't going to trust them shooting that story down. And so the innovative approach that you came up
with is when you go into a country, whether it's France or Brazil, what if we got all the
news organizations together because people would trust it if like 30 different newspapers
all said that this isn't true?
Yeah.
So we did this project in France and what we realized is that journalists working together was
this kind of amazing moment where people were teaching one another skills, et cetera, et cetera.
And so we now have rolled that out in places like Nigeria and Brazil and we've worked with
journalists in Australia and Myanmar.
And that's an interesting moment to relate to one of our earlier conversations with Rachel Botsman on the erosion of trust in society. In a low-trust world, people don't trust even those who are providing the corrections. So we're kind of, I sort of see ourselves as a global civilization
running around trying to just grab the last tiny little bits of trust that we have in our
institutions. Like what does have authority to shoot down that's something that might not be true?
And I see you as finding this kind of nonlinear effect that if we got these 30 newspapers to
shoot it down, that might do something.
Yeah. And actually, I don't know if people know this, but First Draft is run between myself and my sister. She's based in London.
I didn't know that.
And she came up with this CrossCheck methodology. And Jenny doesn't come from a newsroom background. And she said to me one night, she was like, I think what we should do is get
newsrooms to cross-check each other's work. And I said, can I just introduce you to newsrooms? This is
never going to happen. They are not going to collaborate. Nice try. And she was just like, I don't
care. Like, this is, we're in a moment of crisis. We have to do it. And I think it's another reminder of
innovation comes from not having that sense of it will never work. And so,
the very smart thing in France is in the same way as here where people don't trust, you know,
the Beltway or elites. In France, it's the same thing. Lots of people in France say, I don't trust
the Parisian media. And so we had a coalition of, yes, Le Monde and France 24 and what you would expect, but we also had Rue89 Strasbourg. We had local newsrooms who were also in the coalition. So
not only did they get to work with Le Monde and some of the big players, but they were much closer to
their audiences. And as the same as the case here, people are still more trusting of local news.
because they're more likely to know the journalist.
They feel like it's more connected to their lives.
And so when we did the evaluation of the project, surprise, surprise, some people said, you know what, I didn't trust half the people in your coalition, but I do trust my newsroom. And, to be fair, I'm pretty right-wing and I think the Parisian media is left-wing. And I didn't love everything that CrossCheck did, but I trusted it because I trusted my news outlet.
And that was when we had this moment of no one organization is trusted by everybody,
but is there a way that we can think about coalitions that might help audiences navigate this
and recognize, well, maybe if we've got 10 different logos, then maybe there is something
to this. And it's not easy. And people sometimes label this as cabals and newsrooms shouldn't work
together and this is collusion. But right now, we need to try whatever we can. And I think we did
this in France. We did it in Brazil and had very similar feedback. I think there is something to
this. Whether we can do it in the U.S., that's the big challenge for me in 2020.
But the thing I find most interesting about this strategy is this wouldn't appear to the naked eye
like it would work because you have these newsrooms that are competing with each other.
And they're on opposite sides of the political spectrum.
And you said, no, actually, there's this kind of common good we need to protect here,
which is the shared basis of truth and facts.
And surprisingly, they were willing to sign up for it.
Yeah, and I think this was because their newsrooms are very frightened about making mistakes.
And I think there were many newsrooms that said, we don't necessarily have the skills in
the house and we can't resource this, and particularly at the smaller local level.
But I think that competition piece is, and I can't do a French accent,
somebody said, like, there's no scoop in a debunk, you know, like in an amazing French accent.
But what they were trying to say is, yes, we compete, but we compete again around the good stuff
about the investigations, not, you know, cleaning up the internet.
You know, I mean, excuse my language, but I think.
We're not competing on brooms. We're competing on exciting, explosive material.
Yeah. And we should, you know, and to be honest, media has always had a pool system,
you know, particularly in TV news. You don't send everybody with a camera to follow the president
or the queen. One newsroom goes and then they share the footage. And that, there was that belief,
which is on this, why are we all wasting time chasing?
Why are we all verifying the same meme on Twitter
when actually one person can do it?
We can all look at the reporting,
but yeah, we agree.
And that's how it worked.
So it actually worked that way.
So there's sort of a feed or something like that.
And then one news organization says,
this is a thing we think is a correction.
And the others can quickly validate it
as opposed to everyone trying to research it from 20 different sides.
When you're checking the evidence, it's much quicker than starting from scratch.
And to be fair, we saw many times when somebody was like,
yeah, but actually we're not going to run it
unless you actually get a quote from that person.
Or, we're still not 100% certain. So it slowed down the process, but it also meant there were absolutely no
mistakes on any of the projects we've run now. And when you talk to journalists afterwards,
they would say, it made me feel uncomfortable that I was forced to slow down. But surprise,
surprise, when I was forced to slow down, it meant that the reporting was more accurate,
was stronger, and we didn't make mistakes. Like, that's ultimately what we want.
I feel like this is something that as consumers of media and information, we also have to gain
a tolerance for. Like, it's almost like sugar, what's going on, right? Because sugar just gives
us that immediate hit and we like it. But then we all know we would be better off if we just probably didn't have as much. And we've been, you know,
sort of tasting this immediate access to, there's a breaking news story, Parkland shooting. I want
to know in the next 30 seconds what the first report is of whether they know who the gunman is.
Yep.
But do we actually need to know that? No. And how many human beings on planet Earth, when that happens, if you had to draw like a distribution curve, how many people needed to know that within the first, even, let's say, 24 hours? Did it actually, consequentially, affect our lives? And I say this because I think we're in this uncomfortable tension.
We have to trade some things. Like right now we say, well, we want that immediate access to whether
the coronavirus killed exactly 57 people or the next hour is it 58 people. And like, and I was
checking the news this morning on coronavirus. I'm very interested and concerned. But I guess it's just
like what's the humane clock rate in which information is dispensed? Because if we want the
fire hose, we're going to live in this hyper noisy environment.
It may not be so bad if it's not consequential, but when it's about whether or not you're going to go into quarantine for a month and lock yourself up with food or whether or not you're going to go inside because you're worried there's a Las Vegas shooter, this just doesn't work.
So how do you see this tension resolving?
I mean, I felt this on the evening of the Iowa caucus, which was, of course, when lots of mistakes happened. But seeing the media in that role was like, what if there was no expectation that you would get the results for another 48 hours, as was the case with the Irish election? It took three days, and there was this sense of, you know, that's how long it takes.
We also have to think about the political economy here
is it's very easy to wag fingers at the media.
I mean, right now, the reason that people are so competitive
and that every second counts is because they are desperate for clicks
because they're desperate for money and many newsrooms are struggling.
But what that means is people are rushing.
And when a mistake happens, people are recognizing that the speed cannot, you know,
we have to slow down.
We are being approached by more newsrooms to do more training
because standards and ethics editors are like, yep, we cannot afford
a mistake, not this year, and I think it's not in anybody's interest to be quick.
Let's talk about the cost of mistakes, because I know something in cognitive science,
there's an effect, it's basically the first person to frame the debate wins, because you set
the initial frame.
Let's say it's, you hint that the shooter was actually this kind of disturbed military person.
I don't know, I'm making it up.
Now your mind is setting up an evidence accumulator, so your mind is pre-tuned to find and
want to confirm and affirm evidence of that specific explanation, which might be different than
like, I don't know, something totally different happened.
It had nothing to do with the military, nothing to do with that kind of gun.
But that other kind of evidence doesn't have the same or neutral acceptance by the mind
because the mind has been pre-framed by the first frame.
And I think when we think about the cost of misreporting those mistakes, it's like people
don't trust the corrections.
You kind of entrenched yourself, just not fully, but in a deeper way in the first explanation.
Yeah.
No, and there's, I mean, so much literature from social psychology about effective ways of debunking
and issuing corrections.
but we know it's really difficult, and we know even when you do it well,
people are much less likely to share the correction.
And even if you hear the correction, if you get asked two weeks later,
you're more likely to be like, oh, I can't quite remember, but there was something,
no smoke without fire.
And you tend to go to the original claim.
Exactly.
This is Briony Swire-Thompson's research at Northeastern University.
I love it.
It's just that people end up going back to the original belief.
So talk about corrections.
You've learned a lot about this in elections. You know, what kinds of corrections work, and what are the cognitive strategies for producing an effective correction?
Yeah. So from doing the work that we've done, one concept that we've come up with is this idea of the tipping point. So if you go searching for false information,
you will always find it. If I go searching for some conspiracy, Facebook groups with anti-vax
content, I will find it. If I go looking on 4chan, I will find all sorts of things. Now, it's
very tempting going back to political economy. If I want a headline that's going to get a ton of
clicks, there's a ton of that stuff that I could write a piece about and I would get clicks.
But of course, if you're a mainstream media outlet, you are giving oxygen to these rumors and conspiracies. So we talk about the tipping point to say, well, if you report too early, you're giving oxygen to something that you shouldn't. But by the same token, if you wait too long to report on these rumors and conspiracies, it's almost impossible to bring it back. But there's
this sweet spot, which from our work in these election projects, if you get it at the right
moment and you get enough newsrooms at the same time pushing out responsible headlines, we have
seen evidence of slowing down the misinformation or having that misinformation taken down. But that
tipping point is something that's really crucial. So this is an example from France, but I think it's a powerful one, which is we saw a very sophisticated hoax website that looked identical to Le Soir, which is a Belgian newspaper. And in fact, every hyperlink clicked back to Le Soir. But the headline was saying that Macron was funded by Saudi Arabia. Explosive content. We, you know,
of course did very quick reporting and found out this was not true, looked like there was some
bots in Tehran that were pushing it, blah, blah, blah. We were like, if we report on this,
it's irresponsible. But we sat on it, and everybody was briefed, everybody had the reporting. We didn't do anything about it until Marine Le Pen's niece, the niece of Marine Le Pen, who was running, tweeted it. And the tweet of that link meant that it suddenly
passed the tipping point. And so then collectively, cross-check issued the report and we were able to
slow it down and get that taken down. Now, again, that's the perfect example. I use it in training,
everybody loves it. But the concept of that, which is how do you measure the tipping point?
How are you sensible about that? There is evidence that you can slow this down. But that's part of
our training with newsrooms to get them to think critically about when to report on
information. They should not be giving oxygen to everything.
It's like what they say about timing and comedy; it's the same with timing and corrections. You're essentially saying, hey, look, we actually sat on this correction. We knew we had it, but this wasn't the right time to do it. Then it suddenly spikes because of a natural organic event, which is Marine Le Pen's niece, yeah, posting it, and you jump on it.
A lot of the psychological theory also talks about the power
of familiarity and repetition. So again, when newsrooms are trying to be distinctive and
they're competing, you don't have that familiarity and repetition. But if
you see a number of different outlets pushing out corrections using similar language, similar imagery,
which makes editors go, oh, that's not what we do, Claire. But actually, this is the new public
service that is kind of unfortunately required of newsrooms, which is to help audiences navigate
our polluted information environment. And in that environment, we have to think differently
and familiarity and repetition work. And that's not what the news industry is about.
This reminds me of George Lakoff's work: if you're issuing a correction, or if you see something false and there's different ways of reporting on it, put it into a truth sandwich.
A truth sandwich, like two loaves of bread and then something in the middle.
You first say the truth, then you say the false thing that goes against that truth, then you repeat the truth at the end. Because if you just think about it in terms of quantities and repetition, yeah, you said the truth two times and you said the false thing one time, so if you just add that up, you're fixing it. One of our listeners, in a past episode on hate speech, said that one solution, if there's a hate campaign that's later found out, to also provide deterrence on future hate speech, is if you say to any poster of deeply hateful material: if we discover your hateful content, we will later go back to everyone who saw the hateful content and we will back-post twice as much positive content about the same minority group that you were posting about. So we're just kind of like changing the saliency and
repetition rate of the other positive story. Yeah. And I mean, this goes to the core of a lot of
training we're doing right now with journalists, which is how do you word headlines? Because in a
headline with 40 characters, you don't have, you don't have the chance to do that truth
sandwich. So a lot of the evidence is, where possible, lead with the truth. You know, Briony Swire-Thompson will say, you know, in the nut graph, you can talk about the falsehood, because next to it,
you can talk about the truth. But the challenge is in the headline, we shouldn't really be
repeating the falsehood in the headline. However, if I'm a journalist, I'm going to say immediately,
Claire, have you heard of SEO, search engine optimization? I have to repeat the rumor in the headline, otherwise I won't get any traffic. So this is kind of a really fascinating question for search engines, which is, I'm out there telling journalists, be really careful,
try not to repeat the rumor in the headline, because you're actually reinforcing and giving more
oxygen to the rumor. And in an era where lots of people just see the tweet, just read the
headline, and they don't read the nuance, we have to be really careful. Yet, I've got journalists
who've said, Claire, my newsroom has spent a fortune on search engine optimization training,
and we've been told we have to replicate the exact rumor in the headline to get the traffic.
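There is a partial answer to this headline dilemma, and to the meta-tag idea Tristan raises next: fact checks can carry schema.org ClaimReview markup, which tells a search engine which claim is being checked and what the verdict is without the verdict having to live in the headline. Below is a rough sketch of that markup, serialized from Python; the property names follow the schema.org ClaimReview type as I understand it, and the URL, outlet name, dates, and claim text are hypothetical, so treat the details as assumptions to check against current documentation.

```python
# Rough sketch of schema.org ClaimReview markup, serialized as JSON-LD.
# Property names follow the published ClaimReview type as I understand it;
# the URL, outlet name, dates, and claim text are hypothetical examples.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-newsroom.org/factcheck/earthquake-photo",
    "datePublished": "2020-02-20",
    "claimReviewed": "Photo shows last night's earthquake in Chile",  # the rumor, kept out of the headline
    "itemReviewed": {"@type": "Claim"},
    "author": {"@type": "Organization", "name": "Example Newsroom"},
    "reviewRating": {"@type": "Rating", "alternateName": "False context"},  # the verdict
}

# Embedded in the article page inside a <script type="application/ld+json"> tag.
print(json.dumps(claim_review, indent=2))
```

The machine-readable claim can then carry the rumor's wording while the human-readable headline leads with the truth, which is one route around repeating the rumor in the headline just to get the traffic.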
And the reason for that, right, is because for someone who's doing a search query about the false thing, they want to be found. Is there no setting where Google can say, essentially, here's a set, I mean, they have these meta tag names, right, where you can say these are other search terms for this article, and take them as seriously as these other ones, and only authorized journalism outlets can use these meta tags, so that Google can privilege them in some kind of high-authority type of way? I mean, I'm not joking. I mean, this is happening every day in
training rooms that we're doing. We also talk to journalists about the difference between people
searching versus stumbling upon. So you're absolutely right. If there are searches about coronavirus, people are typing that into Google, and we want that data void filled with something that responds to that. But at the same time, I don't want somebody stumbling on Twitter over a tweet that's repeating a rumor that I hadn't even heard of, something that I shouldn't even have to be concerned about, because our lizard brains remember the falsehood. So this is much more
complex than these platforms are designed to respond to. But I would love to have a conversation
with Google, which is like, how can we flag the fact that this piece of content should be
connected to a rumor without basically repeating the rumor everywhere through it just to get the
traffic.
You're talking to a set of listeners who are often in the tech industry, or in the surround sound of people in the tech industry. What do you really wish that they were doing, or
could do to help you more? So by working globally, I also have huge sympathy with these companies
who need global responses in order to scale the work that they're doing. I used to work for the UN and
every year you had to change roles and move country. Like I wish there was more of that in Silicon
Valley, because many of these companies tend to be based in northern countries, not in lower- and middle-income countries. And so many of the challenges here, whether they're linguistic,
whether they're ethnic, whether they're religious, whether they're the different types of
technology, whether they are the different conspiracies that have existed in these cultures
for years. So much of that requires on the ground knowledge. But could they even? I mean,
even with your work, so there you are, you've aggregated the 30 different newsrooms in France.
The volume of things that are coming through the pipe, like trillions of content items,
are these matched?
Are there just as many hoaxes and conspiracy theories as there are capable journalists waiting to pick up the phone to then shoot down the rumor? Or, like, what kind of asymmetric situation are we talking about?
Yeah, so let's take Brazil as an example.
It's a huge country.
Their news industry is struggling more than this one. Almost every newsroom there has a paywall.
So if I'm in Brazil, here's my choices.
I either can pay money to get access to a quality newspaper.
Well, I can't really afford that.
Or my WhatsApp and Facebook, because of Free Basics,
means that I don't even have to use up data costs to access WhatsApp and Facebook.
What am I going to do?
I'm going to go to WhatsApp where all of my friends are sharing screenshots of news sites.
So this is something I think our listeners don't know, and I wasn't fully aware of. People are sharing screenshots of news sites because they'd actually have to pay money, which the vast majority of people don't have, to look at news sites.
And this is because of the Free Basics program which, to quickly catch up those listeners who don't know, was a program where Facebook said, hey, you can get a cell phone, and so long as Facebook and WhatsApp are basically the Internet for your cell phone, it comes with it on the cell phone, then you get the Internet, quote unquote, for free. But then that privileges, in terms of usage, WhatsApp and Facebook as the Internet.
Those are the primary surface areas through which people get their information.
Yeah.
And, you know, the number of people who are sharing screenshots of news, let's just be honest, is small. The number of people who are sharing memes and old images that are used out of context is much bigger.
Oh, false context visuals.
So, for example, during the Brazil election, we had a WhatsApp tip line.
We received over 200,000 tips from the public about things they were seeing that they wanted help working out.
The number one piece of content that was shared was a photo of a truck with what looked like a ballot box open in the back of the truck.
And the caption was, these ballots have already been pre-filled in for Haddad.
It was a genuine photo, but the caption was false.
That was not true.
But it was shared everywhere.
Do you know how widely it was shared?
I mean, we received it over 1,500 times on our tip line.
So it was the number one piece of content.
But again, lots of these countries, you have lower literacy levels.
You have people who have never had email addresses.
They are, for the first time, they've got their smartphones.
All the stuff that we know.
We had to learn not to take scam emails from Nigeria seriously.
You know, we can laugh about it.
But it's taken us 20 years to kind of figure some of this stuff out.
We were actually at a conference in Singapore hosted by Google, which had brought amazing people from Malaysia and Myanmar and, like, I mean, these amazing people doing
the same work that I do on a daily basis. And I remember the first hour, there was kind of like
seven minute lightning talks. And by the end, I was almost in tears because there was just
incredible story after incredible story. I mean, people saying, I'm a mum, but I do this as a fact
checker because I just really care. But I see a lot of content now that makes it difficult for me
to sleep. I want people in Silicon Valley to hear these stories.
What I think is interesting about this is that, from the outside, Facebook or Google or YouTube can say, look, we're hiring all these civil society groups.
We're paying these fact checkers.
You know, we're actually doing all the work with every single nonprofit on the ground in Myanmar, in the Philippines, in Cameroon.
What do you want us to do?
I mean, we're now working with all the groups that have those resources and have that local expertise.
But then what ultimately that amounts to is conscripting them into a feed of essentially like the worst parts of society.
I think the biggest counterargument from those who are in the tech industry is like, yeah, we know there's some bads, but there's also just all these goods. And so there's some goods, there's some bads. Who's to say? You know, or we think that basically the goods are enough to justify this. One way we could talk about whether the good balance sheet compares to the bad balance sheet
is we could say, well, how often are the goods happening and how often are the bads happening?
So that's one way to do it, right? You could do it based on volume, like how much of the good
is happening. A different way to do it is on consequences. What are the consequences of the good
things that are happening? And what are the consequences of the bad things that are happening? Because
if fake news, I think in Brazil, it was something like 89% of the people who voted for Bolsonaro
had believed in at least one of the top 10 fake news stories. They were like complete crazy,
like out there fake news stories. If the consequences of the bad are authoritarianism rules the world
because elections are debased and what people believe as the basis of their thinking, when your
brain is believing some basic set of cognitive frames and beliefs about the world and other
people and the politicians you hate. And then on top of that, your mind is looking for evidence
to confirm what you already believe. If that's the cost of the bads, that's a highly consequential set of bads. I would argue that's a dark-age-entering set of bads. Especially in these vulnerable countries, can we just shut it down? I mean, I sometimes just say, do we really need
this? Is this really helping? Or is there a safe way where just say, look, can we just do one-on-one
messaging, you know, and that's it, because anything more than that is just actually too
damaging. It's too consequential. Well, I mean, I think the good, bad debate is, as you're saying,
it's way too simplistic. And actually, what we're doing here is we're experimenting with people's
lives in a way that we can't stop the snowball. And I'm having a conversation with a Facebook
engineer a year or so ago. And I'm saying, you know, I'm a social scientist by training. And
what I worry about is we don't have longitudinal analysis. So we've got psychological experiments right now, mostly done with students at large Midwestern American universities, deciding whether or not our corrections policy works, and, if so, whether Facebook has flags or doesn't have flags.
One of my heroes is Ifeoma Ozoma.
She's a public policy manager at Pinterest.
She said, I don't want people searching for vaccine information on Pinterest until we know what the impact is. Why should we have it on our platform? And we're going to make that decision.
And my worry is, exactly to your point, when people are scared, they are more likely to be supportive of authoritarian leaders.
because they're terrified and the strong man,
you know, this is the George Lakoff stuff,
like you want the strong father figure.
So if you are terrified in Brazil
about the fact that corruption has completely changed your country,
you have less money in your pocket
because you haven't dealt with the impact
of the 2008 financial crisis,
Bolsonaro looks like a pretty good deal in that situation.
The same with Duterte.
You could argue the same with other leaders.
And so what I think we don't understand
is this drip, drip, drip, drip, drip piece.
And to your point about consequences,
we don't know.
And so I'd rather that we stepped right back and we tested some of these things, so that we're not, in 15 years' time, saying, oh, God, what the hell did we do?
And it doesn't mean shut off, like, the entire internet or shut off www dot. Again, that's a different thing than, let's create viral amplification of the fastest, least-checked, most-friendly-to-bad-actors type of speech.
For example, Facebook Live, I remember at the time knowing people at Facebook who were kind of
saying, hypothetically, what if we created a tool that allowed people to just live stream from
their kid's birthday party?
Wouldn't that be great? Of course, any journalist or foreign correspondent would say, I'll tell you what'll happen immediately: it's going to be terrorists, it's going to be suicides. They went to a dark place. And Facebook was like, no, I think it's going to be about birthdays.
You know, it didn't take very long for Facebook Live to really be deranked as a tool within the Facebook ecosystem, because people realized it was too difficult to moderate and, whether because of bad actors or for other reasons, it was going to look bad.
And that's how I feel now, which is, can we look at all of the features, all of the tools, pull back on the stuff that we know is potentially going to have more consequences, and really go with the bits that we know and that have been tested? And I know this is very simplistic, but I think that there's so much experimentation, and things like more friction, which we know from all of the research is one of the best ways of slowing this stuff down, more heuristics, more labels. WhatsApp, give me more stuff to give me that context. We know some of this stuff works. And so what I would love to see is more of that, less jazz hands about everything's going to be great. If Silicon Valley engineers
could spend more time in the pub with foreign correspondents and journalists,
I think we would be in a better place, because you need some people who have experienced the darker sides of the world to say, I'm sorry, like, this is how your platform is going to be weaponized.
You're always laying out so many things, and I just want to double click on several of them.
One was the fear strongman-based thing.
I remember Brittany Kaiser, who was on this podcast,
said that in Cambridge Analytica's psychological targeting,
that one thing they found was that people who have the psychological characteristic of neuroticism
always respond to fear.
It was only fear that really had a massive impact.
So they spent the rest of the super PAC's money on fear, yes.
And we have a click-based system.
It's a lizard brain enhancing system, and fear always works.
It's a two-step process: when you're more fearful in general, of coronavirus, of what Russia could do to this country, of whatever it is, of a foreign power, you're going to go with the strong guy.
So if you just think about it that way, that you have this system that rewards fear over calm truth,
then it's sort of obvious to see why you would get kind of authoritarian people everywhere all at once.
And to your earlier point, you know, this is the largest unregulated psychological experiment done in history.
Where's the, you know, the IRB review board?
Where's the people who said who could be hurt by this experiment?
I mean, I remember back at Stanford, you know, if you wanted to do a study with 10 people, you had to go through this incredible process to even run the experiment.
And in those IRB processes,
you have to go through extra steps if you're researching vulnerable people.
And I think about that in the global context, which is who are vulnerable.
Well, people who are newer to technology, people who have lower literacy levels.
You know, we know it.
And I think that's my frustration is that there hasn't been a recognition that we want to scale globally.
And, you know, yes, many countries have been transformed by this.
And that's an important thing to remember, of course.
But I don't think there's also been that recognition that there are vulnerable communities here, the communities that have been ripped apart or have religious tensions, which means you've already got this, like, tinderbox.
That's how I feel.
Like there are many countries in this world who I would deem as a tinderbox.
And for me, this technology is the spark.
Right.
What worries you about how governments are responding to this, especially in the countries that are tinderboxes or more vulnerable?
What are governments doing?
What are you seeing that works?
What do you wish they were doing?
So I am very concerned.
We've seen the passing of some pretty problematic regulation.
Who writes this stuff? Politicians who are actually terrified that they will lose an election because of disinformation. So they are not neutral actors here. And I can understand why there is this, there's been this kind of panic about it globally. And so they want to be seen to be doing something. But again, I'm a social scientist. We should not be doing any of this unless we have a foundation of empirical evidence. And we have almost nothing. If I was to say, how much disinformation or misinformation as distinct categories is there? How is it different around the country? Are any of these,
solutions slowing anything down. Like, where's the benchmarks? I can't tell you how much of this stuff
is out there and what impact it's having on society. And it's now 2020. And most people inside the
platforms couldn't say this either. And most social scientists can't say either because we don't have the
data because it's inside the platforms. But in that context, no government should be passing any
regulation because we don't know how much of it is out there and what impact it is having.
What I would love to see governments do is hold the technology companies to account to say in order
for us to have responsible regulation, you are going to need to work with us to allow us to
audit what you're doing. We did some work last year around auditing what people saw when they
searched for vaccines on Instagram, YouTube, Google and Facebook. And we paid people in 12 countries
to send us screenshots. We got over, you know, 500 screenshots. Beautiful.
Because you didn't have a way to get that data yourself. No other way to see it. So you paid people in 12 different countries to send screenshots. That's an audit.
Now, that's what I would like to see governments do. And I understand right now we see this tension between academics saying to the platforms, give us all your data.
And quite rightly, the platforms are like, no, we're not in a position to do that. We've seen
a little bit of this with Social Science One, having to build differential privacy into a platform.
It's hard. But if I'm a government, I would argue that there is a way that data can be shared
to simply say, show us not the algorithm, but what's the output of some of your algorithms?
Similarly, at the moment, we see, you know, different people, for example, Mark Zuckerberg, talking about how we've got new transparency measures. Now, if I right now try and use that Facebook Ad Library API, it is an utter disaster. I cannot hold those ads accountable, because I cannot monitor them, because the API does not give me the information that allows me to do that. That's a fundamental problem.
Say more about that. What is the information that it's not giving?
So it's buggy. But also, if I wanted to say, what ads right now are running in Tallahassee, and are they targeting people of color, is there a voter suppression campaign happening right now? I cannot do that.
I can search by state, but I cannot search by those demographic categories beyond gender and age.
Like that's, I don't think that's good enough.
It's not legible, in other words, for research.
Yes.
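To make the complaint concrete, here is a hedged sketch of a query against the Facebook Ad Library API. The endpoint, parameter, and field names follow the publicly documented ads_archive endpoint as of roughly this period, but treat them as assumptions to verify; the token and search term are placeholders. The point is what you can and cannot ask for: you can filter by country and keyword and read back aggregate reach distributions per ad, but there is no way to query by the targeting categories described above.

```python
# Hedged sketch of a Facebook Ad Library query. Endpoint and field names
# follow the documented ads_archive Graph API of roughly this period; treat
# them as assumptions to check. The token and search term are placeholders.
import requests

params = {
    "access_token": "YOUR_TOKEN",                # placeholder
    "search_terms": "voter registration",        # keyword search, not targeting criteria
    "ad_reached_countries": "US",                # coarse country filter only
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    # You can ask who an ad *reached*, in aggregate...
    "fields": "page_name,ad_delivery_start_time,demographic_distribution,region_distribution",
    # ...but there is no parameter like "targeted_city" or "targeted_demographic",
    # which is exactly the gap being described here.
}

resp = requests.get("https://graph.facebook.com/v6.0/ads_archive", params=params)
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), ad.get("ad_delivery_start_time"))
```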
And so my concern is these governments have been putting more pressure on the technology companies.
We have had more transparency mechanisms, but actually, you know, Macron isn't testing the Facebook API. We do not have good transparency measures, yet they're allowed to tick a box. And as I often say, they are marking their own homework right now. They write the transparency reports to say, we promise you we're doing better. That's not,
it's not viable. And so I want the governments to work with the technology companies to get
access to some of these data to really have an ability to show whether or not these changes
that are being promised, are they making a difference? That still puts, this ad library approach,
the ad transparency system, still puts responsibility on society to monitor how much harm is there.
Yeah. And the problem is, like, do the nonprofit civil society groups even have the resources? First of all, why did they sign up for that job anyway? Why should we have to have an ecosystem where we created all this work for people to just review how bad it is? It's like a gun manufacturer who's like, we're going to provide reports every quarter on exactly how many people our guns have killed. We don't need transparency reports, we need systems that don't kill people.
Yeah. And we need systems that don't target vulnerable populations. Now the question is, what does that look like? And again, I think that in Western markets, things like the ad transparency API have some basic level of scrutiny from journalists and so on. I think it's better than not having it. But that's assuming that there is this kind of fourth estate that's monitoring the things that are going on. In
countries like Cameroon or, you know, Kenya or whatever, how many organizations are looking at
the ad transparency API? Well, they don't have it. I mean, Australia doesn't have the ad library.
Oh, they don't. They went through an election. It's certain countries, and you can imagine which
countries have them. They are countries that have put more pressure on the tech platforms to say
we require transparency. But Australia had an election last year. We worked there. We've got a bureau
in Sydney. We couldn't do the work that we were doing in the UK. And what was the cost of not
having that transparency? A complete black box. Yeah. You know, that's that's the problem. And so,
of course, we don't have to have a long conversation now about whether Facebook is correct to say
we're not going to fact check those ads. But it is not correct to say that we have transparency
methods that allow others to check
whether or not. During the UK
election, for example, we found that
one of the parties, the Conservatives,
88% of the ads they were running
were using content that had been labelled as misleading
by the fact-checker, Full Fact.
88%. And that was
because we had downloaded the content, worked
with journalists and said, this is problematic.
So let's imagine 88%
of ads running in Sydney based on
misleading claims and Facebook
isn't checking it and no Australian
journalists can check it. It's just not...
If those are the numbers 88% in the UK, and we know in Brazil, 89% there, like, this does not look good.
I mean, if we don't have the data, we don't need more transparency that says how bad is the, like, 90%, 95% rates in these other countries?
We need to shut it down.
What I'm curious about is, let's say five years from now, it's 2025, and we've transitioned to a humane technology future. We've reversed out of this slide towards a dark age where people don't know what's true. In that period of five years, what happened? What did we do? What did governments do? What did Facebook do? What did we as a society do?
This is such a great question.
We took it seriously. And by taking it seriously, we slowed down. We added a ton of friction into the system, friction that stops our basest instinct to share without stopping to think.
And what does that friction look like, for example?
So there's the great work by Nathan Matias, who's now at Cornell, showing that it helps the more you ask people, are you sure this is true? before they share it, and the more you put delays in. Limiting the number of people you can reshare something to. All the stuff that we know. Giving more credibility signals; for example, if you have heuristics about where this information came from, you're more likely to go, oh, am I sure? We have enough evidence now about the things that we know work. But I think it's about slowing down.
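As a rough illustration of the kind of friction being described, here is a minimal sketch of a sharing flow with a confirmation prompt, a deliberate delay, and a reshare cap. The class names, thresholds, and prompt wording are all hypothetical, not any platform's actual design.

```python
# Minimal sketch of sharing friction: a confirmation prompt, a deliberate delay,
# and a cap on reshares. All names and numbers are hypothetical.
import time
from dataclasses import dataclass

RESHARE_CAP = 5           # hypothetical cap on how far a post can be forwarded
SHARE_DELAY_SECONDS = 10  # hypothetical pause before the share is published

@dataclass
class Post:
    text: str
    reshare_count: int = 0

def attempt_share(post: Post, confirm) -> bool:
    """Run the share through the friction steps; return True if it goes out."""
    if post.reshare_count >= RESHARE_CAP:
        return False                      # cap reached: no further resharing
    if not confirm("Are you sure this is accurate before you share it?"):
        return False                      # the prompt gave the user an out
    time.sleep(SHARE_DELAY_SECONDS)       # slow the loop down on purpose
    post.reshare_count += 1
    return True

# Example: a confirm callback could simply ask on the command line.
# attempt_share(Post("breaking news!"), confirm=lambda q: input(q + " [y/n] ") == "y")
```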
If we had more data; if we had not rolled out new features without proper testing; if there was real work with academics before stuff was tested in the wild. Like, I don't think it's a million miles away from where we could get to. In the last three years, I've probably gone to 150 convenings about misinformation globally.
I think you're the information disorder conference queen. I've seen you at every single one I've been to.
Right. And there have been some very good conversations at some of these things. But in these last three years, whenever we've said, this problem is complex, it's going to require a lot of people being involved in the solution, and it's going to take time, people go, yeah, yeah, but we just need Facebook to tweak the algorithm. We've had three years of people expecting a simple, quick response. One thing I would say about the last six months is there's a recognition of, wowsers, this is going to get a lot worse before it gets better. It's probably going to take 50 years, not to solve it, but to get to a state that's more humane and is actually not causing harm. And in order to do that, we need deep education of everybody in every sector, and it's going to require real cultural shifts. I want to walk outside and say, I have faith that the people around me are embedded in a healthy information environment: slower, more thoughtful, more reflective, more careful, higher-friction information environments.
My friend Eric Weinstein has this great saying that instead of critical thinking,
we're going to need critical feeling.
Yeah.
That we have to examine our own emotional reactions to the information that is presented to us,
and not just information, but to experiences that are presented to us.
I'm actually kind of hopeful, in this weird way, with coronavirus. We've been in this low-trust, amusement-driven world. Well, you'd better bet people are suddenly going to be concerned with what's actually true when it comes to the health of their family.
Yep.
Disinformation works if it taps into fears about your own safety and the people that you love. And on this, exactly: you can dismiss all the political nonsense, but this is about real harm.
You touched a little bit on emotional skepticism, and there are great media literacy programs, and there's a lot of money that's gone into, oh my God, we've got to teach the 13-year-olds. I think we know now, from some great research out of NYU, that it's actually the over-60s who are the bigger problem. But I wish that we could do more to teach people how to talk to one another about this.
So we talk about this in training. For example, if I go home for Thanksgiving and I'm like, hey, Uncle Bob, couldn't help but notice you posted something on Facebook. It's wrong, and here's a Snopes article that proves you're wrong. That doesn't work so well. And we know from the psychology, from worldview theory, that people just double down on their worldview. People feel great when you tell them their identity is wrong.
Yeah, it turns out.
So we talk in training about how it actually goes more like: hey, Bob, couldn't help but notice you posted that. I've been thinking a lot about this. Why are people trying to manipulate our communities? I'm watching this happen, people trying to divide us. What do you think? Why are people doing this, Bob? And I know that's a very simple example. But the language of we and us is not the language that journalists and fact-checkers and researchers like to use. We have to be better at teaching one another how to slow it down and how to get people to take responsibility for the information they share.
I mean, you and I have been talking for how long now? We haven't talked about people, the users, and how we are being weaponized. We can add flags to Facebook and add more labels, all the rest of it. But actually, if we don't stop my mom, or whoever, from sharing it, then we're in trouble. And we don't talk about that at all. So teaching people how to reverse image search a picture, yes, fine; how to read a headline, fine. But we haven't talked about the psychology of us.
Right. And we should be doing more of that.
There's this campaign that was brought to my attention by someone who actually helped create one of the major platforms. The theoretical name for the campaign was We the Media, instead of We the People, because in this new world of user-generated content, you and I are the journalists now, even though we don't think of ourselves that way. We are the information ecologists. But unlike someone who went to journalism school, had the training, knew that you have to ask for the opposing opinion before you publish the thing, or at least make sure you reach people with corrections, all of those kinds of basic rules, we're not operating with any of those rules.
And so you do the Indiana Jones swap: we had this media environment where all these people had studied certain norms, standards, ethical codes, producing information with certain flaws. And then you do the swap into this new environment where each of us is now an unpaid gig-economy attention laborer, driving around attention for free, using our own vanity and ego and narcissism to get as much attention as possible. Each of us is essentially an information provider, but we don't have the responsibility or the norms that protect us from making mistakes. So if I do something that's misleading, that never shows up on my reputation.
Yeah.
Imagine if, next to your profile on Twitter, alongside the number of followers someone has and the number of people they follow, there were some sort of responsibility score. People aren't going to like this; it sounds like China. But imagine we could agree on some set of values, and that set of values would accumulate into a reputational score, like a credit score.
Yeah.
Not based on true or false, but just, here are some standards, some open standards for... I don't know what those things would be. This would be something we work out in real time. How would technology adapt to support something like that? Because you can imagine that being built into the design of products.
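Purely as a thought experiment about how such an accumulating score might be built into a product, here is a sketch. The signals, weights, and scoring rule are invented; nothing here reflects any platform's actual system or an agreed set of standards.

```python
# Thought-experiment sketch of a "responsibility score" that accumulates from
# open, agreed-upon signals rather than a true/false verdict on each post.
# The signals and weights below are entirely hypothetical.

SIGNAL_WEIGHTS = {
    "shared_item_later_labelled_misleading": -5,
    "posted_correction_after_labelling": +3,
    "added_source_link_when_sharing": +1,
    "reshared_without_opening_article": -1,
}

def responsibility_score(history, start=100):
    """Fold a user's recent sharing history into a single bounded score."""
    score = start
    for event in history:
        score += SIGNAL_WEIGHTS.get(event, 0)
    return max(0, min(200, score))  # keep the score in a fixed range

# Example: two items later labelled misleading plus one correction
# lands at 100 - 5 - 5 + 3 = 93.
print(responsibility_score([
    "shared_item_later_labelled_misleading",
    "shared_item_later_labelled_misleading",
    "posted_correction_after_labelling",
]))
```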
Yeah.
I mean, I say this sometimes: 30 years ago, we could be at a party and you could be very drunk, and I could let you get into the car and go, and I'd say to my friend, oh, I hope Tristan gets home all right. Now I would have to take the keys away from you, because society has said I cannot knowingly let you get into a car drunk; that's not appropriate. With this, how do you say, wow, Brian, last week you posted at least three false things on Twitter; it's kind of embarrassing, mate? I think we have to, as a society, say we have to take responsibility for what we share. And again, I use this comparison sometimes even though it seems simple: littering. Every time somebody shares something false on Twitter, they're like, oh, it's like throwing a can of Coke out the window; what's the worst, somebody's going to pick it up. We have to say, yeah, but if we all do that, we're in
trouble. And I think we just haven't, because the audience has been completely absent from these conversations. You go to any of these convenings and it's, what can the government do? What can the platforms do? What can educators do? Yep, but we have to take some responsibility too, and right now I don't think there is any responsibility placed on us.
I think the reason people like myself often turn to the technology platforms to say, you've got to fix this, is because they operate at scale. The tech platforms are the vehicles by which you would distribute this education. So if you were running Facebook right now, and I know people ask you this all the time, or you're running Twitter or YouTube, what would be the way you would use that distribution vehicle to enhance personal responsibility?
So the worry about, hey, Bob, you shared three false things last week, is that it's really difficult. We don't have the AI systems right now to be able to automate that process, and a lot of this stuff is the gray stuff, so it's actually harder to do.
But, I mean, a couple of years ago, the Guardian newspaper started adding yellow labels to say,
this is from 2014.
I loved that.
Yeah, it's a great example.
It's like such a simple intervention.
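A minimal sketch of that kind of dating label, the sort of heuristic a platform could apply when an old article starts circulating again; the one-year threshold and the label wording here are hypothetical choices, not the Guardian's actual rule.

```python
# Sketch of a "this article is from 2014"-style label: if a shared article is
# older than a threshold, attach a prominent date notice. The threshold and
# label text are hypothetical.
from datetime import date
from typing import Optional

STALE_AFTER_DAYS = 365

def date_label(published: date, shared: Optional[date] = None) -> Optional[str]:
    shared = shared or date.today()
    if (shared - published).days > STALE_AFTER_DAYS:
        return f"This article is from {published.year}"
    return None

print(date_label(date(2014, 3, 2), shared=date(2020, 2, 28)))
# -> "This article is from 2014"
```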
And so, of course, there's a lot of hoo-ha about whether labels work; maybe they're going to backfire, blah, blah, blah. But there is research now that, actually, a satire label would make a difference, because there are 82 different satirical sites around the world. How many do you know? Probably The Onion.
Right.
The absence of those heuristics means you can't really blame Bob; nobody's helping him here. If we build in friction, if we add context, then I think that...
We do the Google meta tags thing so that fact-checkers can publish the article with a headline that's about the truth instead of the false thing, but the false thing can still get picked up.
Yeah, all those kinds of extra things.
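For readers unfamiliar with the meta-tags idea: fact-checkers mark up their articles with structured data modeled on the public schema.org ClaimReview type, so that search results can surface the verdict rather than the false claim. The values below are invented placeholders, and the fields shown are an illustrative subset rather than a complete or authoritative specification.

```python
# Illustrative subset of ClaimReview-style structured data (schema.org), shown
# here as a Python dict. The URLs, names, and claim text are placeholders.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/checks/long-lines-claim",
    "datePublished": "2020-02-28",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "claimReviewed": "Polling-place lines are hours long across the city",
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,      # on this outlet's own scale
        "bestRating": 5,
        "alternateName": "Misleading",
    },
}
```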
And a lot of this is, how can we research in real time whether or not this is having unintended consequences? So, the label thing I'm really obsessed with at the moment: we're partnering with the Partnership on AI on a six-month research fellowship to ask, can we have a universal visual language around manipulated media? When the drunk Nancy Pelosi video appeared, some people called it manipulated, some people said doctored, some people said transformed. As an audience, I don't really know what's happening here. So can we have a joint visual language that doesn't say, that's a bad video, but in some way signals that something has been added or altered? How can we help the audience know what's happening? And so I'm interested not in media literacy campaigns or education, but in how that can be baked into the platform in useful ways.
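One way to read the universal visual language idea is as a small, shared vocabulary of labels that every platform renders the same way. Here is a hypothetical sketch of such a taxonomy; the category names, descriptions, and rendering are invented for illustration, not an agreed standard.

```python
# Hypothetical shared label vocabulary for manipulated or misleading media.
# The enum values and descriptions are illustrative, not an agreed standard.
from enum import Enum

class MediaLabel(Enum):
    SATIRE = "Satire or parody; not intended as fact"
    FALSE_CONTEXT = "Genuine content shared with a misleading caption or date"
    MANIPULATED = "Genuine content that has been edited or altered"
    FABRICATED = "Content that is wholly invented"
    MISSING_CONTEXT = "Accurate content presented without key context"

def render_label(label: MediaLabel) -> str:
    """How a platform might surface the same label everywhere."""
    return f"[{label.name.replace('_', ' ').title()}] {label.value}"

print(render_label(MediaLabel.FALSE_CONTEXT))
```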
I love this. I mean, I think it was five years ago I had a side project that, instead of fact-checking, we called frame check. I think we do need, as an industry, a common language and vocabulary for just what the difference is between the words: distorted versus manipulated. What's the difference between to steer, to guide, to persuade, to influence, to seduce? They sit on different dimensions of the degree of control you have and the degree of asymmetry between one party and the other: how much does one know about the other party's manipulation? We need this common language for the subtler terrain of how the mind is being influenced and persuaded. And this would be a great thing to have baked into the tech industry, because I do think it's almost a missing component. We want to import this sort of
subtle, humane framework for how the human mind works.
Yeah. And, I mean, all jokes aside, I did my PhD back in 1999 in communication, and I was like, oh, Mickey Mouse degree, Claire. And now it turns out it was all about this: framing, priming, agendas, all that stuff. And I'd say, you know, bad actors, although I don't really like that phrase, are really good at psychology and emotion. Really good. To your point about disinformation, all these kinds of strategies have a kernel of truth. So with
voter suppression, I'm not saying, don't vote today. I'm saying, oh, the lines are long today. Is it wrong to say the lines are long? I mean, what is long? What's the official definition of long? Is it two miles long? It's all arbitrary, but I'm creating a suggestion that voting might be kind of hard, and if you're feeling kind of busy today, maybe it's not worth voting. And that subtle ability to persuade... I'd love to
see a full-on cultural transformation, both in journalism and media and in technology and
anyone working in the field of communications, where we stop talking in the language of speech
and we start talking in the language of subtle cognition.
Yep.
And those of us who are pushing quality information, we are dreadful at it.
We're rational.
We're all about facts and it is an asymmetrical playing field.
I actually did a talk last week at the National Academy of Sciences, and it was the ugliest slide deck I've ever created, because I basically just created meme after meme after meme. And I said, this is how your adversaries talk to each other, and this is how you talk: here's your 187-page PDF with an image of a dripping needle on the front cover. That's not how this works. And, of course, there was laughter around the room. But there was a recognition afterwards, which was, we're really bad at communicating.
And just the other day, the WHO put out a kind of leaflet about coronavirus, and they did exactly what you're not meant to do, which is: myth, myth, myth, in big letters, and then underneath, in small letters, the truth.
Right. You should never include the myth.
It's just, I don't know.
The whole thing is that you're making a lie sandwich, where you're repeating the lies more often than the truth.
How are we in 2020, in the middle of a coronavirus crisis, and the WHO, who are an amazing organization, nobody has taught them how to effectively push out this kind of information, using emotion, in a way that's compelling, not dumbing down? It's crazy to me.
Claire, before we go, what can people do to help your work? I know that you are on the ground, on the front lines of these things. I know that psychologically it's hard. I know that the lifestyle is hard. I know that, like you, we probably lose a lot of sleep. What kind of support do you, and the organizations you're working with, most need help with?
I think, particularly for people who listen to this kind of podcast, there has been the creation of this divide between the tech press and the platforms, which means that, understandably, there's this kind of rejection of wanting to partner and work together. And my sadness here is that there are many of us who work on the ground around the world who really have got things to offer. And sometimes just a quick phone call, or a coffee, or a break; I would just love some moments like that, without signing an NDA, where nobody's trying to get a gotcha moment of journalism. That's one thing. And I think the other
thing is to recognize what this work is; you know, let's just talk about it for a second. I mean, I spend the majority of my time fundraising as opposed to doing what I want to do. And I say to some people, I feel like we've got two years to save the planet. And that sounds crazy and insane and maybe over the top.
Tell people why that's true. I think people really don't understand why it's true that we have two years to kind of save this.
So, I mean, I started
to do this work 10 years ago, and I would stand in a room with BBC journalists and be like, don't worry too much about this, but just to let you know, during breaking news events there are a couple of hoaxers who are probably going to try and manipulate you. That was 10 years ago. Now I stand in rooms with the same journalists saying, you might have gone on hostile environment training previously, when you were about to report from the Middle East. I'm about to give you hostile environment training, because the way that you work now on the internet, it is a hostile environment. It's a hostile epistemic information environment. Let me tell you how to protect yourself, how to stop yourself being doxxed, how to stop yourself being harassed, how to stop yourself being manipulated.
And I see the speed at which this is happening. In two years' time, this country will be fully polarised. We will have two different sets of media. Nobody will believe anything from anybody. And I do think that there is still hope, but we cannot keep talking at convenings. We can't keep talking on podcasts about what we are going to do. I mean, we could have done this podcast three years ago and said exactly the same thing. And so, you know, we don't need a UN agency for disinformation, because that's going to take too long to set up. But we need to work quickly. We need to be agile. We need to do things like CrossCheck, a project built around journalists collaborating when that had never happened before. You know, what do we do at this moment of inflection? And I know everybody says this, but what do you want to look in the mirror and see? I just don't think we're taking this seriously enough. And I think coronavirus might be the thing that all of a sudden makes people go, this isn't a joke.
Claire, thank you so much for coming on. And I'm just
a huge fan of your work. Please keep doing what you're doing, and we all support you even if
you feel alone sometimes. Thank you very much.
Your Undivided Attention is produced by the Center for Humane Technology. Our executive producer is Dan Kedmey and our associate producer is Natalie Jones. Noor Al-Samarrai helped with the fact-checking; original music and sound design by Ryan and Hays Holladay. And a special thanks to the whole Center for Humane Technology team for making this podcast possible.
A very special thanks to the generous lead supporters of our work at the Center for Humane Technology, including the Omidyar Network, the Gerald Schwartz and Heather Reisman Foundation, the Patrick J. McGovern Foundation, the Evolve Foundation, Craig Newmark Philanthropies, and the Knight Foundation, among many others.
Huge thanks from all of us.