Your Undivided Attention - From Russia with Likes (Part 1) — with Renée DiResta
Episode Date: July 24, 2019. Today’s online propaganda has evolved in unforeseeable and seemingly absurd ways; by laughing at or spreading a Kermit the Frog meme, you may be unwittingly advancing the Russian agenda. These campaigns affect our elections' integrity, public health, and relationships. In this episode, the first of two parts, disinformation expert Renée DiResta talks with Tristan and Aza about how these tactics work, how social media platforms’ algorithms and business models allow foreign agents to game the system, and what these messages reveal to us about ourselves. Renée gained unique insight into this issue when in 2017 Congress asked her to lead a team of investigators analyzing a data set of texts, images and videos from Facebook, Twitter and Google thought to have been created by Russia’s Internet Research Agency. She shares what she learned, and in part two of their conversation, Renée, Tristan and Aza will discuss what steps can be taken to prevent this kind of manipulation in the future.
Transcript
It started with Kermit memes.
A lot of Kermit sipping tea memes, you know, kind of Kermit commenting on Ms. Piggy.
It was actually kind of raunchy.
Like the post was very irreverent.
Then one day there's a post of Homer Simpson and it says something like,
I killed Kermit or Kermit's taking a break or Kermit's gone.
Now this is my page.
That's Renee DiResta, one of our nation's leading experts on information warfare.
In 2017, the Senate Intelligence Committee asked Renee to investigate Russian attempts
to manipulate voters through social media by handing her data sets from suspicious
social media accounts. She started piecing together what Russian agents posted to these
accounts from day one. And what's the first thing she sees? Images of Kermit the Frog and
Homer Simpson. She's mystified. Is my data set broken, or are my numbers pointing to the wrong
things? What the hell's going on here? What's going on here is one of the least understood
aspects of how disinformation campaigns work. Russia's campaign, for instance, didn't necessarily
begin with a masterful manipulation of voter sentiments. Many accounts made no overt reference
to politics at all. Instead, they posted content that was eminently likable. They cycled through
beloved cultural icons like Kermit the Frog, Homer Simpson, or Yosemite Sam, because their goal
was deceptively simple. Rack up followers. And so you see them doing the hashtag follow back
and all of, you know, there's like 20 hashtags, 30 hashtags per post at the beginning.
Like follow us back. Follow us back, basically. So just trying to, trying to get new followers.
They're trying and failing, but they learn from their mistakes because the memes keep evolving. They're
getting stickier until one day salvation arrives.
I want to say maybe even 900 memes before they finally got to what this was,
which was the Army of Jesus page.
Many people have seen Army of Jesus content because the Senate and others have shared it.
It was pictures of Jesus, often with a MAGA hat on.
It's tempting to laugh at these bizarre memes.
I mean, Renee has laughed at a few herself, but she argues that these memes are no laughing matter.
For all their crudeness, they're actually really sophisticated.
They represent an evolutionary leap in propaganda, perfectly adapted to our time.
One of the things that I talk about a lot, particularly with Russia, and when we talk about disinformation campaigns now, I hear a lot like, oh, it's just some stupid memes.
And it's interesting to me to hear that, because I'm like, well, you know, they were running the same messages in the 1960s in the form of long-form articles.
And that's because in the 1960s, people were reading long-form articles, or they were listening to the radio.
And so you would hear propaganda on the radio.
So the propaganda just evolves to fit the distribution mechanism in the most logical information, you know, kind of reach of the day.
And that's why you see propaganda kind of evolving into memetic propaganda.
So in a way, they should be using memes, in fact.
That is absolutely where they should be.
And it's interesting to hear that spoken of so dismissively.
Today, on Your Undivided Attention,
we're going to take this new form of viral propaganda seriously.
We'll ask Renee how bad actors can craft a message
that seizes our attention and plays on our fears and grievances.
We'll see how they game social media's algorithms
to promote their inflammatory ideas
and, more importantly, drown out the voices of reason.
We'll see how a few users can overwhelm an online forum
and create the illusion of consensus.
And we'll see how fringe ideas as they enter into mainstream
can lead to world consequences on a scale
that we're only beginning to comprehend.
Kermit was only the opening shot.
I'm Tristan Harris.
And I'm Aza Raskin.
And this is Your Undivided Attention.
Renee, thank you for coming on the podcast.
Thanks for having me.
Renee, I am so excited to talk to you because I think you are sitting on top of one of the most interesting views of how our brains work, how our cognitive biases work, how our social biases work, so that not just our behavior is manipulated, but our identity is manipulated, so that we end up becoming useful idiots.
And I just love to hear, like, your overarching frame of, like, what's
going on? Yeah, so I studied disinformation campaigns. I got involved in this looking at American
conspiracy theorists, looking at the anti-vaccine movement and the impact that they were having on
shaping conversations about legislation. So not their outreach to parents, but actually their
outreach as a political force. As we began to have more and more disease outbreaks in 2015,
there was that cause and effect on that front. But then there was also, as we tried to use legislation
to deal with that, the way in which they were able to really galvanize a small group of people
to have a disproportionate impact on the conversation by using things like automation,
by doing very interesting strategic kind of social media coordination, looking at that.
When you say automation, what kind of automation did they have?
At the time it was mostly primitive bots, but primitive bots were all you needed in 2015
because Twitter wasn't very sophisticated about what would make something appear in a hashtag.
And so owning the share of voice around a conversation was much easier because you just really had to kind of own the hashtag.
And so they would just have these automated accounts that would be pushing out content 24-7.
So anytime you pulled up the hashtag SB277, which was the hashtag for the bill, what you would see was their content, was their point of view.
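[Editor's note: to make "owning the share of voice" concrete, here is a minimal sketch, in Python, of the kind of measurement Renee describes. The accounts and counts are hypothetical, chosen to mirror the pattern she cites later, roughly 20 accounts producing 65% of a hashtag's content; a real analysis would pull posts from the platform's API.]

```python
# Minimal sketch: how concentrated is a hashtag's "share of voice"?
# All data below is hypothetical, for illustration only.
from collections import Counter

def share_of_voice(posts, top_k):
    """Fraction of all posts produced by the top_k most active accounts."""
    counts = Counter(account for account, _ in posts)
    total = sum(counts.values())
    top = counts.most_common(top_k)
    return sum(n for _, n in top) / total

# A few always-on automated accounts swamp many one-off human posters.
posts = [(f"bot_{i % 5}", "anti-vax talking point") for i in range(650)]
posts += [(f"human_{i}", "organic tweet") for i in range(350)]

print(f"{share_of_voice(posts, top_k=5):.0%}")  # 65% from just 5 accounts
# A skewed distribution like this is the tell that a "conversation"
# is mostly automation rather than many independent voices.
```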
And I did a network map of the clusters of people in this conversation with a data scientist named Gilad Lotan.
And what we saw was this remarkable, highly centralized, deeply coordinated on-message collection of communities that were leveraging automation.
And then we looked at the public health voices, who were sort of the other side of this debate,
they would kind of like occasionally tweet.
There was no real message to be on even.
There were hashtags like the hashtag vaccines work.
But anytime the pro-vaccine side, which was so much smaller and less coordinated, would try to create a hashtag, it would just get taken over by the opposition because they would just add it to their list of things that were being pushed out.
And they would flood the channel on top you mean.
Exactly. Yeah.
And so the idea that this was a conversation was wrong.
And so one of the things that I did was, you know, look at this conversation, look at this hashtag, and then actually go to legislators and say, here are examples in which this is not really indicative of the balance of people who hold these points of view.
So when you're polling your constituents and they're telling you, 85% are telling you that they're in favor of this bill to remove personal belief exemptions, which is a way to just kind of opt your kids out of getting vaccines for school, 85% of your constituents are telling you they want you
to revoke that, to close that loophole.
But 99% of the social media conversation is saying the exact opposite.
And so that was the ratio, right?
You would see 99% in favor of the anti-vaccine movement.
It was overwhelmingly anti-vaccine on social platforms.
I don't know that we ever sat there and quantified the percentage of all messages
through the entirety of the hashtag.
But one thing that we did see was we would see these instances where like the top
20 accounts were sending out 65% of the content. There was a really strange distribution by which
10 participants in the hashtag were dominating the hashtag, and that's because there would be
these accounts that would just be kind of on 24-7. So it was really interesting to see that
divide. And even if you look at vaccination rates in California, you would still see that, you know,
85% or so were still vaccinating their kids. But if you were to look at the social media
conversation, it seemed like nobody was anymore. You know, it was all done. So it was
really kind of profound to have a first-person experience of that as a parent, as a person
who was fighting to get that bill passed. I was in no way neutral on this whatsoever, just to be
clear. I was really deeply surprised as we started to dig into how this conversation was taking
shape. I am not saying in any way that these were Russians or that these were fake, you know,
that this was a point of view that wasn't real. This is a point of view that is very real. But the
proportional representation of that point of view in the conversation, the amplification, was not real.
I hear this as a kind of consensus
hacking, that if you can control what people hear, it starts to be like, well, I guess everyone
else believes this. Absolutely. And that's where we didn't really have a term for it. I think
Sam Woolley, who's a disinformation researcher, came out with the term manufactured consensus,
maybe six months after or something. Because I wasn't the only one who was looking at this stuff.
There were researchers who were starting to say, like, something is really weird here. And we need to have a
better understanding of how this dynamic, this online dynamic, is changing our offline dynamics by
influencing policy. And I started writing these articles saying, you know, one of them was titled
something really blunt, like, social media algorithms are amplifying conspiracy theories.
This was back in 2015; early 2016, I think, was when that article came out. That was in
Fast Company. And that was because we didn't have a terminology for it. But the same thing that we had
seen with the anti-vaxxers in the U.S., all of a sudden there was, do you remember the Zika outbreak?
There were these insane conspiracy theories going wild on Twitter that Zika was, you know, a government-created disease.
The proliferation of these pseudoscience conspiracy theories and the way in which they were hitting people's Facebook feeds, people's Twitter feeds, because this was not yet seen as something that the platforms should have to deal with.
Right. Well, if we take their model, it's like, I mean, if I'd ask you right now: isn't the solution to
bad speech just more speech? Everyone has a right to say what they want to say. So if people are
saying things that are crazy, just make sure there's more speech. Shouldn't that be adequate?
Well, that was the state of the conversation in 2016 for sure. And there's an article
that I actually co-wrote with Sam Woolley and, I think, Ben Nimmo and two or three other researchers
at the time. It was in Motherboard. And actually, one of the things that we say in there,
which maybe is going to sound shocking now.
This was related to ISIS and the terrorist Twitter bots
was maybe we should just be running our own bots
because there was the idea that as long as the platform
wasn't going to do anything about it.
And that was the state of affairs at the time.
Yeah, there were a lot of people who were saying
maybe the solution to the ISIS Twitter bots
is more bad automated speech.
And it's, you know, and I'm almost hesitant to say that now
because it sounds so terrible, but that was,
there was a sense that either the platforms would have to come in
and somehow change, you know, restore some kind of balance.
Nobody even knew what to call it.
It's that we were so lacking in vocabulary for any of this stuff.
And we began having these convenings.
Researchers, platforms would participate, a lot of people in the conversation.
And there was a deep recognition across all parties
that we did not want the platforms to
be the arbiters of truth. Now, that was their term, and they used it quite often. We don't want to be
the arbiters of truth. But there's a lot of... Which, of course, presupposes that they aren't
already the arbiters of truth. Well, it's interesting because there's the content, right? And then
there's the distribution pattern. And if you kind of divorce those two things, the platforms
didn't want to be seen as being disproportionately biased against a point of view or a piece of
content. And that gets into realms of things like censorship and who decides
who decides what narrative can be said.
But when you look at the problem from a standpoint of distribution,
then you can say, okay, having accounts that are on 24-7
that exists solely to shift the share of voice,
maybe that's not indicative of the most authentic view
of what a conversation would look like.
So it became more of a conversation about integrity.
And how could we think about ways to make
sure that we didn't get into the morass of, is this point of view better than that point of
view, or is this an opinion that is allowed to be said but not that one, but instead look at things
like, is there what came to be called coordinated inauthentic activity, in which distribution is being
gamed? And that is much more quantifiable. It's easier to detect. You can look at it from
an anomaly detection point of view: this is not moving the way we would expect a normal
pattern of virality to look. Here is this account that's never been seen before that
has 500,000 retweets on its first tweet. How did that happen? Who even saw that first tweet? So you
can look at it more as a signals and patterns phenomenon. And so the content of the tweet or the
content of the post is not... You're not looking at what it is. You're looking at how it moves.
Exactly.
Sort of saying like, okay, well, we don't know how a virus works or what its DNA is, but we can know
that it's a virus based on how it's spreading from person to person.
That's the current kind of best practices as we think about how do we balance that right to
expression with the recognition that there are certain distribution tactics that game the
algorithm.
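[Editor's note: a minimal sketch of the "signals and patterns" framing: flag distribution anomalies rather than judge content. The two heuristics here, first-post engagement wildly out of proportion to audience size and posting in all 24 hours of the day, are illustrative assumptions, not any platform's actual detection rules.]

```python
# Minimal sketch: anomaly detection on how content moves, not what it says.
from dataclasses import dataclass

@dataclass
class Account:
    followers: int
    first_post_retweets: int
    active_hours: set  # hours of the day (0-23) in which the account posts

def looks_inauthentic(acct: Account) -> bool:
    # A brand-new account shouldn't out-travel its own audience by orders
    # of magnitude: "who even saw that first tweet?"
    engagement_anomaly = acct.first_post_retweets > 1000 * max(acct.followers, 1)
    # Humans sleep; accounts posting around the clock suggest automation.
    always_on = len(acct.active_hours) == 24
    return engagement_anomaly or always_on

suspect = Account(followers=12, first_post_retweets=500_000,
                  active_hours=set(range(24)))
print(looks_inauthentic(suspect))  # True
```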
So you have this line from your early work on this.
I remember when we were briefing Senator Mark Warner that if you can make it trend, you can
make it true.
Why is that true from a brain perspective?
So there's a lot of studies about repetition and people who hear a thing repeated over and over and over
again. This is how manufactured consensus works, actually. You begin to believe that this is a
dominant point of view or a thing that is true. The illusory truth effect, I think, is the cognitive
bias here. Yes. There is also the phenomenon that the correction never spreads as far as the
original sensational moment or meme or whatever. And so that's where you get at things like
something that sounds salacious, outrageous, sensational is going to spread like wildfire.
because people want to share it.
They want other people to know.
It's not being done out of maliciousness.
It's being done out of a desire
to communicate information to your community.
And the tools that we have built,
they offer up velocity and virality, right?
So it can spread fast.
Yes.
So it can spread fast and it can go far.
And the way it hops from person to person
and community to community is easier
than it's ever been before.
But it's very rare to see that very boring,
usually quite mundane or scientific
or measured correction,
put out after the fact, achieve any kind of similar distribution.
Let alone stick.
I think there's these studies where if you issue a correction, people actually do receive it.
They nod their heads.
And then if you test them five weeks later, they completely go back to the false thing,
which was more sticky in their brains.
Yeah.
And so the stickiness and the repetition and the exposure.
I think it's the ease with which people can encode or remember a message that determines how true we think it is.
So if you get hit again and again with a message, you've seen a lot, so it's easy to encode.
But humans also have a rhyming bias.
So if something rhymes, you view it as more true. Alliteration bias: if it alliterates, you view it as more true. Confirmation bias then actually makes sense in this frame, because you're like, oh yeah, it's easily encoded because I have all the bits, it fits with my worldview, so I'm going to view it as more true.
And this is where we realize that we're sort of always fighting fire with fire because it's really just understanding our own nature.
I mean, we have this line together.
Freedom of speech is not the same as freedom of reach.
Now, why would we encode a line like that?
And it's like, well, it's alliterative and it's more sticky.
And we're trying to put it out there because we're saying that we have to have a new conversation from free speech to talking about reach and amplification.
But we're using the same techniques.
So are we being, you know... And that's the thing: you can't escape this arms race, this land grab for these sort of tuning frequencies of the human nervous system.
You know, if that's a thing that resonates at that part of our nervous system,
it becomes this race to who's going to play more chords, right, on the human piano?
Hey, listeners, we're going to pause our interview with Renee for a few minutes to jump off that last point.
If these platforms weren't hacking into our nervous systems, what could they be doing?
I know oftentimes we can sound anti-technology.
We're not.
The point is technology is shaping exponentially more of the human experience.
So what is the world we actually want to build?
I want to credit that Facebook has made a lot of progress in cleaning up the hatefulness and division of news feeds and the outrageification of news feeds over the last two years.
They've done a lot on that.
The problem is they've probably only done that in English or a handful of Western languages from which they're getting the most negative press coverage and government pressure.
And all the countries, Kenya, Nigeria, Cameroon, Angola, South Africa, all these places where it's bad and there's probably very few people reporting on it or not getting nearly enough pressure, I mean, they're not building language classifiers that anti-maximize outrage in those countries.
And, you know, I think that's just a huge issue that has to do with their business model: we cannot have a world where that's what Facebook is about.
A lot of people think, you know, what are Tristan and Aza and Center for Humane Technology advocating for?
You know, like, just better ranking functions?
What do you want us to do?
Make people read the New York Times?
Should we give people fewer notifications?
It's like, no, let's just change the purpose of what Facebook is for.
So long as its primary interface is peer-to-peer sharing of links and content, it is vulnerable to the problems of a race to the bottom for attention.
But it doesn't have to be for that.
And even Mark Zuckerberg himself in 2005, when he described what Facebook was at the very beginning,
like a year into Facebook, he didn't say it's for getting content out to people and publishers
and making sure that people can browse the content they love or something like that.
He said it's a social utility that's kind of like an address book for keeping track of
and connecting with your friends.
And it could be about bringing us back to our local environments, embodied environments,
where, you know, more time at dinner table conversations with friends, more time having rich discussions, more time, you know, doing things we love, knitting groups, sewing groups, church groups, reading groups.
And that fits some of the rhetoric that they're making about, you know, Facebook groups.
But imagine that the new Facebook groups were only embodied groups, meaning places that you gather with physical people.
People are more reasonable when we meet them in person than when we see them from a distance.
Podcasts are more reasonable.
We get a more reasonable view into people when we
hear them that way versus when we see, you know, 255 characters of text on a screen.
And, you know, this is, I think, one of the simplest things that all of these tech social
platforms could do is they could start by asking us about our values, right?
If Facebook asked me whether I cared about climate change, and I said yes, then there are a number
of things that Facebook could do, whether it's reorganizing the news feed, whether it's about
telling me about groups that I could go participate in right now, or,
instead of showing me the people that are getting engaged, showing me all the people
switching to a plant-based diet, which I've said I've wanted to do anyway. All of these things
help my impulses line up with my values because they can ask. And not a single one of these
platforms asks us about our values right now. I mean, and this is the design research project,
which is how will and how can software actually be in conversation with our values as opposed to
our lower level nervous systems? And part of this Copernican revolution, of moving the moral
center of the human universe away from the authority of human feelings, you know, the triggers in our
nervous systems and behaviors, and calling that our truest revealed preference, is to say, no, no,
that's our revealed nervous system, not our true revealed values. And moving to a
revealed values model has to involve a new kind of design that's actually good at eliciting:
What about this is important to you? It could be as simple as, let's say you post an article
about climate change. You know, when I do that, why is that meaningful to you? You know,
people have to be articulate about this, but I think this is the research project. Like,
this is what all of Silicon Valley needs to engage in is what is important to us? And right now
software doesn't, we don't even know. That's like an open research question. It's exciting.
Like, let's figure out how to get into people's values instead of get into their nervous systems.
All right.
Let's go back to the interview with Renee.
The memetics is interesting because, so memes are not just cat pictures, right, but memes are units of cultural transmission, ways in which we encode meaning.
Memes are, you know, using the Dawkins sense, genes of culture, right, cultural genes.
And so the building blocks, the foundational building blocks of culture, which spread from person to person.
And there's that stickiness in memetics, which is why the thing that I think about a lot is ways in which platform design choices have facilitated memes as the kind of dominant form of information transmission, right?
So you want a square picture.
You've got to communicate your information in that picture.
People aren't reading long form articles quite the same way.
So it's a way to take something.
It has a – there's a visual – you know, you remember the visual.
You remember the alliterative message.
Usually it's quite simple.
You're going to get max two sentences in there, maybe even just one.
It's a fundamentally different way of transmitting these kinds of short, bite-size, completely nuance-lacking pieces of information very rapidly.
And they lend themselves to this virality.
Well, let's jump into that because this comes up so often.
People say, well, hold on a second, Renee.
You, Tristan, and Aza, hold your horses here.
We've always had propaganda.
We've always had advertising. Russia has always been doing this.
We've always had fake news. It goes back thousands and thousands of years.
Aren't you all just overreacting about this current state of play in 2019 with how platforms work?
What's your response to that, Renee?
And Aza, feel free to jump into.
Yeah, I say, well, absolutely. Propaganda's always been around, right?
Propaganda is information with an agenda.
There will always be information with an agenda because this is how we persuade people to points of view.
This is how we get people elected.
This is not new.
But then there's no cause for alarm.
We're in exactly the same state as we've always been.
I talk a lot about the kind of unification of three factors, which just kind of came about as social platforms evolved.
And that's the mass consolidation of audiences onto like five places, which means that you no longer have to reach people in every like local paper or local radio station or whatever.
You have this deep consolidation onto five platforms.
So I have to go to five places instead of going to a hundred.
So it's much cheaper.
That's right.
So there's more focus.
You can direct your energy and reach millions of people
in communities online. Then there's targetability.
So these platforms offer the ability to reach people.
That's how they make their money, which means that if you want to reach a particular
group of people, you can in a way that you never could before.
So you have that granularity where you're able to reach people, not just according to where
they live or what they read, but who they are, which is a very different thing.
And then the third piece is the gamable algorithms.
And that's where you get at the unintentional lift that you get from the platforms.
And so you see this when you look at going back to 2016, before Twitter decided to take action against the manipulation mobs, whether Russia or domestic ideologues or spammers, you would see nonsense trend regularly, because they just knew that if they could get it trending,
they could make it true.
Well, but not only that, they could also get it into mainstream newspapers because journalists were on Twitter.
So you could get extraordinary distribution there.
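[Editor's note: a minimal sketch of why velocity-based trending is gamable. The scoring rule below, last hour's volume relative to a trailing baseline, is an assumption for illustration; real trending algorithms are more elaborate, but the weakness Renee describes is the same.]

```python
# Minimal sketch: a naive trending score rewards sudden velocity,
# which a coordinated bot network can manufacture at will.
def trending_score(hourly_counts):
    """hourly_counts: posts per hour, oldest first."""
    *history, last_hour = hourly_counts
    baseline = sum(history) / len(history) + 1  # +1 avoids divide-by-zero
    return last_hour / baseline

organic = [40, 45, 50, 42, 48, 55]   # steady human conversation
coordinated = [2, 1, 3, 2, 2, 400]   # a bot network fires all at once

print(trending_score(organic))       # ~1.2: nothing unusual
print(trending_score(coordinated))   # ~133: rockets into "trending"
# A few hundred automated accounts posting in the same hour outscore a
# genuinely popular topic, which is how nonsense "trended regularly."
```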
Let's talk about that too, because you brought up this really important point when we first started
having some conversations together: once you make it trend, the reason it becomes
true is, take a conspiracy, for example, if the media reports on it, then they make it true because
they're spreading it everywhere. If the media doesn't report on it, then it's a media
conspiracy that they're not reporting on something that's true. And so it's a double bind. I've
tied your hands behind your back. Magicians do this all the time. You know, you create a false
choice. So do you want to talk a little bit about that? What we kept seeing was something would trend.
And if you remember, Facebook had a trending topics thing too. And this was a huge deal because
That's right. People forget about this because it's no longer there.
Because it happened more than six months ago.
Exactly. You mean yesterday.
Six hours ago.
Almost a testament to the downgrading of human attention spans, that we can't remember six months ago. Go on.
So with Facebook trending topics, there was this controversy that, you know, came to be called kind of Conservative-gate, where conservatives felt that Facebook was somehow silencing or preventing conservative topics from trending.
And Facebook's response was to eliminate human editorial curation from trending topics.
entirely, just to avoid any potential appearance that human bias was setting the agenda
for what people were seeing in trending.
And immediately after that happened, absolute nonsense started trending with regularity.
So there was an article about Megyn Kelly getting fired by Fox News, blatantly false.
There were articles by conspiratorial sites that nobody would think of as in any way reputable
that, you know, would make their way to the top.
One night I logged in and there was like a witch blog talking about a new planet that was trending.
Literally a witch blog.
I'm not kidding.
I took a screenshot and I sent it to a data scientist at Facebook and I said this is a disaster.
And this is because just to slow it down.
So people are hitting share on these stories.
Share, share, share, share, share.
That's my assumption.
And there was a one click instant share at the time.
Is that correct?
Well, Facebook, there was a, I think that there was one click instant share at the time.
And then also, trends were like moderately personalized, I imagine.
And then there was also that subset where you could click into the science trends.
Oh, I mean, there's a science trend one day about how, like, dinosaurs didn't really exist.
And these were things where I was just like, oh, my goodness, this is just all being gamed.
And they don't want to be seen as putting their fingers on the scale.
And so anybody who comes in with a click farm.
But aren't we just giving people what they want, Renee?
I mean, isn't this just, and you have two billion people jacked into a trending topics list of 10 things.
And I think the thing people miss about this, that, you know, attention is finite.
So when these things start flooding the channels, it's not as if there's this infinite supply of the alternative ecosystem.
This garbage starts to fill up the airwaves and becomes the airwaves.
It becomes the new normal.
When manipulation works at the first layer, you're trying to do something directly.
You're pulling directly on the puppet strings.
But once you kind of get into them, let's say you implant a habit or you implant a deep-seated belief.
You don't have to then pull on the puppet strings anymore.
You can take your hands off and watch the puppet go walk around in the world like shouting these beliefs.
that are now running through their mind
and they're automatically pursuing that agenda
and this is happening at many different scales.
A lot of what we saw with Russia
was the building up of tribes
and they do that
not by making you hate other people
but by making you have very strong points of view
about your identity
and your identity as a member of this group.
So what's an example?
So for black women,
for the content targeting black women,
a lot of it was just focused on family, what it means to be a black woman in America.
Inspirational images of aspirational black marriages, black fatherhood was a really big theme.
And these are Russian trolls pushing memes on black fatherhood.
Yeah, there's black hair, a lot of beauty fashion.
Black Don't Crack was, you know, a sort of phrase. I would reach out to black women researchers who study trolls.
Also, my data set was NDA'd, so I could not send them the memes, but I would say, like, so what do you know about this hashtag?
Because this is not content that I am regularly pushed as a white woman.
So I wanted to get some sense of like, is this something that they made up or is this something that they appropriated from actual black culture pages?
And what we would see, as we got more and more into the data set, this was a six-month or so, maybe eight-month study, were the ways in which they were just taking and repurposing hashtags and phrases
and memes and visuals, including ones that real black women had posted themselves.
So this is my Tumblr account with my picture of me.
And then they would take that and they would share it and they would say, you know, look at this queen or something.
And so one of the reasons the data that's not public actually is because there are so many images of people and they're real people.
Real people.
And that's because they would put their pictures on Tumblr or they would put up their photos and then that content was seen as something that could be appropriated.
By the Russian trolls.
Yeah, by the Internet Research Agency.
So the pages that were targeting black women, the pages that were targeting, you know, and this wasn't just, of course, it wasn't just the black community that was the recipient of this.
The black community was the majority, I would say.
Most of the content really did, they leaned very hard into the black community.
Right.
And notice, we're not saying something like, oh, they got duped or something like that.
It's not this at all.
It's actually just they're playing to pride and identity.
You would never know.
I mean, we see images all the time of different kinds of pride.
But they, so why did they go after...?
Same thing with the, with Southerners, actually. So the narratives about the Confederacy were not, they were not rooted in hate. It was like, we are proud descendants of this group of people who fought this war and this is our flag. And so it was very much a rally around that pride. Very rarely was it positioned in opposition. They began to position it strongly in opposition when the Confederate monuments were coming down. And even then that framing was about your identity. This is an attack on your identity. This is an affront to you as
a Southerner. They had a nuanced view of how the right operated too in the sense that pages
targeting older people leaned more into narratives of security, Ronald Reagan, lots of images
of flags. Just imagine the history of Russia pushing Ronald Reagan memes. I know, there's irony there.
The younger leaning Russian stuff was much more of like the kind of snarky. You know,
there was a meme that was, are you team conservative or team cuckservative?
This is younger conservative, you're saying.
Yeah, so, yeah, so it would lean more into the, you know, they had pages targeting the Tea Party.
They also had pages targeting more like the kind of like pro-Trump, right?
So they did have that segmentation.
And they would decide how to, they wanted to erode support for institutional Republicans as well.
So there was a ton that was anti-McCain, a ton that was anti-Lindsey Graham, particularly when Lindsey Graham was kind of at loggerheads with President Trump or then candidate Trump.
There was anti-Ted Cruz, anti-Marco Rubio content during the primaries, when they wanted to kind of bolster support for then-candidate Trump.
On the left, the political content took the form of anti-Hillary.
A bunch of stuff that was pro-Jill Stein, a bunch of, you know, when Bernie Sanders was still in the ring, pro-Bernie Sanders.
When Bernie was no longer in the ring, the conversations about the ways in which the Democratic Party had, you know, had wounded Bernie voters.
This is all rooted in real grievances.
Like there is some truth to a lot of this, and that's what makes it insidious because the hardest thing to respond to is always, yes, but we hold this point of view.
Who cares if the Russians said it because we hold this point of view also?
You're deepening a sense of an identity that someone already holds, right?
So when you're looking at that original ad targeting, if they're targeting an ad for a Christian page, you're targeting it to people who are receptive to that point of view already because they're
Christians, and that is a perfectly normal thing to be. And if you are a Christian, you should
want to find Christian content. Exactly. And the same thing with if you are a Muslim, that same
thing. If you are a Southerner and you want to find your, you know, your Texas Pride page,
I mean, I'm a New Yorker. I have New York pride, right? So there's that sense of who you are,
your identity, your place in the world, you find people who share that identity, you deepen that
affiliation, and then you add on the propaganda layer.
Then you add on the call to action.
This is a really strange idea.
Russia is creating propaganda for us that reinforces our worldviews.
We want to stop one more time here and really get into this question.
If Russia is just giving us things that we want, things we already identify with, things that
make us laugh in agreement, then what's the problem with that? Tristan and I talk it over.
Well, I think the question is when the receiver of that persuasion is unaware of the source of it
or the motives behind it, how do they feel? Like, I love getting that deal that says $100 off,
you know, buying this thing. And I'm like, oh, that's amazing. Like, I'll take that. But I don't
really know the economics or who's paying for it or why they want that to happen or is there something
that's going to happen afterwards, or maybe I'm signed up for something, or I just gave
away some piece of information about me, or now the voodoo doll of me is way more accurate,
and I didn't know how and that's going to cause me trouble like five years from now.
I mean, let's take this example that just happened.
There's this app called, I mean, it's so funny, people say, oh, persuasion, what a conspiracy theory.
Like hijacking our minds, aren't Tristan and Aza just exaggerating this whole thing?
This isn't really real.
Just yesterday, this app called FaceApp, I think, has been downloaded 150 million times.
And all it does is you take a photo
of your face, I think you do like a 3D scan type thing. And then it does a deepfake-style,
like CSS, almost like a CSS style sheet over your face to make you look older. But it's
really accurate. So it really makes you look like, this is what you will look like when you're
older. And it plays into just the core, it's like the perfect persuasive cocktail like mix,
mixer shakeup of persuasive ingredients. So it's vanity, like star of the show. It's about you. It's
about what you look like. People love that. Yeah. What am I going to look like when I'm older?
Two, social validation. So, like, what do all of you think of what my old face looks like? What do
you think of that? Isn't that kind of funny? Don't I look actually kind of attractive when I'm
older? And then the third thing, which is social proof. Hey, everybody else is doing it must be okay.
And guess where this app was built? Russia. Was it really? I think the company is called
Wireless Lab. Wow. Yeah. So there's this thing. And people say, oh, man, like, all those
dumb gullible people over there that got influenced by Russian propaganda. Wow, how
vulnerable they are. Like, good thing I, the smart one over here, I would never be
influenced by that. I have like at least 100 friends in my news feed on Facebook who have
actually installed this thing. And I have a lot of smart friends. I mean, it's not correlated
to intelligence, right? Or even critical thinking, right? It's about some core nature about,
you know, Aza, you and I, I think we were at this dinner, and someone we know,
the CEO of a major tech company who knew Zuckerberg in the early days, said that Zuckerberg said
this fascinating thing: that every human being responds to social validation. Not one human
being does not respond to social validation. It is a universal. And if you own that, you own that
fulcrum of what motivates people on a daily basis. That's why likes are so powerful. That's why having
your profile photo different tomorrow and having that visible for other people to see or respond to
is so powerful. That's why face app is so powerful. But when Russia just basically got the names
and photographs, close up photographs, of 150 million Americans for the 2020 elections if they want
to use it, now I'm not saying this was done by the FSB, although it definitely could have been.
Who knows? But certainly, because this app was built in Russia, it would not be very hard for
Russia to commandeer that database and say, great, what do we want to do now? And how about all
that deep fake stuff we can do? We'll start using it, we'll calculate who people's friends
are, we'll make up posts, like, there's a whole bunch of stuff you can do now that you have
that data set.
And people think, oh, only those gullible people could be influenced by propaganda.
The thing is, you know, in Kenya, Nigeria and South Africa, it was a third of respondents in
this one Nieman Lab study who said that they themselves had spread false news.
And so we see that the effect is like at the scale of, you know, a third of a population.
And that's just the people that they surveyed.
You know, it makes me think of...
Who admitted, though, they said that they knew that they had done it.
Right, who admitted it, which means the number was almost certainly higher because
who wants to admit that...
Oh, like, at least double if that...
Yeah, yeah, exactly.
And we always get this question.
Renee, you know, gets this question all the time of, okay, but did this stuff actually
influence elections?
And of course, that's a very hard thing to know, in part because every single human being
is in the control group, or rather is not in the control group;
they're in the experiment.
So how do you... It's not like there's a second Earth in which Facebook does not exist and is not doing these things.
You know, I could ask the question, did it swing the election?
And the answer is I have no idea because we just didn't have that granularity.
I can tell you that 500,000 people followed a particular page.
They were likely of a certain demographic, but I don't know where they lived or what their prior position was or anything.
It wasn't part of the data that I had.
But one thing that was very interesting was in the week leading up to the election, the content targeting the right was all about anger.
It was phenomenally angry.
It was, we need to get ready to have an armed insurrection if Hillary steals the election.
It was we have to vote to stick it to the elites.
You know, it was constant anger.
It was just constant anger to drive people to go take an action, go vote, go vote, go vote.
It wasn't even go vote because we love President Trump so much. It was go vote because she can't win. And if she wins, it destroys America. And so that was where you would see this, we have to be ready to riot. You know, meanwhile, on the left and in the black community, it was apathy. So it wasn't anger at all, actually. It was just, why would we get out of bed for this? This isn't for us.
Did you have an example that they were posting photos of like cute black families or something like that during election week?
It was still leaning into, they'd posted a lot of inspirational stories about, like, black youths in particular.
And that was always framed...
Actually, I mean, I loved reading them.
I thought it was great.
Like, as somebody who was reading these stories as I was going saying like, oh, that's an interesting story.
I hadn't heard about that.
But it was, it was being framed as like, this is the narrative
the media doesn't want you to see.
So all sides got that:
this is the narrative
the media doesn't want you
to see. So that erosion of trust in mainstream media, there were constantly memes about CNN,
who was controlling CNN. There were regularly posts about, and it wasn't just targeting
the right, because that's a pretty common narrative. The black community pages that they built
all pretended that they were independent black media telling the stories that mainstream media
wasn't telling. Now, the irony is that they were actually going and grabbing these stories
from American media, cutting and pasting them and repurposing them.
Of course, you know, I started tracing these stories back because I'm saying, okay, where are these stories coming from? Are they making them up?
Or are these people real? You know, these stories, these pictures. I've got pictures of people's faces. Is this actually the thing?
There was one about an African-American kid who ran a GoFundMe for a medical device that he wanted to build.
Sorry, it was a Kickstarter. It was a Kickstarter. And they wrote about this particular story like three times as he designed it.
It was to keep kids from dying in hot cars.
No, no, this was like a little bit earlier than that.
But they kept returning to these stories, and each of their fake black media pages of which there were about 30 would post the story on a different day and would repost the story again.
And so you would see them taking their content that was resonant and reusing it.
So they knew what their wins were.
And like any good social media operator, they would double down on those.
It was interesting to see how they did it.
But a lot of the narrative leading into election week for the African American community
was very much focused on, this isn't our country.
So there were stories of police brutality incidents,
which were a common theme throughout.
Because, again, this is rooted in real grievances.
Right, these are all real grievances.
Right.
And they, but they had that,
and then the frame that they used for that was,
so we shouldn't vote.
And so this is, you know,
we're second class citizens in this country.
They treat us terribly, so we shouldn't vote.
And so that was where you started to see
a lot of the "they" language,
a lot of the othering, you know,
this is, why would we participate in this process that is not for us? And so you build up
these pride-based groups, people who are proud because they have a deep connection to an
identity. And then you turn that on in an advantageous way when you want to be manipulative. And so
it was not black people shouldn't vote. It was as black people, we shouldn't vote. And it's a subtle
inflection, but it builds on the community that you're in.
It makes it incredibly hard to say, let's turn that off, because how can you? They're
just saying things that people already feel, in much the same way, you know, that people say
Trump says things that people already feel when he's saying extreme things.
What is the recourse against this?
What do you do?
Well, this is where the, this is where it gets really hard, right?
So none of the Russian content, or very little of it, would have come down on any terms
of service violation, because it wasn't really objectionable, because these are all
positions that real people hold. And so we would get into these interesting conversations,
particularly when we would talk about truth and not wanting to be arbiters of truth.
And this was, again, how the conversation came back to integrity. It's a really weird, nuanced
thing to have to work through, which is what is an authentic Texas secessionist? Who is an
authentic Texas secessionist? So there are people in America who are Texans, who are secessionists,
and that is their sincerely held belief. And under, you know, freedom
of expression, they have every right to express that sincerely held belief. How does Facebook
decide if the Texas secessionist page is run by a quote-unquote authentic Texas secessionist?
And this is where we get at some really challenging nuances. This is where you've got this
collection of, like, platforms have access to metadata. This is where you see the changes Facebook's made
where it tells you, like, the regions where the page managers are from and stuff like that.
But what if there's an expat Texas secessionist living abroad?
You know, I mean, it's just a mind-boggling collection of really hard questions.
It's a very hard problem.
So we do try to get back to dissemination patterns.
We do try to get back to account authenticity.
Is this account what it seems to be?
Ultimately, propagandists have to reach an audience.
And so that's one of the things that we look for is when you have to reach mass numbers of people, like, what do you do to do it?
And what makes that action visible?
We're going to end part one of Renee's interview here on this key question.
Senator Mark Warner said that for less than the cost of an F-35,
Russia was able to influence our elections.
This kind of viral propaganda is a new kind of thing.
It's an autoimmune disease that uses our platforms against themselves,
and it uses our values against us.
I thought of a metaphor as I listened to Renee's description of these memes.
In April of this year, there was a paper published by researchers at Harvard.
They stuck probes into the brains of macaque monkeys and then trained an AI to generate images
whose goal was to get the monkeys' neurons to fire.
The AI started with visual noise and it kept iterating and tweaking and dreaming up new images
until the neurons were hyper-stimulated, firing way more than they would for any natural image.
The images that resulted were glitchy and surreal, sort of like a bad trip.
You could see other monkeys and collars and faces with masks,
and I thought, isn't this sort of like
what we are doing to ourselves as a species,
testing hundreds of millions of pieces of content
against the human psyche?
And what emerges are bizarre things
that we might not understand, but that work.
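[Editor's note: a minimal sketch of the closed loop described here: start from noise, mutate, and keep whatever makes the neuron fire hardest. generate_image and neuron_firing_rate are hypothetical placeholders standing in for the study's deep image generator and the recorded macaque neuron.]

```python
# Minimal sketch: hill-climbing images against a neuron's firing rate.
import random

def generate_image(latent):
    return latent  # placeholder: a real system decodes latents into images

def neuron_firing_rate(image):
    # Placeholder objective; in the experiment this was a live recording.
    target = [0.8, -0.3, 0.5]
    return -sum((a - b) ** 2 for a, b in zip(image, target))

latent = [random.uniform(-1, 1) for _ in range(3)]  # "visual noise"
best = neuron_firing_rate(generate_image(latent))
for _ in range(10_000):
    candidate = [x + random.gauss(0, 0.05) for x in latent]
    score = neuron_firing_rate(generate_image(candidate))
    if score > best:  # keep mutations the neuron "likes"
        latent, best = candidate, score
# Swap the neuron for an engagement metric and the same keep-what-fires
# loop is how content gets optimized against the human psyche.
```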
So how can we make this kind of viral propaganda
first more visible?
And then second, how do we keep it
from going viral in the first place?
How do we upgrade our immune system?
Here's a clue from Renee.
This idea that we have a global public square
that's actually ludicrous.
That should never have existed.
The idea of that shouldn't even make sense to people, right?
We don't even have a national public square.
There's no such thing.
And there is something to be said for smaller scales of communication.
Tune in to the next episode of Your Undivided Attention
to hear part two of our interview with Renee DiResta.
Did this interview give you ideas?
Do you want to chime in?
After each episode of the podcast,
we are holding real-time virtual conversations
with members of our community
to react and share solutions.
You can find a link and information
about the next one on our website,
humanetech.com slash podcast.
Your undivided attention is produced
by the Center for Humane Technology.
Our executive producer is Dan Kedmi.
Our associate producer is Natalie Jones.
original music and sound design by Ryan and Hays Holladay.
Henry Lerner helped with the fact-checking
and a special thanks to Abby Hall,
Brooke Clinton, Randy Fernando,
Colleen Hakes, David J.,
and the whole Center for Humane Technology team
for making this podcast possible.
And a very special thanks to our generous lead supporters
at the Center for Humane Technology
who make all of our work possible,
including the Gerald Schwartz and Heather Reisman Foundation,
the Omidyar Network,
the Patrick J. McGovern Foundation,
Craig Newmark Philanthropies, The Knight Foundation, Evolve Foundation, and Ford Foundation, among many others.