Making Sense with Sam Harris - #145 — The Information War
Episode Date: January 2, 2019
Sam Harris speaks with Renee DiResta about Russia's "Internet Research Agency" and its efforts to amplify conspiracy thinking and partisan conflict in the United States. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Transcript
To access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed
to add to your favorite podcatcher,
along with other subscriber-only content.
We don't run ads on the podcast,
and therefore it's made possible entirely
through the support of our subscribers.
So if you enjoy what we're doing here,
please consider becoming one.
Today I am speaking with Renee DiResta.
Renee is the Director of Research at New Knowledge and the Head of Policy at the nonprofit Data for Democracy.
And she investigates the spread of hyper-partisan
and destructive narratives across social networks.
She's co-authored a recent report on the Russian disinformation campaign,
both before and since the 2016 presidential election,
and we talk about all that.
She's advised politicians and policymakers,
members of Congress, the State Department.
Her work has been featured in the New York Times and the Washington Post and CNN and many other outlets.
She's a member of the Council on Foreign Relations
and a Truman National Security Project Security Fellow.
She also holds degrees in computer science
and political science from SUNY Stony Brook.
As you'll hear, Renee was recommended to me by my friend and former podcast guest Tristan Harris, as an authority on just what happened with the Russian influence campaign in recent years. And Renee did not disappoint. So without further ado, I bring you Renee DiResta.
I am here with Renee DiResta. Renee, thanks for coming on the podcast.
Thanks for having me, Sam.
I was introduced to you through our mutual friend Tristan Harris. How do you know Tristan?
Tristan and I met in mid-2017. I had written an essay about bots, and he read it. And he shared it to Facebook, funny enough, and we discovered that we had about 60 mutual friends, even though we'd never met. And we met for breakfast a couple days later, and he wanted to talk about what I was seeing and the things I was writing about, and how they intersected with his vision of social platforms as having profound impacts on individuals, and my research into how social platforms are having profound impacts on policy and society. And we had breakfast, hit it off, and I think had breakfast again a couple days later.
So fast friends. Yeah, well, Tristan is great. So many people will recall he's been on the podcast,
and I think he's actually been described as the conscience of Silicon Valley, just in terms of how he has been sounding the alarm on the toxic
business model of social media in particular. So you touched on it there for a second, but
give us a snapshot of your background and how you come to be thinking about the problem of
bots, and also just the specific problem we're going to be talking about, of the Russian disinformation
campaign and hacking of democracy.
Yeah, so it's sort of a convoluted way that I got to investigating Russia and disinformation.
It actually started back in 2014.
I became a mom and I just moved to San Francisco a little bit prior and I
had to get my kid onto a preschool waiting list, which is not always easy. Yeah. Not like a nice
preschool, just like a preschool. And I knew California had some anti-vax problems and I
started Googling for the data sets. The California Department of Public Health has public data sets
where they tell you the vaccination rates in schools. Anyway, I looked and I thought,
God, this is a disaster waiting to happen. And lo and behold, a couple months later,
the Disneyland measles outbreak, in fact, did happen. And I reached out to my congressman.
It was the first time I'd ever done that. And I said, hey, you know, we should have a law for
this now. We should eliminate the vaccine opt-outs. And they told me
they were introducing something. So I said, great, I'd love to help. You know, I have a data science
background. I can maybe be useful as an analyst. And what wound up happening was that there was
this extraordinary thing as the bill took shape, which was that the legislators were finding that
polling in their districts was about 85% positive. Like people really liked the idea of eliminating
what were called personal
belief exemptions, the right to just kind of voluntarily opt your kids out. But the social
media conversation was like 99 percent negative. It was very hard to even find a single positive
tweet or positive Facebook post expressing support for this bill. And so I started looking into why
that was and discovered this entire kind of ecosystem of what was this hybrid between
almost activism and manipulation. So there were very real activists who had very real points of
view. And then they were doing things like using automation. So the reason that they were
dominating the Twitter ecosystem was that they were actually turning on automated accounts.
So they were just kind of spamming the hashtags that anytime you search for anything related to the bill in the hashtag, you would find their content. So this is kind of, you know,
this is sort of like a guerrilla marketing tactic. And I thought how interesting that they were using
it. And then realized that there were like fake personas in there. There were people pretending
to be from California who weren't from California. How were you figuring that out? How were you
assessing a fake persona? They were created within days of the bill being introduced, and they existed solely to talk about this bill. And then I discovered these communities on Facebook, things with names like Tweet for Vaccine Freedom, where there were actually moderators in the group who were posting instructions for people from out of state how they could get involved. And the answer was create a persona, change your location ID to somewhere in California, and then start tweeting.
So they sort of, you know, kind of, at the time, it seemed brazen. Now it seems so quaint. But
these tactics to shape consensus, to really create the illusion that there was a mass consensus
in opposition to this bill. And so a very small group of people using social media
as an amplifier were able to achieve dominance to just really own the conversation. And it led
me to think this is fascinating because what we have here is this form of activism where
there is kind of like a real core and then there's some manipulative tactics layered on top of the
real core. But if you're not looking for the manipulation, you don't see it. And most people
aren't going looking, you know, they're not digging into this stuff. So it was a kind of a first
indication that our policy conversations, our social conversations were not necessarily
reflective of, you know, kind of the reality on the ground, the stuff that we were still seeing in the polls. It was an interesting experience. And then a
couple months after that law was all, you know, all done, I got a call from some folks in the
Obama administration in the digital service saying, hey, we've read your research, because
I published about this in Wired. Hey, we've read your research. We'd like you to come down and look at some of the stuff that's going on with ISIS.
And I said, you know, I don't know anything about ISIS or about terrorism, candidly.
And they said, no, no, you have to understand, the tactics are identical.
The same kind of, you know, kind of owning the narrative, owning the hashtags,
reaching out to people, pulling them into secret Facebook groups. The idea that terrorists were actually following some of these kind of radicalization pathways,
these efforts to kind of dominate the conversation. Anytime there was a real world event related to
ISIS, they would get things trending on Twitter. And so people in the administration wanted to
understand how this was happening and what they could do about it. So that was how I wound up getting more involved in this in sort of a more official capacity. It was first kind of conspiracy theorists and terrorists, and then Russia, following the 2016 election. There was a sense
that, again, there had been these bizarre bot operations and they were far more nefarious
and sophisticated than anyone had realized,
and we had to do a formal investigation.
Before we get into the Russia case specifically, how do you view the role of social media in this?
Do you distinguish between the culpability or the negligence of Twitter versus Facebook versus YouTube?
Are there bright lines between how they have misplayed this,
or are they very similar in the role they're playing? I think that they've really evolved a lot since 2015.
In the early conversations about ISIS, just to kind of take you back to 2015, the attitude wasn't, oh God, we've got terrorists on our platform, let's get ahead of this, right? It was,
you know, Facebook, to its credit, took that attitude from day one. It was just,
this is a violation of our terms of service. We take down their content, we find them,
we shut them down. YouTube would kind of take down the beheading videos as they popped up.
Twitter, if you go back and you read
articles from 2015, as you know, I've been doing a lot of going back and looking at the conversations
from that time, you see a lot of sympathy for Twitter and this idea that if you take down ISIS,
what comes next? This is a slippery slope.
Interesting koan to ponder, Satan.
So, you know, well, I mean, if we take down ISIS, I mean, who knows what we have to take down next?
You know, one man's terrorist is another man's freedom fighter. And, you know, and I would be
sitting there in these rooms hearing these conversations saying, like, these are beheading
videos, you guys. These are terrorist recruiters. These are people who are killing people.
What the hell is this conversation? I can't get my head around it. But that's where we were in 2015. And, you know, go back and read
things that, you know, entities like the EFF were putting out, and you'll see that
this was a topic of deep concern. What would, you know, what would happen if we were to
silence ISIS? Would we inadvertently silence things that were tangentially related to ISIS?
And then from there, would we silence, you know, certain types of expression of Islam and so on
and so forth? And it was a very different kind of mindset back then. I think that the context has
changed so much over the last year, in part because of stuff like what Tristan is doing
and the tech hearings. And I think that 2016 was almost like this sort of, you know, Pearl Harbor that made people realize
that, you know, holy shit, this actually does have an impact. And maybe we do have to do something
to get ahead of this because everybody's doing it now.
Reading recent articles specifically about Facebook makes me think that there is just
an insuperable problem here. You can't put enough people on it to
appropriately vet the content, and the algorithms don't seem to be up to it. And the mistakes that
people plus algorithms are making are so flagrant. I mean, they're preserving the accounts of
known terrorist organizations. They're deleting the accounts of
Muslim reformers or ex-Muslims who simply say something critical about the faith. I mean,
people can't figure out which end is up, apparently. And once you view these platforms as
publishing platforms that are responsible for their content, it's understandable that you would
want to given the kinds of things we're going to talk about, but I don't know how they solve this.
There's a lot of, you know, Tristan and others have done a lot of work on
changing the conversation around culpability and accountability. And I think that, again, in 2015, 2016, you know, there would be references
to things like CDA 230, Section 230 of the Communications Decency Act, that gives them the
right to moderate, which they chose to use as their right to not moderate. And the norms,
I would say, that evolved in the industry around not wanting to be seen as being censors in
any way at the time, which meant that they left a whole lot of stuff up and didn't really do very
much digging. And now the shift, the pendulum swinging hard in the other direction, is leading to allegations that conservatives are being censored, and to situations where, per your point, unsophisticated moderation (I think there was an article about this in the New York Times over the weekend) has led to some disasters where they take down people fighting extremists. The shift in the attitudes of the public has led them to start to try to take more responsibility. And right now it's being done in something of a ham-handed way.
Yeah, well, they're certainly culpable for the business model. I have kind of less of a view of Twitter here, because Twitter doesn't seem to have its business model together in the way that Facebook does. But clearly Facebook, you know, per Tristan's point, their business model
promotes outrage and sensationalism preferentially. And the fact that they continue to do that is
just selecting for these crazy conspiratorial divisive voices. And then they're trying to kind of curate against those,
but they're still amplifying those because it's their business model.
And at least that's the way it seems as of my recent reading of the New York Times.
Is that still your understanding of the bad geometry over there?
Yeah, I would say that's accurate.
So I see a lot of, you know, I try to focus on the disinformation piece.
There are some
people who work on privacy, some who think about monopoly, you know, a lot of different grievances
with tech platforms these days. But I see a lot of the manipulation specifically, I would say,
comes from a combination of three things. There's this mass consolidation of audiences on a handful
of very few platforms. And that's just because of how the web moved away from, you know, decentralization. There's always been manipulation and disinformation and lies on the internet, right? But the mass consolidation of audiences onto a very small handful of platforms meant that if you were going to run a manipulative campaign, much like if you were going to run a campaign for, you know, Pepsi, you only had to really blanket five different sites. And then
the second piece was the precision
targeting, right? So the ads business model, the thing that you're referring to, these are
attention brokers, which means they make money if you spend time on the platform. So they gather
information about the user in order to show the user things that they want to see so that they
stay on the platform. And then also as they're gathering that information, it does double duty in that they can use it to help advertisers target them.
And then I would say the last piece of this is the algorithms that you're describing and the
fact that for a very, very long time now, they've been very easy to game. And when we think about
what you're describing, the idea that outrage gets clicks, that's true. And the algorithm,
particularly things like the recommendation engines, they're not sophisticated enough to know what they're
showing. So there's no sense of downstream harm or psychological harm or any other type of harm.
All they know is this content gets clicks and this content drives engagement. And if I show
this content to this person, they're going to stay on the platform longer. I can, you know, mine them for more data. I can show them more ads.
So it's beneficial to them to do this. And I think one of the interesting challenges here is as we
think about recommendation engines, that's where there is, in my opinion, a greater sense of
culpability and a greater requirement for responsibility on the part of the platforms.
And that's because they've moved into acting as a curator, right? They're saying,
you should see this. And the recommendation engines in particular often surface things that are
not necessarily, you know, what we would want them to be showing. This is how you get at things like, you know, my anti-vaxxers, right? I had an anti-vax account, an account that was
active in anti-vax groups, and it didn't engage with any of the people. It just sort of sat in
the groups and, you know, kind of observed. And it was being referred into Pizzagate groups.
So long before Pizzagate was a matter of national conversation, long before that guy showed up with a gun and shot up a pizza place thinking that Hillary Clinton was running a sex dungeon out of the basement,
these personas that were prone to conspiratorial thinking, the recommendation engine recognized that there was a correlation: people who were prone to, you know, conspiracy type A would be interested in Pizzagate, which we can call conspiracy type B. And then soon enough, QAnon started to show up in the recommendation engine.
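The mechanism she is pointing at is simple co-occurrence: "accounts that joined group A also joined group B, so suggest B." Here is a minimal sketch of that logic, with hypothetical group names and membership data; real recommendation engines are far more elaborate, but the chaining effect is the same.

```python
# Minimal sketch of co-occurrence-based group recommendation (hypothetical data).
# Illustrates how "people in group A are also in group B" chains an account
# from one conspiratorial community into the next.
from collections import defaultdict
from itertools import combinations

# Hypothetical membership data: user -> set of groups they belong to.
memberships = {
    "user1": {"anti_vax_group", "pizzagate_group"},
    "user2": {"anti_vax_group", "pizzagate_group", "qanon_group"},
    "user3": {"anti_vax_group"},
    "user4": {"pizzagate_group", "qanon_group"},
}

# Count how often each ordered pair of groups shares members.
co_counts = defaultdict(int)
for groups in memberships.values():
    for a, b in combinations(sorted(groups), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(user, top_n=2):
    """Suggest groups that co-occur most with the user's current groups."""
    current = memberships[user]
    scores = defaultdict(int)
    for g in current:
        for (a, b), count in co_counts.items():
            if a == g and b not in current:
                scores[b] += count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# An account that only sits in the anti-vax group gets nudged toward adjacent groups.
print(recommend("user3"))  # ['pizzagate_group', 'qanon_group']
```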
And so the question becomes, you know, where is the line? You know, the platform is actively
making a recommendation here. These accounts have never gone and proactively searched for
Pizzagate and QAnon. They're being suggested to them. So where is the responsibility? Should we have the recommendation engine not surface that type of content, or is even making that suggestion a form of censorship? These are the kinds of conversations I think we'll start to see more of in 2019.
Let's focus on the topic at hand, which is Russian interference in, I guess, democracies
everywhere, but specifically the U.S. presidential election in 2016 and the recent report that
you helped produce on this, which runs to 100 pages.
And I'll put a link to that where I post this on my blog.
First, I just got a big picture, sort of political partisan question. It seems to me that many people, certainly most Trump supporters, continue to doubt whether Russia interfered in anything in 2016.
And this is just, you know, this is fake news.
Is there any basis for doubt about that at this point?
Nope.
This is just crystal clear as a matter of what our intelligence services tell us and as a matter of what people like you can ascertain by just studying online
behavior. It happened. There's really nothing else to say about it. The intelligence agencies know
it happened. Foreign governments know it happened. Researchers know it happened. The platforms
acknowledge it happened. I mean, sure, there can be some small group of people who continues to live like ostriches, but that doesn't mean that it didn't
happen. And what do you do with the charge that we do the same thing all the time everywhere
ourselves? So there's really nothing to complain about here. Well, I mean, we probably do it to
each other at this point, right? There's evidence of that as far back as 2016, you know, some of the insinuations about Alabama.
There's a whole lot of evidence that domestic groups can and do do this as well.
And that's why what I keep coming back to when I talk about this topic publicly is that this is not a partisan issue.
This is not a one-state, one-foreign-actor-interfering-in-one-moment issue.
This is sort of just an ongoing global challenge at this point. If we're speaking specifically about Russia and whether that
happened, I think that it's incontrovertible truth at this point. Yeah. And the other thing
that seems incontrovertible is that it happened to favor the election of Trump in many obvious ways and in many surprising ways that
we'll go into. But they were not playing both sides of this. This was not a pro-Clinton campaign.
And in your report, you break down three ways in which their meddling influenced things, or
attempted to influence things. We're going to be talking
about one of them, but I'll just run through those three quickly and then we'll focus on one.
The first is there were attempts to actually hack online voting systems. And, you know,
that's been reported on elsewhere. Secondly, there was just this very well-known and consequential
cyber attack on the Democratic National Committee and the leaking
of that material through WikiLeaks. And that was obviously to the great disadvantage of the
Clinton campaign. Then finally, and this is what we're going to focus on, there was just this
social influence based on the disinformation campaign of the sort that you've just described, using bots and fake personas and
targeting various groups. This was surprising. When you get into the details of who was targeted
and the kinds of messages that were spread, it's fairly sophisticated and amazingly cynical.
There's a kind of morbid fun you can imagine these people were having at our expense in how they played one community against another in American society.
So let's focus on this third method.
And this was coming from something called the Internet Research Agency.
We'll call them the IRA, as you do in your report.
What is the IRA and what were they doing to us?
So the IRA is, you can think of them a little bit as a social media marketing agency meets
intelligence agency. So what they did to a large extent was they kind of built these pages,
they built these communities, they built these personas, and they pretended to be Americans.
Americans of all stripes.
So some were Southern Confederates, some were Texas Secessionists, some were Black Liberationists.
They had all of these personas, they really ran the gamut.
What they were doing was they were creating pages to appeal to tribalism.
So a lot of the conversation about the IRA
over the last two years has referred to this idea
that they were exploiting divisions in society.
And that's true.
But the data set that I had access to,
which was provided by the tech platforms
to the Senate Intelligence Committee,
was the first time that anybody saw the full scope,
you know, through the full two and a half years.
And what we saw there was not a media marketing,
you know, meme shit poster type agency that was just throwing out memes haphazardly and trying to
exploit divisions. What they were trying to do was grow tribes. So a little bit different.
The IRA originally started as an entity that was designed to propagandize to Russian citizens, to Ukrainian
citizens, to people who were in Russia's sphere of influence. And the early stuff in the data set,
Twitter provided the earliest possible information of the material the companies gave us,
was actually Russian-language tweets talking about the invasion of Crimea. It was, you know, creating conspiracy
theories about the downing of the Malaysia Airlines flight MH17. So the early activities
of the IRA were very much focused inward, focused domestically. And then around 2015,
they turned their energy to the United States in what the Mueller indictment and some of the Eastern District court indictments have been referring to as Project Lakhta.
So Project Lakhta was when the effort to grow these American tribes really started.
This precedes the election, right? So this precedes Trump's plausible candidacy. And there was still this goal of amplifying tribalism in the U.S.
Yeah. So the goal was to create these. So this was a long game. This was
not a short-term social media operation to screw around with an election. This was a long game to
develop extended relationships, trusted relationships with Americans. And what they
did was they created these pages. So an example would be Heart of Texas, a page that really amplified notions of Texas pride. Almost all of their pages (an LGBT page, pages targeting the Black community, pages targeting Confederate aficionados) were designed around the idea of pride, pride in whatever particular tribe they were targeting. So the
vast majority of the content, particularly in 2015, in the early days was, you know,
we are LGBT and proud, we are Texans and proud, we are proud descendants of Confederates. And so
this idea that you should have pride in your tribe was what they reinforced over and over and over and over again. And then you would see
them periodically slide in content that was either political or divisive. Sometimes that would be
about othering another group. So we are, you know, some of the content targeting the Black community
in particular did this. This country is not for us.
We're not really part of America. We exist outside of America. And so a lot of exploitation
of real grievances tied to real news events. So constant drumbeat of pride plus leveraging
real harms to exploit feelings of alienation. Sometimes you would see them do this with
political content. So as the primaries heated up, that was where you started to see them
weaving in their support for candidate Trump, weaving in their opposition to candidate Clinton.
I'm looking at your report now, and I'm seeing this list of themes. I'll just tick off some of
these because it's, again, rather diabolical and clever
how they were playing both sides of the board here. So they would focus on the black community
and Black Lives Matter and issues of police brutality, but also they would amplify pro-police,
Blue Lives Matter pages. It had anti-refugee messages and immigration border issues,
Texas culture, as you said,
Southern culture, Confederate history, various separatist movements, Muslim issues, LGBT issues,
meme culture, red pill culture, gun rights and the Second Amendment, pro-Trump and anti-Clinton,
and more anti-Clinton in the form of pro-Bernie Sanders and Jill Stein, Tea Party stuff,
religious rights, Native American issues. And all of this is just sowing divisiveness and conflict.
Although it really does seem there was, to a surprising degree, a focus on the Black community.
Do you have more information about or just an opinion about why
that was such an emphasis for them? Yeah, so there were 81 Facebook pages,
133 Instagram accounts. Of the 81 Facebook pages, 30 focused on the black community.
Now there were other pages that focused on other kind of traditionally left
leaning groups, as you mentioned, Muslims, Native Americans, Latinos. So there was, you know,
there were other kind of non-black lefty pages. Before we go on, Renee, those numbers don't sound
very large. So 81 Facebook pages sounds like not even a drop in the ocean. I think we should give some sense of the scale of what happened here.
Yes. So there were 81 Facebook pages.
I think there were about 62,000 posts across them.
There were 133 Instagram accounts, 116,000 posts across them.
There were about 187 million engagements on the Instagram content and another 75 million engagements on the Facebook content. And an engagement is a like, a share, or a comment.
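For a rough sense of scale, the per-post averages implied by those approximate figures come out to a bit over a thousand engagements per post on each platform; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope averages from the approximate figures quoted above.
facebook_posts, facebook_engagements = 62_000, 75_000_000
instagram_posts, instagram_engagements = 116_000, 187_000_000

print(facebook_engagements / facebook_posts)    # ~1,200 engagements per Facebook post
print(instagram_engagements / instagram_posts)  # ~1,600 engagements per Instagram post
```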
The pages, to be totally clear, had what I would call a long tail: about 20 of them were successful enough that they had, you know, followings in the hundreds of thousands.
And then a lot of the remainder, the long tail was just crap. They were just failed pages.
And so one of the things that was actually interesting was you could see them in the
data set pivoting those pages. So pivoting their failures, going in there and saying, like, okay, well, one example is the Army of Jesus page. A lot of people have seen
some of the memes of like Hillary fighting Satan. There are about 900 posts by that account before it found Jesus.
It started as a Kermit the Frog meme page, you know, memes of like Kermit sipping tea and stuff,
and they didn't seem to get enough traction there. They pivoted it to a Simpsons meme page.
And it was, you know, sharing these kind of ridiculous Homer Simpson memes,
again, just like messing around with American culture, seeing what stuck. When that didn't
stick, all of a sudden it became a religious page devoted to Jesus. They seem to have then
kind of like nailed it. You start to see the memes doing things like "like for Jesus." When you do something like "like for Jesus, share for Jesus," they're getting people to share
their content organically. So you actually see them kind of hitting their stride with standard
kind of tactics of social media audience growth with examples like this, this Army of Jesus
account. So it's absolutely true that many of their pages were complete failures that had no
lift. But then some of their pages were actually, if you go and you look at the audience reach using things like CrowdTangle,
and you look at their engagements versus the engagements for other conservative pages or
other black media, you do see them kind of popping up in the top 20, top 50 in terms of engagement
overall. So, you know, am I saying these were like the best possible pages for this
content for these audiences? No. But what they did do was they achieved substantial success with some
of them, and they used their successful pages to direct people to their other pages. They did this with the Black community in particular. I can't say effectively, necessarily, because I can't see the conversion data; I know that they showed people these other memes, I don't know if people converted to the page for these other memes. But what they were doing was they were saying, if you like this content from our page Blackstagram that you're following, here's some other, you know, hey, look at this other group called Williams and Calvin. Now, of course, there's no disclosure that the Internet Research
Agency is also running Williams and Calvin. And then they're saying, look at this other content
from this page called Blacktivist. Look at this other content from this page called Nefertiti's
Community. So a lot of this kind of cross-pollination of audiences in an attempt to
push people so that if they're following one of their accounts, one of their pages,
they're inundated with posts from the others.
Right. And they're also amplifying legitimate pages that are, you know, highly polarized in their message. So what's cagey here is that they're not only creating their own fake partisan accounts.
If you'd like to continue listening to this conversation,
you'll need to subscribe at
SamHarris.org. Once you do, you'll get access to all full-length episodes of the Making Sense
podcast, along with other subscriber-only content, including bonus episodes and AMAs,
and the conversations I've been having on the Waking Up app. The Making Sense podcast is ad-free
and relies entirely on listener support, and you can subscribe now at SamHarris.org.