Your Undivided Attention - Disinformation Then and Now — with Camille François
Episode Date: March 18, 2021. Disinformation researchers have been fighting two battles over the last decade: one to combat and contain harmful information, and one to convince the world that these manipulations have an offline impact that requires complex, nuanced solutions. Camille François, Chief Innovation Officer at the cybersecurity company Graphika and an affiliate of the Harvard Berkman Klein Center for Internet & Society, believes that our common understanding of the problem has recently reached a new level. In this interview, she catalogues the key changes she observed between studying Russian interference in the 2016 U.S. election and helping convene and operate the Election Integrity Partnership watchdog group before, during and after the 2020 election. “I'm optimistic, because I think that things that have taken quite a long time to land are finally landing, and because I think that we do have a diverse set of expertise at the table,” she says. Camille and Tristan Harris dissect the challenges and talk about the path forward to a healthy information ecosystem.
Transcript
To see these major social media giants saying, hey, this is what we're doing.
This is why we're doing it.
These are the detailed rationale for how we've made this decision.
And this is how many accounts are impacted.
These are unprecedented levels of transparency.
And she would know.
That's disinformation researcher Camille Francois, and she's been trying to see behind the curtain of social media for years.
But they're not unprecedented levels of enforcement.
What we need is to keep the long
view here. And just because we don't have that much transparency from the platforms on the many
decisions that they've made in the past doesn't mean that there are not a lot of precedents here
that are really important. While what grabs the headlines are cases of influential people
being de-platformed, and that's what's steering our public conversations, Camille pointed out that
for years, social media platforms have been continually making decisions, both directly and indirectly,
intentionally and unintentionally, about who can use these tools.
It's hard enough to manage the situation just in the U.S.,
but it's even more convoluted, uneven, and incomplete
in many other parts of the world.
There are many other groups that have been de-platformed in the past.
Some have been completely driven out of the platforms by abuse.
A lot of discussion around sex workers, for instance,
who have been de-platformed in many ways, right?
And so while we have this limited and unprecedented transparency
on recent decisions, we also have a very, I wouldn't say very long because it's still
a short history of social media, but there's a much longer set of important precedents and
decisions that have been made in the past and other communities that have been affected.
And as we think through who gets to be online, who gets to express their voice,
we have to think about the full set of decisions that have been made by these platforms
around the globe and across the years.
Today on the show, Camille François will walk us through some of the factors informing these decisions
and tell us why she's optimistic about tackling disinformation at this particular chaotic moment.
She is Chief Innovation Officer at the cybersecurity company Graphika,
and an affiliate at the Harvard Berkman Klein Center for Internet & Society.
I originally met her when she was studying global security technology as a researcher in Google's Jigsaw division.
In the middle of last year, she, with Graphika, helped form the Election Integrity Partnership,
which monitored online efforts to undermine the 2020 U.S. elections in real time.
She's also featured in a new HBO documentary by Alex Gibney called Agents of Chaos
about Russian hacking in the 2016 U.S. election.
There are no easy answers to the question of who gets to be on the internet doing what.
But Camille is one of the few people with expertise and knowledge to help us dissect the issues,
add context, and guide us forward.
I'm Tristan Harris.
I'm Aza Raskin, and this is your undivided attention.
Camille, welcome to your undivided attention.
Thank you, Tristan, for having me and for giving me your undivided attention
for this fun conversation together.
Camille, for those who don't know about your background,
would you just tell our listeners a little bit about how you got into the space of
disinformation?
Sure.
I have been working on disinformation for a few years.
And I think I came to this topic because originally I was focused on how governments
use digital technologies to control, to oppress.
And I was focused, for instance, on questions related to cyber conflict and cyber warfare,
the types of governments that would hack into journalists'
phones, for instance. And very quickly, talking to vulnerable users who are often targeted by
well-resourced governments, they would say, we're also worried about disinformation.
We're worried about coordinated harassment campaigns.
We're worried about the bots and trolls that are deployed to harass us.
And so back in 2015 or so, we started really doubling down on this topic and trying to understand
the space and understand meaningfully how governments, well,
I don't particularly like the term weaponized social media, but yeah, that's a little bit of that.
So I think that's how I fell down the rabbit hole and then disinformation became a bigger and bigger topic.
And yeah, it stuck with me for a bit.
And Graphika and your team was one of the two teams given access to the Russian data set from 2016 by the social media
companies, along with Renee DiResta's team, analyzing all of the Russian memes, posts, comments, et cetera,
that were basically shared on Facebook and Twitter, I believe, throughout
the 2016 election. That's right. In 2017, the Senate Select Intelligence Committee tried to get to the
bottom of what had really happened with Russian interference in the 2016 election. And I think the
Senate recognized that some of these answers were in the data and that social media companies
had that data. It was a really important moment because up until then, a lot of this data had
never seen the light of day. There was no sort of expectation that social media platforms needed to
share publicly when they detected foreign interference campaigns. And so that project with the Senate,
which had the Senate ask Google, Twitter, and Facebook to all hand over the data that they had
on what had happened with Russian interference to the Senate for us to analyze, also established
a precedent that evolved into a practice in the field where we now see these social media platforms
continue to share the data that they are continuing to find on some of these foreign actors
and foreign interference campaigns. So it was a pivotal moment in many ways. It was a moment of
reckoning with this topic of foreign interference. It was a moment of forcing transparency.
And for me, it was also a confirmation that if you really wanted to get to the bottom of these
issues, you needed to do it in a cross-platform way at the scale of the internet. And you couldn't just look
at what's happening on Facebook or what's happening on YouTube or what's happening on Twitter,
right? All of these campaigns really span multiple platforms. And if you want to meaningfully
understand them, you have to look at them at the internet scale. And in this case, this is sort
of the black box model, flight recorder model of after the plane goes down, after the events have
happened. Here are the records from the different platforms so you can analyze them at internet scale.
And then what's interesting as we enter 2021 and look back at the 2020 election is how we start
to look at these different platforms in real time.
Right. Sharing this information publicly, well, first sharing this information with the Senate
and then publishing this information also enabled people to better understand what these
campaigns look like. What are the hallmarks of this disinformation campaign?
And when you do that, you meaningfully empower people to get better at detection so that the
next time over you detect it not, you know, months after the fact, but maybe a few weeks
after it started, which of course makes the entire difference in making sure it doesn't negatively
impact an election or a democratic dialogue. And it's interesting because after many high-profile
decisions, including the removal of Trump's account, we can see that there's actually little
agreement on who needs to do what, right? What are the set of responsibilities of social media
companies? What are the set of responsibilities of web hosting providers, app stores? We can see that
Facebook is now referring the Trump decision to the oversight board. So I think there's an
acknowledgement in general that the solution set goes beyond remove or let it be online and that
those are actually difficult decisions that may not scale the way we want to. Why is it hard to make
some kind of definitive statements right now? I mean, even just saying we're still processing,
that's actually I think worth digging into because why is it the case that we're still processing?
I feel the same way. I mean, right, some huge
tectonic, plate-shifting actions have taken place.
A platform on the internet, Twitter, banned the president of the United States and took
unilateral action.
Maybe it would be helpful also, though, to take people back to this election 2020 partnership
that you were a part of along with several other organizations.
There you are, and there's this kind of virtual war room where you guys, I think of like the
I Love Lucy Chocolate Factory, where she's there and there's all the chocolates going by
and she's, you're trying to catch the chocolates as they go by.
And at some point, it just goes by too fast.
It does not end well for Lucy, but in this case, how
would you judge the success of this partnership? What was it, you know, the situation of how this
came about and when did it start? So yes, essentially it's a war room with four different institutions
who came together and who all decided to do rapid response monitoring together for the
duration of the election. I'm in general against the overuse of military language when we talk
about peacetime. And so we have called it the peace room. And essentially, people were doing
24-7 shifts together with our colleagues, the four institutions. The four institutions
were Stanford, the University of Washington, the Atlantic Council, and Graphika.
And this really came together quite quickly out of necessity and was really an ad hoc partnership.
And so we're doing a lot of work right now to share the conclusions of everything we've seen
and share lessons learned from what could be replicated in this model.
The field of researchers who look at disinformation is growing, but it's still a small field.
And we all agreed that it would be quite convenient if we were not all going down the same rabbit
holes for the election and if we could de-duplicate a little bit more and divide and conquer together.
The second thing is while we wanted to deduplicate, we all think that rigor is extraordinarily important
and the ability to check each other's findings. To do that as quickly as can be is something that we
really wanted to do. And when you have a few hours to come up with an analysis, you're not
going to go through a peer review process. So the next best thing is to have a trusted set of
colleagues who can all look at it together and poke holes in it. The third
thing that we wanted to do is a little bit of collective bargaining. Being a handful of researchers
is great, but we know we wanted to interface with the platforms, and we know we wanted to interface
with election officials who are a really important part of this ecosystem when you're trying
to tackle disinformation targeting an election. And so we created this gigantic system where we could
all talk to one another and investigate incidents together with a ticket system. And by doing this
together, these four institutions, we also had a little bit of collective bargaining power, right? When we
investigated an incident and thought that it was
violative of a platform's content policies
we were able to come together and say,
hey, Facebook, Google, TikTok, Pinterest,
or whomever, we're giving you visibility onto something
that we're investigating and we think that it should be taken down.
And so having this collective bargaining power was definitely part of it.
And finally, we wanted to create a little bit more transparency
on all these decisions that we knew were going to be made in a very ad hoc manner.
Platforms have standards and policies. They tend to apply them. But we also know that in moments like an election, they make a lot of ad hoc calls. And we've seen that, I think, more than ever in 2020. And so having this record for us of what we investigated, what we escalated to platforms, how they responded, and what we saw after that was also going to create essentially a database for us to look back on and say,
hey, what really happened during this election cycle?
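To make that workflow concrete, here is a minimal sketch of the kind of shared incident ticket such a partnership might keep. The field names, statuses, and methods are illustrative assumptions, not the Election Integrity Partnership's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

# Hypothetical ticket record for a cross-institution monitoring effort.
# Field names and statuses are illustrative, not the EIP's actual system.
@dataclass
class IncidentTicket:
    ticket_id: str
    narrative: str                       # short label for the claim being tracked
    platforms: List[str]                 # where the content was observed
    reported_by: str                     # which partner institution opened the ticket
    opened_at: datetime
    escalated_to: List[str] = field(default_factory=list)   # platforms notified
    platform_response: Optional[str] = None                 # e.g. "removed", "labeled", "no action"
    status: str = "open"                 # open -> escalated -> resolved

    def escalate(self, platform: str) -> None:
        """Record that the incident was flagged to a platform."""
        self.escalated_to.append(platform)
        self.status = "escalated"

    def resolve(self, response: str) -> None:
        """Record how the platform responded, closing the loop for later review."""
        self.platform_response = response
        self.status = "resolved"
```

Keeping every ticket, including the platform's eventual response, is what turns ad hoc escalations into the kind of reviewable record Camille describes.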
I think what's interesting we're able to get here is, you know, in 2016, we had nothing.
I mean, we had almost no researchers looking at this.
We didn't have the platforms viewing it as their responsibility.
We didn't even have governments or three-letter agencies who were actively monitoring, you know,
what anyone was doing.
And by contrast now, it's not as if Facebook and Twitter have these automatic tools
that just will automatically find everything and solve this.
And so however big this peace room, instead of war room, is, it's much bigger in the United
States than it is for
Ethiopia or for Myanmar or other places like that. And yet the rules and laws that the platforms are
hopefully building out come ad hoc, in reaction to the things that they're seeing. So it's kind of like
you're building up a constitution for how you want to run your digital country in response to
things that are going wrong. So like you've got these little explosions. You're like,
okay, I guess we need a law for that. I guess we need a law for that. Is that kind of how you would
describe it or? It's exactly the right question, right? Like how does this scale and how can you apply
this fairly across the globe. A lot of people pointed to the fact that the idea that elected
politicians turned to social media to incite violence is not a new idea. It's fun to be a non-American
researcher in that space. I'm French. And so I tend to see when we become like very, you know,
US-centric. It's absolutely something that has happened for many, many years around the world and
that has been thoroughly documented by researchers. And a lot of these researchers rightly so are
asking, why did these platforms take action in the U.S. and turn a blind eye for so many years
when this was happening all around the world? Again, the idea that politicians turned to social
media to incite violence is something that we've been dealing with for many years. And specifically
in places like India or in Brazil or some of these other countries where very similar patterns
were taking place. Turkey. Yeah, we're really, really not lacking examples here, unfortunately.
You know, weirdly enough, one of the first research projects that led me to look into troll farms
was actually focused on this idea of patriotic trolling, the idea that elected leaders turned to
social media to incite hate mobs against their critics, against journalists, against opposition
leaders, against human rights activists, and that some of these hate mobs that they incited
using social media were real people, and some of these hate mobs were fake accounts, bots, and trolls.
That was research that I did with a coalition of really smart people, including Global Voices and
the International Press Institute.
And that was, I think, back in 2016. It was very clear already then that this was happening all around the world.
It was not particularly hard to find case studies where elected leaders turned to social media,
to incite violence against their critics.
And this was happening in what kinds of places, in what countries, just to give people an example?
We had Maria Ressa on this podcast before.
That's right.
So actually, Maria and I worked hand in hand on some of these case studies.
It was a very disheartening moment because she and I were research partners
and we were studying the case study of this happening to an elected senator, a politician.
And I remember the day when she called me to say, hey, it's
happening to me now. I was very worried for my friend, Maria. You know, her optimism, you know,
her courage. The only thing she was excited about was being a case study herself and the data
collection and research opportunities that this offered. So, yeah, that was back in the day,
Maria starting to be herself, the target of these relentless campaigns of online harassment.
Again, some of them with fake accounts, some of them with foreign
interference campaigns too, and then some of them eventually with real communities of people.
I think we don't talk nearly enough about the fact that when you use fake accounts, when you use
inauthentic amplification, the end goal is to create and curate the community of real people
that will continue the fake movement that you have seeded. And because for many years, Silicon Valley
was not particularly paying attention to this threat, in many
places this has succeeded. We see many targeted harassment campaigns that started very inorganic,
very manipulated. But because they were relentless and because they were allowed to carry on for
years, at the end of the day, you do find that organic communities picked it up. It's kind of like
a trellis. At some point, there's a real plant on it and you can remove the trellis.
But the plant is still there.
And that I think really is what I tend to call
our disinformation debt.
The fact that we turned a blind eye to this for so long
and that in some cases, this has really managed
to create and architect real communities
that were willing to continue this harm
and obsessively continue, notably to harass
or to spread disinformation on specific topics.
So we're at the beginning of 2021.
Loads is happening on many fronts.
It seems that in some ways, content moderation has effectively eaten up many of our conversations.
And, you know, reflecting on things on which I think we've turned a corner, I think of four things primarily.
And those are, I think, ideas that were self-evident for the research community who had been looking into these for a very long time,
but hadn't really made it to the mainstream and that I think now are the basis for these discussions.
The first one is that disinformation is not just a problem of fake accounts, of bots, of Russians and of trolls, and can absolutely be a problem of authentic actors, real people sharing their genuine opinion.
When we look at the disinformation around the idea that the vote was stolen and that the election was stolen, absolutely this was driven by real domestic actors and absolutely this was driven by sort of verified what we call blue check disinformation, people who genuinely believed what they shared and had a real platform because there were real political influencers.
Sometimes, of course, as we talked about, elected politicians. So disinformation is, I think, in nobody's mind,
just a problem of fake accounts and bots and trolls and Russians.
That doesn't mean that we didn't see foreign interference in this election cycle.
We've seen at least 12 different foreign interference information operations targeting the
U.S. 2020 election, but we definitely saw the importance of real people engaging in these campaigns.
So we did see 12 different, what, countries or actors?
What kind of actors are we talking about just to make sure, even though they're not the primary focus here?
Yes, absolutely.
At least by our account, the campaigns that we examined and sort of investigated included at least 12 different
instances of operations which were using fake accounts and can be tied to foreign actors in order
to target the U.S. 2020 election. They were primarily coming from Russia, from Iran and from China
in this order. And I think it's important to highlight that none of these campaigns were
particularly effective. They started quite early. The first foreign interference campaign
that we detected was back in December 2019. It was a Russian campaign. And so it was sort of
scattered over the entire year. But in general, they were quite ineffective campaigns that were
detected early and on which most of the platforms took immediate action. So in other words,
they did not particularly shape or impact the political conversation at all. And how do we know
that? Because people obviously are so used to the narrative that Russia or Saudi Arabia or China
and these big, you know, scary boogeyman of these foreign countries are certainly investing a lot of
money in trying to influence our elections. But this is an interesting claim that we believe that
this was not very influential in the 2020 election. Yeah. So we know that because of good research,
right? While it's still difficult to assess what is and what isn't effective and impactful when we
talk about information operation, we know the scale of these campaigns, right? So we know, for instance,
that in general they had very little engagement, right?
Those were accounts that people didn't really follow, didn't really engage with, they didn't
really share these messages. We also know that they have failed to organize offline events.
And we can compare that with the large-scale analysis that we've done, notably with our partners
at the Election Integrity Partnership, that looks at the disinformation election narratives and how
they spread, and we can absolutely see there that it was mostly spread by organic domestic actors who
were sharing and resharing that information.
So we can actually contrast the impact of some of these foreign operations with the domestic
conversation.
So the number of accounts, the fact that they didn't get much engagement, they didn't
organize real world physical events, that kind of thing.
And the fact that they didn't have followers matters, right?
So something, for instance, that was key to the success of some of the foreign interference
campaigns in 2016 is they had managed to build a real audience of actual
domestic communities who were serving as sort of like the relay for their message, right?
The idea of a foreign interference campaign is that you want to pass the
baton to real users, authentic people who can sort of carry forward your operation.
And this we didn't see.
Okay, so you were getting back to the first thing that's distinct as we turn a corner into
2021: the fact that many of the purveyors or spreaders of misinformation were actually
just regular domestic Americans doing their thing.
So regular, sometimes very high-profile influencers, as we talked about a lot, elected politicians,
and so not-so-regular, really, blue-check influencers too, and the role that they play in spreading
disinformation, I think, was very, very apparent. What kinds of examples of information and also
accounts? Yeah, that's a great question. So let me take a step back and talk about the different
types of disinformation that we saw in the election. There were clearly sort of phases, right? I remember
the day of the election, we saw a lot of scattered incidents. People saying, hey, I'm at this
poll location and I'm seeing this incident. Or, you know, I've heard rumors that people were using
Sharpies in this way, or I've heard whistleblowers say this and that, X, Y, and Z.
And that also was reshared, right? So, like, fake testimonies, fake whistleblower
testimony is something that we saw a lot. And then at some point, all these
sort of like scattershot disparate pieces of disinformation kind of coalesced into broader
narratives. A good example of that is what happened around Sharpie Gate. Initially, you have a few
pieces of content around people saying like, well, I think that people are using Sharpie in this way
or in that way. And then these pieces of content are generally taken down by the platforms or fact
checked by the media. The election officials themselves say, hey, we actually have clear lines
on how we use and don't use felt tip pens.
It was a very interesting, very in the weeds conversation.
And so a lot of this content goes down.
But because it has solidified into a narrative,
all of this gets sort of like a second wind, right?
Gains new traction.
And now there's a thing called Sharpie Gate.
And that sort of like carries it forward again.
That's sort of the second phase.
And in that example, didn't Sharpie Gate
start in Chicago and then take on a totally different meaning
when it jumped to Arizona, and it was claimed that some kind of same Sharpie situation
was happening there? Yeah, it became, you know, it became a bit of a meme. People started saying,
like, yes, we're also seeing this anomaly here and there. And this is what I mean by becoming a
narrative. It became more than just single incidents, right? It became almost a conspiracy.
There was this newfound idea that organized actors would use the Sharpies to steal the election in that
way. So that's what I mean by it gives a second wind to a lot of content that had already been
analyzed and fact-checked. And people had already said, well,
this isn't accurate. This did not happen. This video is not true, for instance. And after this
sort of like isolated incidents to narratives moment, you have a third moment where it becomes
broader than the narrative. It's broader than the sharpie gate. It becomes a movement. This is
sort of like where you land in the Stop the Steal territory. And a movement is really people coming together
organizing for action. This is where, for instance, we saw this viral Facebook page that Facebook
ended up taking down fairly quickly, actually, where people were organizing for offline
violent actions, saying, we need to take matters into our own hands. We need to go and see on the ground
what's really happening. And at this stage, you're beyond the isolated incidents. You're beyond
the different narratives and the different conspiracies. You're really in a movement and people
who are coming together to really, like, take matters into their own hands. I remember that during the time of the
election, there were some rising threat indicators inside of Facebook. I forget what they call it.
I think it's called violence and unrest indicators. And there was a 45% increase in election-related
violence and unrest, however they measure that. Do you have any insight into kind of how the platforms
were looking at these trends? No, I don't. That's a good question for Facebook. I think that points
to an interesting question, which is in general, there is a big asymmetry of information between what
the platforms you're seeing and what researchers can see. And there's, of course, also this wide
range of categories that platforms are using to talk about these different types of threats.
Each platform is going to use its own categories, right?
They're naming these categories of violative content differently, differently than researchers,
differently than each other.
So no, I really can't tell you what is it that they're looking at.
What does this category mean in their own book and what indicators they're using to measure it?
Now let's turn to number two.
I think it's been difficult sometimes for the research community to convince
people that disinformation can have real-world impact. It's been the same for sort of meme culture
and hate speech. I think at times people have said, well, those are things that just happen on
the internet. I think we've turned the corner on that. I think that 2020 and the beginning of
2021 have really demonstrated that what happens there can have really deep impact offline and on real
life. Those impacts can be targeted harassment campaigns, right? So we've seen firms and people being
caught in conspiracies in a way that really focused hordes and crowds of people against them,
posing real questions for their safety. We've seen, of course, that those movements can lead
to large-scale, organized violent action. It's really the story of January 6th. We've also seen
that this can lead to financial impact, stock manipulation. So I think that that's a 2021 thing.
We've turned a corner. I think people now all agree and understand that these phenomena do
shape real life in meaningful and significant ways.
And now if we compare that to what it was like before, so, you know, a lot of people thought
through 2016 that these are just a bunch of people who are having fun on Reddit.
I mean, just posting these memes of Pepe the Frog, and this is pretty harmless stuff, right?
These are just memes.
Memes don't hurt people.
These are just the sensitive liberals, the snowflakes who are worried about the communication.
And I think obviously January 6th is kind of evidence that this buildup of people's deep-seated
beliefs, especially the belief that something as significant as the election would genuinely be stolen.
If you genuinely believe that, that your rightful vote and your rightful majority was literally
denied its rightful democratic outcome, if I believe that I might show up outside on the streets,
certainly at least to voice myself, and it wouldn't be so far to say I would show up at the
Capitol. Now, would I storm it and break into it with zip ties? No, but I think it's important to just
expand this out a little bit. I think that all of our colleagues at the Election Integrity Partnership
quickly understood that disinformation targeting the U.S. 2020 election wouldn't stop in
November. And so we had all prepared and planned to continue doing this in December and to continue
doing this for as long as was necessary, even until January 20th, if needed. And so I think in many ways,
this is the scenario that we had planned for and people who were closely tracking the evolution
of these false narratives that the election had been stolen
knew how violent this narrative was turning, knew that there were efforts to recruit people, knew
that the mobilizations were being organized. I think it's fair to say that you never can really
predict with accuracy how things will evolve. A lot of that depends on the factors on the ground
that nobody really knows, like how things evolve on an hour-to-hour basis, but we definitely
knew that the seed of that was there. So that was part two.
I'm hoping that the third thing we turned a corner on is this idea that design matters in the
interventions that we're going to consider to tackle mis and disinformation.
I think for a long time, the debate on what to do about mis and disinformation was very
dominated by what is the content that must stay up and what is the content that must be removed.
And I think we've turned a corner on that.
We've seen major platforms like Twitter create design changes to their own services to better
tackle disinformation and misinformation in the election. So for instance, they changed the rules
around the retweet button. That's something that we hadn't seen before. Similarly, I think when
people are engaging in the what to do about it question, we see a lot more conversation around
the role of interoperability, the idea that users should be able to play with their own data,
tinker with these platforms, create their own rules and filters, and really go and create alternative
platforms to design other possible futures. I think that's a very welcome corner to have turned and
I'm quite pleased that we can now have debates on how to tackle miss and disinformation that go
beyond what should be up and what should be taken down. So when you say that they change the design
of retweet, what we're talking about here is it used to be you could just one click retweet and then it
would just share it to your feed and then Twitter changed it so that when you hit retweet,
it would force you to actually say and fill in some number of characters of text that I think
you were not allowed, I don't think, to just hit retweet again and just to bypass the what text
do you want to add here. You actually had to type something. Is that right? That's right. The other
thing that's interesting is that after that Twitter released their own study on whether or not this
actually worked. That of course is very welcome, right? Those are design interventions and the platform
itself studies, okay, did this intervention actually accomplish its goal? That's great. The limit
of that, of course, is it should really not be the platform scoring their own homework. So the
better version of that is one in which you can have independent researchers meaningfully audit
the actual impact of these design changes on their intended effect. So I think really fantastic
step forward, design interventions, sharing research on whether or not these design interventions
accomplish their goals, sharing this openly. The next step here is to enable external
researchers to do a meaningful independent audit on whether or not this worked. But again,
like a great step forward here, with an acknowledgement that design plays an essential role in how we can
tackle mis- and disinformation.
add friction into the resharing process. And it's the immediacy of mindless behavior that can
lead to mindless and even harmful results. Are there other examples of design changes that
stick out to you? They don't come to mind. And I think that's why I'm
particularly excited by the work that's being done on interoperability because I would like to see
more alternatives, more platforms designed with different settings, different filters, different options
for users. And I think that welcoming those design changes has to come hand in hand with
pushing for more interoperability and more possible futures here. Yeah, I agree that one of the
fundamental challenges that we have is that people can clamor for a long time that we need to change
the design, we need to change the design. But if Twitter or Facebook say, okay, fine,
what do you want us to actually do? What do you want us to try? And the
problem is that we don't have the information that would inform where. I mean, we have insights.
We have views from the outside about where some of the harms are showing up. We could certainly
point attention to certain areas, but it's not like the perfectly designed solution can be
tossed to them from the outside because they're the ones with the unique insight to how this works.
And they have the data on all the experiments they've run in the past. I mean, it's important for
people to know just the number of A-B tests and experiments that are run.
You know, sometimes Facebook or Instagram will run tests in New Zealand for a few months
or in Canada for a few months.
And they'll show, I think they did this with hiding the number of likes that you got
to see what effect it would have on teenage mental health before they roll it out for
everyone else.
But we don't really have a SimCity for sim social media.
And I think that's kind of what we need.
Like in the same way that I remember being a kid and being able to say, okay, what happens
if I increase the tax rate, if I build more of these kinds of power generators, and then
suddenly you get more wildfires because you didn't see how the generator blows up and it causes
some other problem. There's no way for people to tinker with alternative designs for social media
and social network graphs. And I feel like we need complex systems modeling, agent-based
modeling for, hey, if we were to change the dials, the friction tax rate, how would that
change the sharing of some kind of carrier rate, base rate, of false narratives that are spread
throughout that society? How would false narratives, which then accumulate into false biases,
almost like people are waiting and looking for confirmation on a foundation that's wrong.
How does that change the kind of later foundational perception of false narratives?
And we need better ways of modeling this.
And I feel like there's no attempt to create this kind of SIM social media-like environment,
a sandbox where people can really try.
I mean, if I were Stanford University or MIT,
I know there are many centers that are sort of spinning up on different forms of constructive dialogue
or better social media, humane social media.
We're all in the same kind of project and team here.
But there isn't nearly enough experimentation on allowing us to simulate different features that would do this.
And also have real human behavior be populating what would happen in that network.
Because if you do have a sort of sim social media or sim Facebook, you need real people to be posting about their kids, their grandkids, their baby photos,
and then also some kind of Russian disinformation narrative.
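For what it's worth, here is a minimal sketch of the kind of agent-based experiment being described: a toy network of accounts in which a single "friction" dial scales down the probability that an exposed user reshares a false narrative, and the output is how much of the population ends up exposed. Every parameter and function here is an illustrative assumption, not a calibrated model of any real platform.

```python
import random

def simulate_spread(n_users=1000, avg_followers=20, reshare_prob=0.3,
                    friction=0.0, n_seeds=5, rounds=10, seed=42):
    """Toy agent-based sketch: how far does a false narrative spread when a
    'friction' dial scales down the per-exposure reshare probability?
    All parameters are illustrative assumptions, not platform data."""
    rng = random.Random(seed)
    # Random follower graph: followers[u] = users who see what u shares.
    followers = [rng.sample(range(n_users), avg_followers) for _ in range(n_users)]
    effective_prob = reshare_prob * (1.0 - friction)

    exposed = set(rng.sample(range(n_users), n_seeds))
    sharers = set(exposed)  # seed accounts share unconditionally
    for _ in range(rounds):
        new_sharers = set()
        for u in sharers:
            for v in followers[u]:
                if v not in exposed:
                    exposed.add(v)
                    if rng.random() < effective_prob:
                        new_sharers.add(v)
        sharers = new_sharers
        if not sharers:
            break
    return len(exposed) / n_users

# Example: compare exposure with and without added reshare friction.
for friction in (0.0, 0.5, 0.8):
    print(f"friction={friction:.1f} -> exposed fraction ~ {simulate_spread(friction=friction):.2f}")
```

It is nowhere near the complex-systems modeling being called for, but it shows the shape of the experiment: turn one design dial, hold the population fixed, and measure the carrier rate of a false narrative.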
Yeah.
And I think that a lot of that is also because the data is not portable, right?
Like your data is stuck on Facebook or stuck on Twitter.
It's hard to create a layer of experimentation that can meaningfully plug into it
to allow you to see what another window into social networking would look like.
That's also a feature of the very centralized nature of social media.
But that goes back to something that you were saying a few minutes ago,
which is in this election cycle, we've also seen for the first time
a meaningful mainstream role of alternative social media platforms.
Now, the state of current alternative social media platforms is not particularly enticing for a user like me, right?
Like, I'm not particularly vibing with the proposal that Gab and Parler are putting on the table.
I don't particularly want to join these communities.
But it's interesting to see that we are seeing these alternatives emerge as, again, like alternative to the main social media giants for the first time in a semi-meaningful way.
They have absolutely impacted and shaped some of the online debates that we've seen in 2020.
So let's talk about that.
So now you imagine this, I Love Lucy Chocolate Factory.
We've got all these different narratives that are spinning up in real time.
We've got Twitter.
There's some new false narrative spreading out of Chicago, the Sharpie Gate thing.
It starts to show up on the dashboard of someone in the peace room and you start to see that go by.
I am imagining these sort of meme chocolates that are going by in the conveyor belt and you're trying to spot them.
It's quite apocalyptic, the picture you're painting, really.
It was much nicer than that in the peace room.
Oh, well, I mean, chocolates aren't so apocalyptic, but I mean, I hear what you're saying.
But now you imagine the alternative social networks.
So now I'm imagining there's like, if you imagine the inputs to the chocolate factory,
now we've got Parler inputs, we've got Gab inputs, we've got Twitch inputs, we've got all these other
networks.
And we still have those same hundred researchers. As a computer scientist, thinking about
this from information theory, you know, it's sort of the classic pigeonhole principle: as you
increase in scale, so do the number of possible dimensions and inputs of harm,
while you only have a certain amount of human judgment and capacity and even expertise
among those who can do the queries or even, frankly, investigate, as you said, each of these
incidents or possible narratives. It starts to feel like it gets overwhelming.
If we specifically focus on the alternative networks, how did that get incorporated into
the process that you're talking about?
It does get overwhelming and it gets overwhelming for the platforms too.
I mean, it was quite fun, honestly, to see the statements from those alternative platforms
when they started to be cut off by their providers because of their lack of moderation standards.
They all said, yes, but online moderation is hard.
Content moderation is hard.
And they're right, of course.
And you kind of sort of see the different stages of maturity on content moderation.
So initially they said, yes, but this is really hard, which is true.
Then we heard, yes, but we're going to build algorithms, which is laughable, right?
Like a lot of these very difficult questions can't be addressed by algorithms.
Then we hear, well, but doing it right will be very expensive, which is also true.
So I think there's a sort of like general reckoning with the fact that content moderation is a difficult space.
It's almost like the four stages of denial, what is it, of bargaining.
It's exactly the stages of content moderation, discourse maturation.
And specifically, just to make sure people have the context,
we're talking about Parler and Gab.
Was there another one? Where Apple and the App Store and Google's app store, the Play Store,
were basically threatening, and then later did, take down their platforms, Parler specifically,
because they did not have the requisite content moderation capacities
to deal especially with some of the known violent, inciting content.
Now, I want to make sure we play both sides of the perspective here,
because there are those who are fans of Parler
and believe that Twitter is a big blue, political, left-oriented,
secret big-tech power grab to suddenly become a tool for the Democrats.
People on the right say, hey, Parler is the only place that we can be
that actually will not filter our speech,
if I'm trying to steelman Parler's perspective here.
So then they would say, also, we do have content moderators because they did.
But then a counter-critique could be if you have content moderators who don't think that the content that's actually inciting violence is in fact violent,
then it's like who regulates your regulator.
Let's imagine some hypothetical social network in which you have white nationalist content that's basically saying we need to build a new neo-Nazi state.
And then the content moderators for that social network are also neo-Nazis.
That's not going to be a relevant content moderation mechanism.
And I'm not trying to do some kind of simplistic boxing-in of Parler in that way, although
that is a critique that some people levy.
You're giving me a great segue into the fourth observation, the fourth corner that we definitely
turned in 2021.
I think it is that those decisions are difficult.
What to do about it is not straightforward.
And people meaningfully disagree.
I think there's been really strong and interesting disagreements on whose job is it to participate
in making these decisions?
Is it the App Store's role to go and remove platforms that they don't think have standards
that they like?
Is it web providers' job to do this?
What happens when Amazon Web Services gets involved?
Is it simply a social media role?
And so I think that in general, who gets to make those decisions and what are the right
set of decisions is now something that everybody understands to be difficult
conversations. We're seeing this, of course, with the multiple reactions, including global reactions
to Trump's account removals across platforms. And we now know that Facebook will actually
refer the case to the oversight board. Now, this week, we've also seen the first decisions
of the Facebook oversight board. We've learned that they're not actually a rubber stamp court.
They have shared five cases. And on four of these five cases, they're actually overturning Facebook's
own decision. And they're saying, like, no, we disagree with the decision that was made here.
So it might be the case that the Trump-related decision gets overturned. In reality, we don't know,
but I think we've all turned a corner on understanding that these are actually complicated
discussions. There's no easy way out, and there are multiple perspectives to be taken into
account here. So, yeah, no, I think this speaks, though, to just a theme that you and I started with,
which is, everyone is sort of asking, okay, Camille, you know, Tristan, what are your opinions on
this in a way? What are the quick answers that we need to solve these problems? And there's this
kind of hesitation and confusion because the amount of things we need to be talking about and
understanding and even keeping up with the news, I mean, Facebook and Twitter are changing
their policies on a daily basis. Twitter launched something called Birdwatch, which is a sort
of bottom-up crowdsourced content moderation platform where instead of their trust and safety team
or their content moderation capacities being leveraged,
they're actually asking individual Twitter users to, bottom-up,
categorize and flag the kinds of content that have a problem.
And it's just it's so hard to stay on top of this moving ship,
which is why, at least in this podcast,
we try to focus on the underlying systemic dynamics.
Like, what are the generator functions for the harm that we're talking about?
And so long as you do have the sort of three billion Truman shows
where each one has unchecked virality
and self-confirming biases,
where the more you click, the more we give you more stuff like that,
just that will tend to produce these certain kinds of outcomes.
Because otherwise, we're just going to make a list and describe, you know, each
whack-a-mole kind of landscape forever.
And by the time we've described the landscape and then Twitter has updated and Facebook
has updated their policies six more times, we won't have actually changed anything.
How do you stay kind of sane and feel optimistic?
I mean, do you feel optimistic about the kinds of changes that we are making, given
how sufficient they are compared to the first derivative of the growth of the harms that we're
talking about?
I do.
And I think it goes back to the conversation we're having, right?
I do because I think that things that were facts for the research community, things that
we knew to be true, but that were difficult to discuss and that hadn't really landed in the
mainstream conversation now have landed.
And I think that starting 2021 with an understanding that, you know, real people also
participate in disinformation, that design has an
important role to play, that those decisions are complicated and subtle and not straightforward and
that all of this really does have real life impact is such a better place to make meaningful
progress. And so I'm optimistic because I think that things that, again, have taken quite a long
time to land are finally landing. And because I think that we do have a diverse set of expertise
at the table, right? When you say, oh, you know, let's talk about the oversight board, when I have
two colleagues who've spent the last two years really digging into the details of how's that
going to work, whether it's a meaningful court or not, what are the questions that we must
pose? And I'm grateful that we're going to be able to leverage these diverse bodies of expertise.
People have been working on these issues. And I think we're now like ready, ready to see
where the rubber meets the road, really. So I'm optimistic. But that's also, like, I think, a
character trait at this stage. I tend to be fairly optimistic in general.
I think I'm optimistic too because things that used to be very difficult are now much easier, right?
We were talking about foreign interference.
I remember last time around the last, you know, U.S. presidential election in 2016,
it was difficult to meaningfully tackle foreign interference.
People didn't agree.
It was a little bit of a taboo.
We didn't have a sophisticated understanding of the different campaigns, of the different actors.
We didn't have the right policies to apply.
We didn't really have a lot
of expertise in order to do meaningful detection. We've gone so far. We're now in a much better
place. It doesn't mean that we are in a perfect place, of course, but it does mean that I can see
progress. And I think that we're making meaningful progress. And again, it doesn't mean that there are
not very complicated problems that continue emerging and that we still have to solve. But in
general, I think that there's room for optimism. Certainly there's much more capacity for dealing,
especially with foreign operation threats,
and those are being caught earlier
and being tracked earlier and shut down earlier.
I feel like every week there's some kind of report
from Facebook's integrity team saying we shut down
a Russian information operation.
We shut down a Vietnamese information operation.
We shut down.
And you're just seeing these happen more and more often,
which means that more resources are there.
It's certainly better than a world in which
this was literally not handled at all.
And there was no one shutting it down.
However, you know, we're kind of grading our own homework again
and we don't know what we don't know.
So we don't know how big some of these things are that we're not seeing.
What keeps you up at night about what's not been sufficiently focused on or addressed?
Yeah, I mean, just to go back on the Facebook point, I think it's true that Facebook here did sort of an industry-leading job on information operations.
They're likely, honestly, the ones who take down the most of these campaigns.
Perhaps it's because they're the most affected.
Perhaps it's because they've done the biggest investment there.
But they're by no means the only ones, right?
I think that what also gives me hope and optimism is I see really good independent researchers
uncovering this.
I see really good investigative journalists uncovering this.
And I think that having a meaningful field of people who are looking at this from different
angles here is really important because otherwise we tend to only focus on a handful of things
that are, again, like very Silicon Valley centric, very US-centric.
And we know that these threats are global and are continuing
to target civil society around the world. So I think we need to do more work. I think that a lot of
the information operations that we're continuing to detect are, by some accounts, the ones that we're
looking for. And I think we definitely need more attention to make sure that the standards we set
for ourselves apply globally. It was actually one thing that came up in pre-work for this interview,
watching a discussion you and Alex Stamos had had in some earlier conversation. For those
that don't know, Alex Stamos used to be the head
of security, I believe, at Facebook prior to leaving in 2018. And he was talking about the thing
that he would most like to see is that these policies that Facebook has done such a better job
recently in implementing in the United States and in Europe are not the policies that
govern the global South. They're not the policies that are upheld in the countries that have
much less attention, many fewer researchers, many fewer ProPublicas and those doing the kind of
assessments of these different problems. Yeah, it's not the policies that matter. It's the enforcement,
right? The policies tend to be global, but who is paying attention and making sure they're
meaningfully enforced is really what matters. And here, we really do have a lot of work ahead
to make sure that Silicon Valley actually treats the rest of the world with the same standards
and focus and care. Do you see any way that we're going to get some kind of equal broad
enforcement? Because I think unlike many people who cynically look at Jack Dorsey and say,
he's only doing this for Trump, and it's a political move.
I actually don't think that's the case.
I think that they would like to do equal enforcement,
but there's no technical or even human resources-scaled way to do that.
I'm smiling because I think that what's going to happen to high-level elected politicians
is really interesting here.
I think in general, people say it's difficult to enforce global standards at scale for many
reasons.
For elected high-level politicians, it's actually not, right?
There are not that many presidents and country rulers that are on a given social media around the world.
That is actually a very specific set of accounts on which you can have a policy which you enforce globally.
Now, are we going to see that?
I don't know.
We'll see.
But here it's kind of like where this argument stops.
You can say how difficult it is to detect hate speech or dangerous speech at scale and in many different cultural
contexts and languages, and that's true. But how difficult it is to keep an eye on country
rulers who are on your social networks at any given moment, that's less true. I think it actually
is fair that you can make a policy here and decide to enforce it. If you go back to the reason
why many of these elected politicians who have used social media to incite hatred or to
share disinformation had been kept online, it's not actually
because the platforms didn't know about it, it's because they generally thought that there
was a newsworthy and public interest exception that applied. And I think that it's not a far-fetched
idea to say that someone who is ruling a country who is an elected politician, it's actually
important and meaningful for citizens to know what is it that they're saying and to see what they're
sharing on social media. Now, when this exception stops is the question we're asking, right? Does it stop
at a certain level of hate speech? But what happens after is also something we should be discussing,
because all of this content, what elected politicians, what country rulers, what presidents
have said on social media is an important, meaningful political archive. And we actually want
people to be able to research that, look into it, scrutinize that. And so we also need to
think about that transparency and making sure that we don't sort of disappear evidence of,
for instance, human rights violations as we start tackling what happens to these elected politicians.
And so here, again, if you revisit this idea that there was at some point an exception to those
accounts because they were newsworthy and of public interest, that creates the question of,
okay, what happens when you take them down? And how do you make sure that the archive of this
content that is undeniably of public interest remains available for, again, historians and
researchers and media and people around the world? And again, we'll see what social networks do
here. Camille, thank you so much for coming on your undivided attention.
Tristan, thank you so much for having me today.
Is what we talked about with Camille now sufficient to really deal with the net aggregate sum
of deranging harms that social media has placed onto society? There you are with Facebook or
Twitter and you can reach 20,000 people. That doesn't feel dangerous. That doesn't feel like a nuclear
button. If you put your hand on a big red button that has a big nuclear radioactive bomb next to it,
you sense that you're about to enact something really big. But when you post a tweet, you don't feel
intuitively, your paleolithic emotions do not wrap their heads around what you're actually engaged
with. I mean, imagine if, when you click tweet, you saw a wall of faces of every single human being
that is affected by your words. That would start to wrap around your...
Yeah, exactly. That's a way, that's a humane technology for reach because you would have suddenly
a sense of there are actual human beings that are going to be influenced by this thing that you're
about to say. Like if you were in a stadium and you could see the 60,000 eyeballs that are on you
when you're tapping the mic sitting in the center of a gigantic football stadium and it's quiet
and you see your face in the jumbotron and you see the eyeballs pointed down at you and you tap the
mic twice and hear that subtle sound before you're about to say something, the sense of responsibility
or at least the sense of consequence that is present in that moment before you speak is very
different than like retweet, ha ha, oh, you suck, and then boom, off to two million people.
When I was actually at Google, this never shipped, but there was actually someone working on
the Gmail team who was just testing this with email. So you know how you send an email to a
list? And the list is just like a list. And there could be like 10,000 people in that list.
And you're about to put a message in 10,000 people's pockets or, you know, a design list
even that has 30 people. But what they did is when you put in a name or a list, it actually
showed the faces of all the people who you'd impact right there underneath the To line.
So you'd hit To, you'd type in the first few characters for the list.
Boom, you know, autocomplete returns, slots in, but then boom, you see all the faces of the people
that you would impact with that message.
If we see all those eyeballs looking at us, we feel responsibility.
If we don't see the eyeballs looking at us, we don't feel responsibility.
So that's actually a small example of, I think, closing that gap and what it could look like.
If you're trying to make a functioning brain, imagine wiring up every neuron to every other
neuron. And what would you expect except epileptic seizures? In order to have a functioning brain,
you need to have locality and specific, like, connections between sites. There's real structure
that's there. It's as if we wired up the current social media networked global brain for mass
social epilepsy, because each brain is literally advertising and broadcasting and overwhelming
all the other neurons all the time. The standard conversation of misinformation and
disinformation and conspiracies and content moderation is important, but it doesn't go nearly deep enough
into the net rewiring of civilization. And what is a humane rewiring that is actually not trying
to rewire so much, maybe? And we've had that in human civilization with radio or with television,
but we had a very small number of people who theoretically were trying to use that power
responsibly and control and gatekeep that channel. And we all know that that's led to certain
consequences and many marginalized voices that have not had a voice because of the fact that it was
gate-kept so conservatively, and that's an important failure mode. But we also don't want a world
where we fail over into social epileptic seizure and chaos. It's a very different problem
statement than content moderation as a problem statement, right? Just notice what your brain
looks for and tries to search for as solutions when we say, wiring up every neuron to every other
neuron gives us social epilepsy. What is the rewiring arrangement that would be humane or even more
empowering for humans?
It's a very, very different frame.
Your undivided attention is produced by the Center for Humane Technology.
Our executive producer is Dan Kedmi and our associate producer is Natalie Jones.
Noor Al-Samarrai helped with fact-checking.
Original music and sound design by Ryan and Hays Holladay.
And a special thanks to the whole Center for Humane Technology team for making this podcast
possible. A very special thanks goes to our generous lead supporters at the Center for Humane
Technology, including the Omidyar Network, Craig Newmark Philanthropies, Fall Foundation,
and the Patrick J. McGovern Foundation, among many others.