Offline with Jon Favreau - Does Elon Musk Want Free Speech or Attention?
Episode Date: May 1, 2022. Renee DiResta is an expert on tech policy, influence operations, and algorithms, managing research at the Stanford Internet Observatory. She joins Jon to break down Elon Musk buying Twitter, explaining his envisioned reforms and making the case that Elon Musk fundamentally misunderstands free speech on the internet. For a closed-captioned version of this episode, go to crooked.com and Offline. For a transcript of this episode, please email transcripts@crooked.com and include the name of the podcast.
Transcript
And I used to periodically ask people on Twitter, why do you think you're being censored?
This was back in 2018 when we could have these conversations.
And one thing that people said a lot was, my friends don't see everything I post.
So this question of open sourcing the algorithm, it's being processed as like you either love the idea and you're like pro-free speech and pro-transparency or you think the idea is ridiculous, which is where large numbers of people in tech have come down, because they're like, what, they're going to show an ML model
with no training data, no data? I have no idea what it would show.
I was really curious. I was like, what do people think they're going to see?
I'm Jon Favreau. Welcome to Offline.
Hey, everyone. My guest this week is Renee DiResta, and we're talking about Elon Musk buying Twitter. Even if you're not extremely online,
you probably know that on Monday, Twitter's board of directors accepted Elon's $44 billion
buyout bid. It remains to be seen if the deal goes through. But on Twitter, he's already acting like
the platform's de facto chief executive, talking about his vague plans to protect free speech while
shitposting and complaining about liberal bias.
This, of course, has generated no shortage of takes. Like, what the fuck does Elon Musk know
about free speech? Will Donald Trump get his account back? Is it even possible to make Twitter
worse? I am well aware of the fact that I promised you fewer offline episodes about Twitter,
but this is a huge story that touches on more than just the platform itself. It affects the future of media,
politics, the tech industry, and more. So I wanted to bring in a real expert. And that's why I reached
out to Renee. Renee is an expert on social media and information ecosystems. She's investigated the
Russian Internet Research Agency efforts to undermine American democracy
and advised Congress, the State Department, and more on ways to combat online propaganda.
When Elon first made his bid, she published a piece in The Atlantic titled,
Elon Musk is fighting for attention, not free speech.
Her analysis was unique in that she focused not on the merits of Elon's stated vision,
but on how he fundamentally misunderstands the debate over free speech, content moderation, and Twitter itself.
She says that the idea of Twitter as a global town square, a favorite term of Musk and Twitter executives before him, places, quote, wholly unrealistic expectations on what social media is or should be. We got together on Friday morning,
two days before air, to make sure this conversation was as up-to-date as possible.
I asked her about her piece and about what she expects of Elon Musk's takeover,
the good, the bad, and the absurd. As always, if you have any questions, comments, or complaints,
feel free to email us at offline at crooked.com.
And please rate, review, and share the show.
Here's Renee DiResta.
Renee DiResta, welcome to Offline.
Thanks for having me.
So I know that we are currently all drowning in Elon Musk takes.
Yes, we are.
But I wanted to have you on because you've spent as much time as anyone researching and writing and speaking about tech policy, media trust, misinformation, and especially algorithms, which are topics that all seem to intersect at this debate about Elon buying Twitter. I'm curious, what was your initial reaction to
the news that, at least as of this recording, the acquisition is moving forward?
I was a little bit surprised. It seemed more like it was just going to be kind of a shitposting, meme-y, will-he-won't-he sort of thing. But then to actually commit
$44 billion to it was pretty remarkable to see it go through or possibly go through.
We don't even know.
I know.
By the time people listen to this, we don't know what's going to happen.
I want to take you through a couple of Elon's complaints about Twitter.
And maybe you can talk about whether they're A, warranted, and B, fixable.
And the first is this idea that Twitter isn't sufficiently committed to free speech, which you took on in a recent Atlantic piece where you wrote, what Musk and others portray as a battle over free speech is a proxy battle over American democracy. I don't think that the idea
of free speech is in any way something that we should be fighting about in the way that we are.
We've kind of turned it into a meme. And that's the thing that I was reacting to a little bit in
that essay. For the sake of conversation, I want to hold aside completely the legalistic arguments about how it doesn't apply to private platforms. I want to really get at the spirit
of what people are asking for and what people feel. And right now, there are a lot of people who feel that they are censored in some
way on Twitter. We can talk about why they feel that way and the sort of policy changes and the
actual research that gets into whether or not that's accurate or inaccurate. But what people
are saying is they want Twitter to be a platform where they can express themselves. And right now that is coded in a very partisan perception, right?
The left hates free speech.
The right wants absolute free speech and wants, you know, Nazis shouting on every platform,
right?
And so there is almost this kind of caricature of each side's respective positions that is not necessarily rooted in any reality.
But Twitter, ironically, really lends itself to that,
right? Twitter is a place, I called it an arena. And I mean it's like for bear baiting, you know, like for factional fighting. That's what it's for. And that's
because that's how it's designed. And I really would at some point love to talk about the design
dynamics that lead to that. But right now, when we talk about free speech, one of the things that
people have realized over time is that Twitter is remarkably powerful. And that's because it captures people's attention.
And one of the things that happens when you capture people's attention, when you develop
relationships of trust, when you develop amplification networks, is you can activate
crowds of people also. One of the ways in which Twitter really rose to the public consciousness
was the way that it intersected with the Arab Spring, right? Literal revolution. This was a tool of
power. This was a tool that could take down governments. And as that began to happen,
you started to see other major entities recognizing that Twitter was a tool of power.
You started to see niche groups trying to use it to make themselves look bigger, to pretend that they were actually
a larger share of, you know, public sentiment than they actually were. You started to
see terrorist organizations come on and use the platform. You started to see foreign governments
use the platform as a tool of infiltration, as a tool of disinformation, right? And so what you
start to see is the recognition that this is a platform that directs people's attention and keeps them quite riled up, quite activated. And that is an
incredibly powerful tool for political actors. And that's one of the things that this conversation,
I think, is really about. Well, I mean, Elon has talked about Twitter as a de facto public
town square, which of course is a phrase that former Twitter CEO Dick Costolo
first used back in 2013.
You just mentioned, and you said this in the piece as well,
that it's more of a gladiatorial arena.
What do you think happened along the way?
I mean, you also said we could talk about
some of the design choices,
but was it ever possible for a platform like this to be a public square where everyone just goes and exchanges ideas and there's
free expression, everything's fine?
Or was it always destined to become an arena or was that certain design choices that were
made?
What do you think there?
I think design is actually really important here.
I wrote this little essay back in 2016.
I was reading a bunch of crowd psychology and
there's a concept of open versus closed crowds. And in an open crowd, you just have groups of people who come together. It's quite spontaneous, right? Think about a protest movement or something
like that. They come together spontaneously, versus a closed crowd, which is more the idea of people who participate in, for example, a church group, a group function.
There's a cohesion there. They see each other regularly. There's like a structure to it as
opposed to this like spontaneous activation. And Twitter really began to be this place for that
spontaneous activation because what would happen using design is you would come to the platform,
you'd open up your phone and yeah, you would see the things that you were following, but there was
also this trending feature, right? And you could click into trends,
and you could see what people who were not in your network, who you did not follow, were talking about. You could see kind of where the public id was in that moment, what
was the thing that was really captivating people. Sometimes it would be a trend around like sports,
you know, there was some big game, and everybody was like communing around the game, and how great
that was. And when the game ended, you know, that trend kind of dissipated. But people also began to realize that you could use those trends for political activation. And one of the things people realized is, you could make it trend. And particularly in 2015,
there was a whole gamification process around this. When entities began to realize that having
something in that trending feature would generate a crowd around that trend, this was where you
started to see the bots come into play, right? The bots were actually a tool to try to make
something trend. The bots were not really there to go harass
random people. They were there to get enough critical mass tweeting about a particular topic
to try to make it trend, to drive the crowd, to drive the participants to that trend. And so you
started to see, just through this inadvertent design choice, a dynamic that didn't happen on Facebook. There were trending stories there, but that was really just about some lame article that was popular, and Facebook ultimately wound up killing that feature.
So this dynamic of Twitter the arena was really a function of, all of a sudden, you could activate people around a hashtag. And that dynamic, that behavior, became so central to what we use the platform for today and how we think about it.
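To make the trend-gaming mechanic concrete: trending systems generally surface topics whose volume spikes relative to a historical baseline, not just the topics with the most raw volume (Twitter's actual formula is unpublished). Here is a toy sketch, with hypothetical names and numbers, of why a modest coordinated burst can outrank a genuinely huge topic:

```python
from collections import Counter

def trending_scores(current_window: Counter, baseline: Counter, min_volume: int = 50):
    """Toy trend detector: rank topics by volume spike over their historical
    baseline rather than by raw volume. (Illustrative only; Twitter's real
    trending formula is not public.)"""
    scores = {}
    for topic, volume in current_window.items():
        if volume < min_volume:
            continue  # ignore tiny topics outright
        expected = baseline.get(topic, 1)  # smoothed historical average
        scores[topic] = volume / expected  # 10x normal activity => score of 10
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A coordinated burst (e.g., a botnet all tweeting one hashtag) is a huge
# spike over a near-zero baseline, so it beats a genuinely popular topic:
last_hour = Counter({"#BigGame": 9000, "#NicheHashtag": 600})
usual_hour = Counter({"#BigGame": 8000, "#NicheHashtag": 20})
print(trending_scores(last_hour, usual_hour))
# [('#NicheHashtag', 30.0), ('#BigGame', 1.125)]
```

The ratio is the point: 600 coordinated tweets against a baseline of 20 is a thirtyfold spike, while the big game's chatter barely moves off its own baseline. That spike is the critical mass the bots were there to supply.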
That's, I hadn't thought about trending topics that way. The other design function that I thought
sort of makes Twitter this arena is just the RT, just retweets. Because there are
plenty of times in my own Twitter experience where I'm like, you know what, following this person is
making me mad. Why am I following this person? I unfollow the person or I mute the person. But then other
people that I like to follow will start retweeting something. And suddenly that person I didn't want
to follow ends up back in my timeline. And I'm wondering, and I talked to Ev Williams about this a couple of weeks ago, I'm wondering if just the RT function itself is part of the design of Twitter that leads you to
start seeing content that you didn't necessarily want to follow on your own and starts sort of
driving people towards certain trending topics. It is. And retweeting is one way to make something
trend. Even holding aside, you know, does it actually clear trending? Per your point, it does
get into the network that is likely to care about it through the retweet function. There's been a lot
written about quote tweeting also, and the use of quote tweeting as a tool for dunking, right? As a
tool for literally pointing your entire follower base at this account by quote tweeting them,
particularly if you're going to do it in a way that is hostile.
Right. You know, this is the tool-weapon dichotomy. Right.
A quote tweet can be really fantastic because you can drive supportive energy towards somebody.
You can say, here's a person with a really interesting thought. I love this thread. Go look at it.
I do think that there's, you know, in this tool-weapon dichotomy, kind of some function of this is design, but also some function of it is user agency, right?
If I want to disagree with somebody and I feel like, you know, intense, like I care enough about it to want to quote tweet, a lot of times what I'll do is I'll screen cap it instead so that there's not like a whole mob directed at them.
But I think that, again, there's that question of, if you want to direct
a mob, you have the power to direct a mob, right? And that is, I think, one of the areas where
the kind of tool-weapon dichotomy comes into play. And so people who have been on the receiving end
of those mobs, I think, have a kind of real fear that when we talk about making Twitter more free-speechy,
what we're getting at is more of that kind of behavior, that kind of dynamic. Twitter is a
great tool for sharing information, but the harassment angle, the kind of bear baiting
aspect of it is very much core to the platform. And many people remember, particularly back in
2015, when there was less moderation, how that experience played out for many users.
I do want to go back to Elon's sort of definition of free speech on the platform and the legal aspects, only because I think if you are not familiar with Twitter, and, you know, specifically if you're not familiar with the design of Twitter,
you might think it sounds sensible. So he said that by free speech, he means that which matches
the law and he's against any censorship that goes far beyond the law. Why do you think that's not a
sufficient Twitter policy? Well, there are a lot of things that are legal. And again, when we get to the First Amendment in its legal context, what we're
talking about is what can the government decide to intervene on, right? And there is a very, very
high threshold for that, which is what we want to see in a democratic society, right? We want to
have that very, very high threshold. Where we have the experience on a social platform, though, is
this is a community, right? And people
are coming together. This is a business also, don't forget, right? And it is not good for business for
people to feel that if they take out your app and go tweet something, a horde of angry people is
going to immediately descend upon them, right? That's not a good user experience, it turns out.
And so, you know, again, sticking with free speech as the value, holding the legal stuff aside,
I think there's also a guarantee of freedom of assembly there.
Right. The idea that the purpose of the public square, the purpose of free speech is to express yourself, to make your voice heard.
Yes, there is counter speech. Yes, there should be a debate.
Yes. You know, the antidote to bad ideas is more ideas, better ideas, the marketplace of ideas.
But what happens in the experience on the platform, again, coming back to design, is that a crowd of very, very angry people can push other people out of the virtual town square by using harassment, by using targeted bad speech, if you will.
And it interferes with the ability of that targeted community potentially to participate
in the conversation. So Twitter began to recognize this and to say, okay, that experience, A, is bad
for business, but B, using free speech to stifle someone else's free speech by directing a mob of hate at them is not in keeping with that value of free speech, that value of pluralistic participation. It's not living up to the value. And you also start to see a lot of things that,
you know, content moderators are looking at: I mean, animal cruelty videos, right? There's pornography on there. Again, there's so much stuff that falls under the rubric of speech the government would
not censor, but speech that does not necessarily lead to the creation of a healthy community. And
I believe that's actually Twitter's term for it now: healthy conversations. How do we think
about healthy conversations? Well, and you mentioned some of the stuff that was happening
back in 2015. And even before that, one of the examples that I've heard you raise is, I mean,
Twitter was being used by ISIS for recruitment. And a lot of what ISIS was doing on Twitter to
try to recruit people was technically legal, but certainly dangerous in such a way that Twitter decided to take action, right?
I think some things get memory-holed, in part because social media captures our attention and we pay attention to the shiny thing that is now.
And if you haven't been following this conversation or watching these trends for the last seven years, you don't necessarily have that through line.
What was happening on Twitter in 2015, you can actually go back if you go to
Google search, like set the time of results returned and pull up the 2014-2015 timeframe,
you see an organization growing a virtual caliphate. That's what they call it. They call
it the virtual caliphate. This was not secret disinformation. This was overt propaganda. The
black flag was everywhere. If you followed
one jihadi account, it would refer you to other jihadi accounts. Again, inadvertent design,
Twitter's own recommendation engines would push people further into that community if they engaged
with the content and, you know, and participated in that conversation. And so you started to have,
you know, there were the kind of actual jihadis, right? But then there was this cluster of, amplification fanboys is, I don't know, the colloquial term for it. And they were the people who would not necessarily declare allegiance and do something that was like an active engagement with a terrorist organization. But what they would do is serve as a kind of boosterism for it. Look how cool this is. Look how bold this is. Man, they're really kicking XYZ's ass, the U.S.'s ass, et cetera, et cetera.
You had the completely, unfortunately, quite lame response of the U.S. government to this, which was to put out counter tweets with hashtags like think again, turn away.
But because of U.S. law, they had to be clearly attributed to the State Department, which was the entity that was putting them out.
And so they turned into a source of kind of like mass mockery as if the U.S. State Department tweeting at people who were, you know, terrorist adjacent or thought this was like a really cool thing was somehow going to dissuade them.
And so that itself became kind of a whole, you know, sub-dynamic. And you just, you had
this network and it was growing. And in the articles about it at the time, you see debates,
one man's terrorist is another man's freedom fighter. This was soon after the Snowden
revelations also, do we want US government deciding who can and can't speak on Twitter?
Do we want Twitter taking takedown notices from US government? And this is a
real conversation that was happening in 2015. I was doing some work on it at the time around
October 2015. And the tone didn't really change, unfortunately, until the Bataclan attack, until there was literally a massacre at a concert hall in Paris. And then people started to wonder, maybe this vocal support
is leading more people to glorify this organization, leading more people to downstream
take these actions. We didn't have very good data. It is really hard to connect the dots and say,
this person saw this tweet and then did this thing and then did this other thing and then got here. So the conversation around online radicalization has always suffered from a lack of data access and clarity.
But that perception really began to become the dominant one.
And you did start to see Twitter taking steps to minimize the reach of that kind of lawful but awful type of boosterism.
The other important point that I saw you make is this idea that we could ever have an unmoderated public square just doesn't really fit with reality, even off Twitter, right?
The idea that there's just no rules when everyone just gets together and can say whatever they want, right?
I think you mentioned the example of noise ordinances or imagine a crowd of angry people following someone around.
We usually don't just let that happen in the real world either.
No.
And well, this is the, you know, the crowd psychology book I was thinking of when I was doing the work on the essay is by Elias Canetti.
It's called Crowds and Power.
And it's from the 1960s, right?
This predates the internet by decades.
But it was, again, looking at this, this question of how do we think about roles of crowds, formation of crowds, and their real-world impact, particularly when oftentimes there is like
a momentum towards some sort of violence, right? There's like a desire for some sort of emotional
release. What is the thing that they're going to do? Are they going to burn down a building?
What is going to happen? And so, that dynamic, you know, there has never been this unmoderated public square. It's a fiction. There's always been a recognition that, you know, behavior in the real world does require crowd control at times. And so there's that question of how and where, and is it too stringent, does it stifle speech unfairly? Are protests stifled when there's
a legitimate grievance, the right of the people to assemble? But there is this series of trade-offs
and local ordinances and laws related to how that is conducted. You do need a permit to have a
protest in New York City and there are rules. How do you think Twitter has handled the
balance between the value of free expression and assembly and content moderation up to this point?
Well, I think one of the challenges with content moderation, the policy is only as good as its
enforcement. And the legitimacy of the enforcement is really tied to transparency around the enforcement.
So what you started to see happen was the belief that people were being shadow banned, the
ways in which policies appeared to be disproportionately impacting conservative audiences was a really
big kind of grievance. One of the ways in which it
was happening was some of the policies did not go after viewpoint-based content, but went after
certain types of behavior. So there are real nuances in Twitter's policy around, for example,
you can express, you know, commentary about a particular group. The trans rights conversation comes up as an example a lot lately. You can express a
belief or a political opinion or commentary, but you can't direct hate at a particular person who's
a member of that community. And so there is this delineation, you know, I think it says
targeted at someone where the primary intent is to harass or intimidate.
That's written into the policy.
So the language is broad.
The language is often vague.
The enforcement is not necessarily uniform.
One content moderator may see a particular type of intent and another content moderator
doesn't.
I think probably most people who've been on the platform long enough and have reported
at least one tweet know that sometimes things come back and you're like, how the hell, you know, why wasn't this taken down? Or you're on the receiving end of a takedown notice because, you know, you used a colloquial phrase that was interpreted by whoever got the content moderation flag as something that meant more than it did.
You see this in particular with the challenges of moderating particular vernacular at times. One community might use bitch as, like, a term of endearment and another community sees that as, you know, harassment, right? And so the AI moderation is not that great. When something goes through a first pass with an AI moderator, you get a lot of false positives. When it goes through a second review by a human moderator, sometimes they'll roll back that decision. So you start to see particularly high-profile accounts will then tweet about their bad experience with the moderation process.
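A minimal sketch of the two-pass pipeline she's describing, with made-up thresholds and labels rather than Twitter's actual rules: a cheap, noisy classifier errs toward flagging on the first pass, and a human reviewer can uphold or roll back the decision on the second.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str  # "allow", "flag_for_review", or "remove"
    source: str  # "ai" or "human"

def first_pass(toxicity_score: float) -> Decision:
    """AI pass: cheap but noisy, tuned to over-flag. This is where the
    false positives come from. (Thresholds invented for illustration.)"""
    if toxicity_score > 0.9:
        return Decision("remove", "ai")
    if toxicity_score > 0.6:
        return Decision("flag_for_review", "ai")
    return Decision("allow", "ai")

def second_pass(ai_decision: Decision, human_says_violation: bool) -> Decision:
    """Human review of anything the AI acted on: uphold, or roll back a
    false positive, e.g., vernacular the model misreads as harassment."""
    if ai_decision.action == "allow":
        return ai_decision  # never reviewed
    if human_says_violation:
        return Decision("remove", "human")
    return Decision("allow", "human")  # rollback

# A friendly use of a word the classifier scores as toxic:
flagged = first_pass(toxicity_score=0.75)          # flag_for_review
final = second_pass(flagged, human_says_violation=False)
print(final)  # Decision(action='allow', source='human'), i.e., rolled back
```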
And, you know, this question of, is there disproportionate viewpoint-based censorship? Based on the research that's come out, the answer seems to be no. There was a paper that actually just came out last week. I think it might have been Brendan Nyhan's, I hope I'm not misstating that, but
he was tweeting about it at least. In that work, they found that there was a larger percentage of
conservative users who were taken down because they were spreading misinformation. And so it
fell under the misinformation policy. And this is what leads to a lot of questions and debates
around who should be the arbiter of truth, and is the moderation policy fairly written, and, you know, does the misinformation policy lead to moderation outcomes that more significantly impact one particular group of people? There's just so much that goes into it. Content moderation is really hard. And I've been, I don't know if amused is the word, but, you know, I see these
tweet storms go by about the, you know, centuries of jurisprudence on free speech law, which is
relevant, but I don't see anything going by on the, you know, 20 years of content moderation
policy evolution. And, you know, across every single social platform, every platform is moderated.
Truth Social has a moderation policy. Parler has a moderation policy. Gettr has a moderation policy. And that's because there is a recognition that a free-for-all does
not create the best community experience. And so rather than getting at the nuances of we want this,
we don't want that, we've just kind of reduced it down to a meme. We want free speech. And that's
where the conversation is. I mean, one of the policies for content moderation on Truth Social is that
you can't say anything critical about Donald Trump. So much for like the value of free
expression. It does strike me that Elon himself has not given a lot of thought to content
moderation policy. Most of what we've heard from him, aside from on Twitter, is in that TED interview he gave a couple
weeks ago. And I remember he was asked, you know, there's one tweet that says, I wish this politician
weren't alive anymore. There's another that shows a graphic image of the politician saying,
I wish they weren't alive anymore. There's a third tweet that gives the politician's address and says, I wish they weren't alive anymore. You know, which one do you ban?
Which one do you take down? And doesn't this always have to be human judgment at some point?
There's no algorithm that can figure this out. And Elon's answer was just sort of nonsense. Like, he really hadn't thought about this much at all. This is where, you know, I've had a lot of,
um, a lot of interesting conversations
and also a lot of arguments, actually,
even with people who I think of as good friends about this.
It's become coded in this weird binary.
You either love free speech and support this acquisition
or you hate free speech and don't.
And then there's a vast gulf of potential positions that exist between those two things. But this is Twitter,
you know. One thing that's been interesting is if you consider the context of moderation in the Facebook oversight board, right? So this is a really interesting thing. You have another large platform, different dynamics, much more of the kind of closed-crowd, group-type stuff, you know, as opposed to the bear-baiting free-for-all. But recognizing
that sometimes moderation decisions are bad, right? Enforcement was bad. Or more importantly,
that a policy doesn't adequately take into account the value of maximizing free expression.
The oversight board does these deliberations and
they put out these findings and they're, you know, they're binding for Facebook policy.
But they do these deep assessments saying, should this person have been taken down over this comment? And they try to have, you know, kind of broader precedent that comes
out of these decisions, these determinations,
to try to improve the value of free expression on the platform. And so you do see this careful
deliberation. The findings are put out publicly. They're oftentimes on really big pivotal cases,
the oversight board members will do some interviews and talk about what their process was.
So in a way, it's almost like, you know, the Supreme Court
of social media moderation. And I do think that there's real value there, because maybe with that
kind of transparency, here's the facts of the case, here's what happened, here's the moderation
decision that was made, here's our finding about whether that moderation decision was good or bad,
and here's what the policy could or should be instead, if the finding
is that it was bad or, you know, did not protect freedom of expression. That process and the
transparency and visibility of that process, I think, are actually quite powerful towards creating
a better public understanding and perhaps a better legitimacy in how the public thinks about,
you know, these policies and governance on private platforms that are public squares. So, you know,
this is not to say that the Facebook oversight board is without its faults, but just as a model
for helping people understand what is happening and why, I do think that there's actually some real value there. And, you know, perhaps something more like this on Twitter would help to defuse some of the allegations or refine the policies in such a way that people feel they're free-speech maximizing.
Well, on that note of
transparency, you know, Elon has also talked a lot about making Twitter's algorithm open source. Is this feasible? Would it matter? And is there any evidence that doing so
would show that, you know, conservatives have been shadow banned or their views have been
suppressed more than liberals or anything like that? You know, first of all, I want to say I am
like strongly pro-transparency and algorithmic auditing. And I've been writing about it for
years. You know, there's a bill called the Platform Accountability and Transparency Act. I, you know, with some colleagues, literally yesterday, published some ideas around, like, if we ask for transparency,
what do we mean? What do we want to get at? What are the questions researchers might want to answer?
So much of the perception of what is happening, whether that's anti-conservative bias
or every group that has had a bad moderation experience feels that there is something stacked
against it. Is there a way for us to have data access, to have an empirical view, to really assess
these questions? So I think that transparency is foundational. I cannot get my head around what
open sourcing the algorithm actually means in
this context. If we're arguing that there should be middleware and users should have greater control over their experience and get to decide what they will or won't see, there have been so many people working on that concept of middleware. Ethan Zuckerman comes to mind.
He had a project called Gobo at
MIT that was like, okay, if you were to tweak sliders, what would your feeds show instead?
Because there is no neutral in what is shown to you in your feed. This is all about hierarchical
ranking in the process of curation. Everything is weighted. Even reverse chronological is just a different weighting, one that privileges time. But in every other way, when you take out your phone and open it, it's not that they're trying to suppress or censor, you know, your noisy uncle. It's just that
they think that some other piece of content is going to be more likely to resonate with you.
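A minimal sketch of what "everything is weighted" means. The feature names and weights below are invented, but the shape is the general one: score every candidate post, then sort by score. Reverse chronological drops out as the special case where only recency carries any weight.

```python
import time

def score(post: dict, weights: dict) -> float:
    """Generic feed ranking: a weighted sum of features per candidate post.
    The feed is just candidates sorted by this score. Features and weights
    are illustrative, not Twitter's."""
    age_hours = (time.time() - post["created_at"]) / 3600
    recency = 1.0 / (1.0 + age_hours)  # newer => closer to 1.0
    return (weights["recency"] * recency
            + weights["engagement"] * post["predicted_engagement"]
            + weights["affinity"] * post["author_affinity"])

ranked = {"recency": 1.0, "engagement": 5.0, "affinity": 3.0}
reverse_chron = {"recency": 1.0, "engagement": 0.0, "affinity": 0.0}  # time only

posts = [
    {"created_at": time.time() - 7200, "predicted_engagement": 0.9, "author_affinity": 0.8},
    {"created_at": time.time() - 60,   "predicted_engagement": 0.1, "author_affinity": 0.2},
]
feed = sorted(posts, key=lambda p: score(p, ranked), reverse=True)
# Under `ranked`, the two-hour-old high-engagement post wins; under
# `reverse_chron`, the one-minute-old post always would.
```

Which is also the honest answer to "my friends don't see everything I post": the post isn't suppressed, it just loses the sort to content scored as more likely to resonate.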
So this idea. I used to, you know, periodically ask people on Twitter, why do you think
you're being censored? And this was back in 2018 when we could have these conversations. And one thing that people said a lot was my
friends don't see everything I post. And I thought that was such an interesting answer because it just shows a complete lack of familiarity. And this is not
a ding on those people who didn't have the familiarity. This is a failure to communicate
on the part of the platform, right?
This is how we should be educating people, not just media literacy and sources, but here
is how a recommendation engine works conceptually.
Here is how a feed ranking works conceptually, just so that you understand a little bit better
why your post is not seen by every one of your friends at that time.
So this question of
open sourcing the algorithm, I think, again, it's being processed as like, you either love the idea
and you're like pro free speech and pro transparency, or you think the idea is ridiculous,
which is where large numbers of people in tech have come down, because they're like, what, they're going to show an ML model with, you know, no training data, no data? I was going to say, I have no idea what it would show.
I was really curious.
What do people think they're going to see?
I mean, I kind of thought just what you were explaining,
which is transparency in terms of, here's how the ranking works.
Here's how we weight certain posts, why we show you certain posts,
what goes into the algorithm that makes
you see something and not something else that makes us think that you're going to engage more
with this particular piece of content and less with this particular piece of content. It does
seem like that kind of transparency would be helpful. I'll be completely honest that I had
no fucking idea what open source algorithm even means. Is it a bunch of like
numbers and code that I would just be like?
There's actually a fairly robust open source software movement, you know, dating back to the earliest days of the internet, where there's a belief that you put your code up and other people can contribute to it. They can see it, they can audit it, they can make it better, they can find bugs. GitHub really is kind of a platform for this. And it's been core to and really
foundational to the culture of the web for, you know, for decades. So in the context of open
source, it's kind of the idea that you're going to like put it out and other people are going to
get to audit it and engage with it and so on and so forth. Again, this question of algorithmic auditing, I think that makes complete sense. Well, not necessarily the open sourcing, but the algorithmic auditing piece makes a lot of sense.
It just requires like extraordinary technical capabilities to do.
And that's where, you know, again, some of the platform accountability and transparency work that we're starting to think about gets into what is possible. What would transparency enable? How do you maximize, you know, maintaining trade secrets for platforms, while also having some, you know,
independent third party researchers looking at and, you know, kind of assessing what's going on
under the hood. It also seems like with just about everything else in politics these days,
that we're all sort of playing this game
like it's on the level where we're going to have all this transparency about what's actually
happening to give people a better idea of what's going on, of algorithmic rankings and all that.
And conservatives are just going to, or right-wingers are just going to use that
to point to something and be like, see, we were shadow banned, even if
it doesn't make sense. And it's not true.
Well, and that's because ultimately, you know, this is a conversation that is part of the meta conversation about power, right? And, you know, attention. I was kind of joking around about bringing stats to a meme fight on Twitter last night. But I think that that's actually, you know, to some extent what's happening. You're
assuming that there's good faith. And Elon very well may be in the realm of the good faith people who
want to know the truth and will, you know, adjust their beliefs and their commentary on it when they
see what is actually happening. I think that's, you know, that used to be the
vision, right? We would have more facts and we would change our minds and we would say, okay,
this was happening, this was not happening here, so we should think about, you know, I was right,
I was wrong, you know, etc. That's not going to happen, I don't think. And, you know, people tell
me that that's a very cynical belief, but since there is such a political power component to the conversation, you may remember
President Trump fundraising on the idea that his supporters were being shadow banned or censored.
And one of the ways in which this manifested, and this I think is actually really interesting,
another thing that maybe got memory-holed if you weren't paying attention to content moderation conversations. It happened because a tweet of his was labeled, not taken down, labeled. And there
was a fact check that was appended to some false claim that he made. I don't remember the specifics
right now. But the label was contextualized by his inner circle on Twitter and other places
as censorship. And I thought, wow, this is really something. Now we've moved into the realm where contextualization, which you could argue is actually kind of counter-speech in a way, it's just putting out, you know, the other point, saying,
man, this isn't accurate, but we're letting it stand. Here it is, nobody's taking it down.
But here is the, here's a fact check on that point. And again, this question of design, right? You can let something stand, you can let it stay up, you can let that account be searchable, you know, findable, and you can, you know, enable people to see its content, but you can also put up this fact check. And yet,
the fact check itself, the act of informing, the labeling, was processed and put out to the public,
to the supporters as an egregious act of censorship.
And then this led to a web form asking people to describe a time, you know, the times that
platforms had censored them. And that, of course, this being the internet, turned into a bunch of people uploading, you know, photos, you can imagine of what.
I do remember the story.
But that was kind of how it played out. And platforms have, for anyone who hasn't been
immersed in content moderation, you have remove, reduce, and inform, right? Those are the sort of
three buckets. That's Facebook's terminology, but roughly speaking, that's what tech platforms have
at their disposal. Remove, it comes down. Reduce, it's algorithmically downranked or
deprecated. Again, you can go and search it and see it on someone's account, but it's not going
to be pushed into the feed, right? It's downranked a bit in the curation process. And then inform,
which is, the label goes up alongside it.
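Sketched as code, with hypothetical fields and a made-up downranking multiplier, the three buckets might look like this; note that only remove actually takes the post away:

```python
from enum import Enum

class ModAction(Enum):
    REMOVE = "remove"  # content comes down entirely
    REDUCE = "reduce"  # stays up and searchable, but downranked in feeds
    INFORM = "inform"  # stays up at full reach, with a label attached

def apply_action(post: dict, action: ModAction) -> dict:
    """Illustrative effect of each bucket on a post (not any platform's
    real schema): remove hides it; reduce shrinks its ranking score so
    feeds rarely surface it while profile visits still show it; inform
    leaves reach untouched and appends context."""
    if action is ModAction.REMOVE:
        return {**post, "visible": False}
    if action is ModAction.REDUCE:
        return {**post, "rank_multiplier": 0.1}
    return {**post, "label": "Disputed: see fact check"}  # INFORM
```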
And we just kind of reached a point, largely through politically motivated conversation, the ability to use this as a real grievance,
again, where there are some instances of moderation that are overreach and bad and shoddy and possibly biased. Again, we don't really have that data, that finding, yet. But all aspects of moderation
were rolled into this narrative of it is anti-free speech, moderation is censorship. And so any and all of
the nuance that was possible in that conversation, in that how do we use design to build better
communities to create healthy conversations, really just became flattened down into who has
the right to decide what is healthy and who is the arbiter of truth and who watches the watchman,
right? And that was where the conversation got kind of reduced to. And that's kind of where it's been stuck for a while
now. Well, and it also feeds into Trump and increasingly the right sort of grievance-based
politics, right? There's plenty of fact checks that I might disagree with. I would say, I don't agree with that fact check, I think I was right and the fact check is wrong. I would not say that fact check is censorship. But if you say it's censorship, then you make people believe that they have somehow been suppressed or prejudiced against, and you feed grievance and you feed anger. And so that's sort of what the right
does very well. And so in some ways, it's like, again, we're trying to play on the level here
with these fact checks and you can disagree about facts, but they're playing, like you said,
it's all about power, which is just a different ballgame. Thinking about the bigger picture here,
I really liked, and laughed at, the
opening of your Atlantic piece where you wrote, I didn't wake up this morning planning to
write about Twitter and I've never woken up with the intent to write about Elon Musk,
but this is the nature of Twitter.
The spectacle sucks you in.
And that got me thinking like as someone who studies how misinformation and propaganda
shape media and politics and democracy, how much do you think it matters what ultimately happens
to Twitter, which has always been a platform that's had outsized influence relative to its
user base? I think you said the P word, right? Propaganda, which is a whole other hour-long conversation, I think. But that is how I think about it. And I really feel like misinformation and disinformation, they're terms, they have value, they have meaning. But, you know, things that are propaganda, they're not falsifiable, right? You know, we usually use disinformation to refer to a
deliberate campaign to, you know, to mislead the public, oftentimes in the context of something
like state actors or people who are not what they seem using inauthentic tactics. That's what
disinformation should mean. That's what it, you know, referred to in its kind of Cold War origin. But propaganda has always been something
else. And these are platforms that are really tailor-made for propaganda. They're made to
persuade. But more importantly, there's the, you know, kind of old media theory view of propaganda as a tool for activation. Again, it is really all about activation. And that, I think, is the value
of Twitter. And some of the work that we did was on trying to understand election-related narratives in 2020. We did not pay attention to, like, candidate A lied about candidate B, or this
policy was not adequately truthfully represented. We were only interested in narratives around voter
fraud, right? And allegations of fraud. Because I think that there also has to be a notion of harm, right?
There's always going to be people who are wrong on the internet. There's always going to be
propaganda. There's even always going to be disinformation. So what are the high harm areas
that are worth moderating as opposed to allowing people to kind of fight it out amongst themselves?
There's many different opinions on that, but in the work that we did, we scoped it towards election delegitimization. And what you start to see is this
dynamic in which people see something, they feel uncertain about what they're seeing. I see a
suitcase outside of this polling place and I am concerned.
I have heard that there's going to be massive fraud. So I process that suitcase as somebody,
they are taking ballots away or moving ballots in, you know, as the case may be. And, you know,
everybody has a camera phone in their pocket. They take the photo, they tweet it, they tag in a
couple of influencers in their sort of political sphere, their, you know, politically aligned people. And then those people have massive followings oftentimes.
And so they blast it out, big if true. So again, there's no attempt to find out if it's true. No
one knows if it's true or not, but you've just created an environment of suspicion. You've
created, you know, an accusation. And what happens on Twitter does not stay on Twitter. So this dynamic
then makes it to Facebook where it's debated in those closed crowds. It makes it to YouTube where
somebody makes a video looking at the photo and spending 30 minutes discussing like what may or
may not be happening. But then that video is pushed back out to Twitter, right? And so this is an information environment. This is a system. And so interestingly, the moderation and policies of one
platform do have impact across, you know, as it kind of cascades across the system.
And so Twitter is important because of that amplification function, because people with
very, very large followings are on it because hyper-partisan media
is on it because mainstream media is on it with, you know, massive broadcast audiences. And that
is this kind of interesting pivotal role that Twitter takes: if you make something trend, you know, you make it true. People believe it, they see it, they engage with it, they amplify
it. And so it's a really directly participatory platform in a way that
very few things are. You might leave a comment on YouTube, but not everybody is engaging in quite
the same way. TikTok in some ways is maybe a close second, with that duet function, people playing and building off of each other's content, that kind of collaborative creation model. But Twitter is really distinct. People feel that this is the platform where they can speak
to the powerful, to the media, they can bypass the gatekeepers. And so it occupies a really kind of central place in our understanding of what it means to be
a participatory citizen in American politics today.
So I've seen people argue both the optimistic and pessimistic cases for what becomes of the
platform if Elon closes the deal. I think that's really hard to predict. But just to do this here,
what do you think the most pessimistic possibility might be? I think the most pessimistic
outcome would be like a rollback to 2015. That's sort of, you know, does it turn back into
harassment mobs and, you know, really kind of lawful but awful? Like, is there a proliferation
of lawful but awful, which is what they've tried to minimize at this point?
And on the optimistic side, if Elon were to call you tomorrow and ask for advice on what
would actually improve the platform from where it is now, or if the deal doesn't go through,
which is always a possibility, what would you tell him or how would you improve the platform
from where it is now? I think that the transparency piece is really, really
foundationally important, actually. I think that, you know, the same way there have been really great books written over the years explaining how the sausage is made in media,
I think there are real opportunities to do that, to explain how it's made on social media platforms,
just to, you know, we talk a lot about media literacy in the context of lateral reading or trusting a source or so on
and so forth. It's more like, how do you process the media content that you get on social media,
as opposed to kind of a foundational understanding of, in the broadest terms,
here is how curation works. Here is how recommendations work. Here is how something
trends. It is like an adversarial environment.
You don't want to necessarily put out the full, like, here's the weighting that you can manipulate
if you want to have the greatest impact in the shortest amount of time, you know, to get your
thing trending. But there is, again, a pretty broad area, I think, where we can
help people understand how this works. And then more importantly, for interrogating this question
of, is there disproportionate censorship? Is there viewpoint-based censorship? Is moderation fairly applied? I do think that there's a lot of work we can be doing on that front to get it out of the realm of, like, you know, memes and vibes and bring it into the realm of actually understanding how these systems work, because they're so central to our lives at this point. It is not just a thing that some extremely online people pay attention to. We've talked about the US in, you know, the last 45 minutes; we haven't even gotten to what happens in the rest of the world.
You know, these are global platforms and they, the impact of them, the power that they have,
the power to call attention, to activate is profound. And I do think that, you know, the transparency goal that Elon has is a good one. The maximization of freedom of
expression is incredibly powerful, particularly in countries where the media is controlled by
authoritarians and people don't have the right to go stand on their corner with a bullhorn, right? So again, I am not negative on the idea of improving Twitter to be a free-speech-maximizing platform. I just think that there is this nuance related to moderation that could better be incorporated into the understanding of the problem.
Yeah, and you don't always get that in the meme fights.
You don't get that on Twitter, it turns out. You don't get that from vibes and memes. Um, last two questions I'm asking all
of our guests, what were you doing the last time you realized you needed to put your phone down?
And, uh, what's your favorite way to unplug? Oh, I've got three kids, so eight, five, and 18 months, and the 18-month-old will just come and take the phone. She'll just pull it out of my hand. Very unfiltered. Mommy, it's playtime now. I actually feel really guilty when she does that. I feel like I've been immersed in the... I am one of these people, Twitter is actually my preferred platform of choice. It's where I spend most of my time. I really do love it, actually. And
I always learn something when I go on it, or I meet someone interesting. But I'm also one of these horrible people who, like, closes the app and then opens it again 30 seconds later; it's just the default. So I try to catch myself when I do that. How do I unplug? I mean,
I, again, three kids, I go camping. I take them out.
Oh, nice.
Yeah, yeah, yeah. We really like camping. Touch grass, right?
No, you know, I just like being outside. I like walking around. I just like being with the kids. I work a lot. And so it's nice to have that
family time. Yeah. No, I have almost a two-year-old, so I'm in the same boat.
Renee DiResta, thank you so much. I could talk for hours about all this stuff. And perhaps we will
again soon. But thank you so much for joining Offline. I really appreciate it.
Thank you. It's great to chat.
Offline is a Crooked Media production.
It's written and hosted by me, Jon Favreau.
It's produced by Austin Fisher.
Andrew Chadwick is our audio editor.
Kyle Seglin and Charlotte Landes sound engineered the show.
Jordan Katz and Kenny Siegel take care of our music.
Thanks to Tanya Somanader, Michael Martinez, Andy Gardner-Bernstein, Ari Schwartz, Andy Taft, and Sandy Girard for production support.
And to our digital team, Elijah Cone, Narineh Melkonian, and Amelia Montooth, who film and share our episodes as videos every week.