Offline with Jon Favreau - How Facebook Changes Us, Influencer Riots, and AI Gets Funny
Episode Date: August 13, 2023. According to a series of new studies published in Nature and Science, the way Facebook influences its users isn't as straightforward as it seems. Does that mean Facebook is off the hook for polarizing America? Joshua Tucker, NYU professor and lead researcher on the 2020 Facebook Election Research Project, joins Offline to talk about what his team found, what lessons we learned about Facebook's role in our world, and what it's like to collaborate on a project with Mark Zuckerberg's company. Plus: Max and Jon talk New York City's Twitch-fueled riot, AI learning to write (good) jokes, and the Zuck v. Musk cage match. For a closed-captioned version of this episode, click here. For a transcript of this episode, please email transcripts@crooked.com and include the name of the podcast.
Transcript
All right, the Republican debates are nearly upon us, starting on August 23rd.
There'll be comedy, there'll be horror, we'll laugh, we'll cry, we'll be terrified.
And that's just Ron DeSantis.
But you don't have to suffer through the debates alone.
Join our group discussion on Discord as we suffer through the debates together.
Subscribe to Friends of the Pod today at crooked.com slash friends.
Also, some merch you might want to get.
This year's hottest accessory for supporters
of abortion access is Bros for Ro merch. Are you a bro for Ro? This merch is for you.
As always, a portion of proceeds from every purchase in the Crooked store goes to Vote Save America's No Off Years Fund to support the work of organizers across the country. Check it out at crooked.com slash store.
Where does that cooperation come from? Well, the cooperation can come from the platforms themselves deciding that they're going to do it. The cooperation could come because it's mandated that they do it. Banks don't have the option of whether or not to run stress tests and report them to the government. Like car companies don't have the option of whether or not to do emissions tests, right?
I'm Jon Favreau. Welcome to Offline.
Hey, everyone. I'm Max Fisher, sitting in for Jon this week.
How is social media changing us?
This is, to me, one of the most important questions that we face right now.
So much of our culture and politics get routed
through social platforms. We all feel how powerful they can be in shaping what we feel and think.
And with Biden and Trump tied in polls, understanding social media's influence on
our democracy is as important as ever. It's why I was so excited to see a big new project studying
how Facebook influences its users. A team of researchers recently released
the first of their findings in four papers published in the journals Nature and Science.
You may have seen some of the headlines suggesting that the studies show that Facebook isn't so bad.
The Atlantic said, quote, So Maybe Facebook Didn't Ruin Politics. But I don't think that's
what the studies actually found. And I don't think it does justice to what are really interesting and really nuanced discoveries about Facebook's impact
on its users. Joshua Tucker helped assemble and lead the team that's been working on this for
three years. He's a political scientist at New York University and co-director of its Center for
Social Media and Politics. He spent 10 years studying social media. Like me, he spent a lot of his career before that
studying international politics,
but then got interested in how platforms like Facebook
might be influencing us.
He's here to talk about what his team found,
what lessons we learned about Facebook's role in our world,
and what it was like to collaborate on the project with the company.
As always, if you have comments, questions, or episode ideas,
please email us at offline at crooked.com. After the break, Jon returns. We'll break down my
conversation with Josh, discuss how Twitch streamers set off a riot in New York City,
and hear some surprisingly good jokes from none other than ChatGPT. Here's Josh Tucker.
Josh, welcome to Offline.
Thanks, Max. It's great to be here.
Let's talk about your team's new research on mostly Facebook, but also on Instagram.
We'll get into the specifics of these four studies in a moment, but I'm curious if there's a main organizing question or mission that you see as
driving the project. Yeah. I mean, the main organizing question was that in the aftermath
of the 2016 US elections, we had so many unanswered questions and so much speculation
about the impact of the platforms on the US 2016 elections. And it wasn't just the US 2016
elections. It was also things like Brexit
in the UK. There was just so much speculation and so much discussion of how social media was
changing electoral politics and was changing things around elections. And so the motivating
question was, could we have an opportunity to, in 2020, have better understanding of the role
of Facebook and Instagram in the context of the US 2020
elections as a first step to beginning to understand how social media impacts electoral
politics, but also the entire, you know, sort of politics around elections.
Was there a kind of starting hypothesis, again, not for the individual studies,
but for the broader project about what that impact might be or might have been in 2020? Yeah, no, there wasn't a starting hypothesis. What there was were starting
questions in the sense that they were big scientific questions that actually dovetailed
very well with what the public was interested in knowing, and that animated what we were doing.
So we were originally approached by people at Meta to see if we wanted to be involved with
this collaborative project. And the context of the project was going to be the US 2020 elections, Facebook and Instagram
in the US 2020 elections. And once Talia Stroud, my co-lead of the academic team on this project,
and I put together the team that would go ahead and implement all of this research on the academic
side, the very first thing we did was we sat down and said, okay,
what are the questions we want to answer? What are the topics we want to be exploring?
Before we got into the question of research design and what particular studies would look like and who would work on which studies. And the four big questions that we wanted to answer were,
what was the impact of social media on polarization in the context of the 2020 elections?
What was the impact on participation in the context of the 2020 elections? What was the impact on people's access to and understanding of
information in the context of the 2020 elections, which would also include misinformation and
disinformation? And then finally, we were interested in the impact on beliefs and legitimacy
of the electoral process. We started this in early
2020. Obviously, we didn't know where the world was going to go in that regard as the elections
unfolded, but we thought that this would be a big question here. It's such a wide view of the
platform's possible roles. I feel like that really says something about the enormous scope of what, by the time this project started, we thought social media's impact could be. And you mentioned that the project started in early 2020, I think that February,
you know, just as the pandemic was kicking off, and then you guys actually conducted it
starting late summer and through that fall. And that's a pretty tumultuous period to be thinking
about social media and democracy. So I'm curious how
events in the news over the course of that year might have shaped either your thinking
on what to look for, or maybe just your expectations for what you might find.
It's a, I mean, it's a great question. And, you know, you think back to what that time
was like in everybody's lives. I mean, in one way, it was the fact that
everybody moved to remote work made working on this project easier because this was being done
with people who were at universities all over the country. The meta researchers we were working with
were not all in the same place. And obviously because of the pandemic, then everybody was at
home, but everyone was transitioning to a world where you were doing
really important research on Zoom or whatever online platform it was that you were working on.
And so I think there was a sense that like, this is what we were doing here. This is how we were
going to do it. There weren't, you know, larger questions about how it was going to work: you were going to hop online and do this. Now, obviously, in March of 2020, as we were first starting all these things up, it was, you know, tumultuous. And there are a lot of reasons why this has taken so much longer than any of us thought it was going to take when we agreed to get involved with it and when we started on this project, but one of which was obviously that it was, you know, a crazy time for people figuring out
what was going on with their families,
what was going on with their lives.
So it did put, you know, extra hurdles
into the process of doing this research.
On the other hand, the fact that everything was on Zoom
made the way we put these research teams together
and the fact that we were doing all these weekly meetings,
you know, all the time for these studies
and it was all on Zoom,
that kind of felt a little more normal in a weird way. Yeah, that makes sense. I have to say, I wasn't surprised that it took
three years given that the scale of these studies is really unprecedented. I mean,
the amount of research that you all did is, I mean, it's fascinating to read, but it's also just
a very impressive project of this size. I'm going to go through quickly the four studies that you all
just released. Three of these are based on experiments where you made some specific
change to the platform for a few thousand users over three months in late 2020, and then tracked
certain consequences for those users as a result of that change. In one, you switched a bunch of users from the usual news feed,
where the content you see is selected and sorted by Facebook's algorithm
or Instagram's algorithm, to a simpler news feed that simply shows
the newest content from your network first.
It's sometimes called reverse chronological.
In another, you tweaked the algorithm to show users less content
from politically like-minded friends, groups, and pages.
In the third experiment, you blocked users from seeing reshares. So those users did not see posts
that their friends had clicked share on to reshare into other people's feeds. And then you released a
fourth separate study that analyzed the political valence of all news articles that existed on
Facebook and then measured how this
differed from the political valence of what the algorithm showed US users as well as what those
users actually tended to engage with, sometimes called the funnel of engagement from all the
content to what you see to what you engage with. And rather than ask you to go through all of the results, because you found so many fascinating things: is there a particular finding that jumps out to you as kind of most surprising? And then
maybe separately, is there a particular finding that struck you as most important for either
advancing or altering our understanding of Facebook's effects on our politics?
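For readers who want the first experiment concrete: here is a minimal sketch, in Python, of the difference between an engagement-ranked feed and the reverse-chronological feed Max describes. The Post fields and scoring weights are illustrative assumptions, not Meta's actual ranking system.

```python
# Illustrative sketch only: contrasts an engagement-ranked feed with a
# reverse-chronological one. Fields and weights are invented for clarity.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    likes: int
    comments: int
    reshares: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights; a real ranking model uses many more signals.
    return post.likes + 2 * post.comments + 3 * post.reshares

def ranked_feed(posts: list[Post]) -> list[Post]:
    # The default condition: content sorted by predicted engagement.
    return sorted(posts, key=engagement_score, reverse=True)

def reverse_chronological_feed(posts: list[Post]) -> list[Post]:
    # The experimental condition: newest content from your network first.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)
```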
So first of all, thanks.
Those were great summaries of what we did.
And I think it's important for listeners and for those who are sort of just jumping into
this research for the first time to really understand, and you said this super nicely in your introduction,
right, that there are two types of studies that we were able to do here.
One set of studies is designed to get at causal impact. So what is the impact of the platform? These are experimental studies where users volunteered to participate in the studies and gave consent to participate and were told that some aspect of their Facebook experience
might be altered as part of this study. And that allows us to get at these causal questions.
What's the effect of viral content? What's the effect of being exposed to less content from
politically like-minded sources?
The other types of studies that we were able to do were these observational studies, where in order to protect people's privacy, we were able to look at aggregated data, but aggregated
data from across the entire US adult user base of Facebook or Instagram.
And it's the combination of the two of these, right?
There's lots of, in social science, there's lots of debate about the relative strengths and weaknesses of
these different types of approach. But I think it's the combination of the two of these
that actually is one of the strengths of the overall project that we've done here.
So the three big findings from across the studies are that, first, we did find that the algorithm and these different platform affordances that you were talking about, they really did have a big impact on people's on-platform experience.
So that was the first big finding across the papers. The second was that there's a good deal of ideological segregation on Facebook when it comes to
consumption of political news. So when we're talking about URLs here, the links that people
click on that they see in their feeds that are political news, we find lots of them that are
primarily seen by liberals, and we find lots of them that are primarily or exclusively seen by
conservatives. And that effect was especially strong for conservatives, wasn't it?
Yeah, it was asymmetrical.
There is definitely a section of news on Facebook that is almost entirely seen by conservatives and not seen at all by other people.
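As a rough illustration of the audience-based measure behind this finding, and of the "funnel of engagement" described earlier, here is a minimal sketch with invented numbers. Lean scores run from -1 (seen mostly by liberals) to +1 (seen mostly by conservatives); the papers' actual estimators are more involved.

```python
# Invented data: each article has an audience lean plus counts at three
# funnel stages (on-platform inventory, impressions, engagements).
articles = [
    # (lean, inventory, impressions, engagements)
    (-0.8, 1000, 400, 40),
    (-0.2, 1000, 900, 90),
    (+0.2, 1000, 950, 120),
    (+0.8, 1000, 600, 150),
]

def mean_lean(stage_index: int) -> float:
    # Weighted average lean, using the counts at one funnel stage as weights.
    total = sum(a[stage_index] for a in articles)
    return sum(a[0] * a[stage_index] for a in articles) / total

for stage, idx in [("inventory", 1), ("impressions", 2), ("engagement", 3)]:
    print(f"{stage:11s}: mean lean = {mean_lean(idx):+.3f}")
```

With these made-up numbers, the mean lean drifts as you move down the funnel from everything available to what users actually engage with, which is the shape of the question the fourth paper asks about real data.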
The third finding, though, was that despite the fact that these affordances of the platform that people think are so important,
right, had these big impacts on what happened to people on the platform,
we didn't find that any of these three changes that we did here, right, reducing exposure to
virality through reshares, putting people back in reverse chronological feeds so they don't have
the engagement algorithm, or reducing the echo chamber that people are in by showing them
less content from politically like-minded sources, we didn't find that any of them seemed to have
much of an effect at all on downstream attitudes. And in particular, we didn't find effects on
things like political polarization over issues, but also affective polarization, this measure of how much you dislike the other political party. Changing these things for three months,
and that three months is a big question, right? Three months is really long by the standpoint of an academic study. Maybe it's not that long by the standpoint of people's lifetime experiences
with these platforms. But by changing these things for three months, we didn't see any impact
on these kind of attitudes, these downstream attitudes. So it's a big impact on the on-platform,
but much less in the off-platform attitudes. Well, let's actually, let's hold on that for a
second because a lot has been made in the discussion of these studies of the fact that
none of those three experiments curbing these really core features of the platform caused
affected users to become less polarized. Facebook put out a statement saying
basically that this proved that the platform doesn't cause polarization or
affect users' political views. The Atlantic ran a write-up of this with the headline, quote, So Maybe Facebook Didn't Ruin Politics, and the New York Times said these studies complicate the idea that social media algorithms are politically harmful or should be regulated.
And I have to say, I think that to me, what the study said was a little bit more nuanced. And
even if they had said that, I think it's maybe a bit of a misunderstanding of how social science
works to suggest that these experiments disprove or somehow undercut all of the prior research
showing a link between social media
algorithms and polarization or other political effects. But what lessons do you take from the
fact or what lessons, I guess, should we take from the fact that these three experiments did not lead
affected users to become less polarized? Right. So Max, you said when you reached out to me,
you were excited to nerd out
about the science here. So I'm going to go in that direction just a little bit. Right. So my take from
a very kind of social science-y nerd out perspective here is these should update our priors a little
bit, but they should not in any way, shape, or form be interpreted in some of the broad ways that people are talking about here, as saying, oh, well, of course, now this shows that social media has no impact on political polarization.
So let me explain why. We did these studies. Obviously, we designed these experiments because
we thought we were going to test and capture these effects that everybody thinks, or that
there was a lot of literature, there's a lot of theoretical literature, certainly a lot of
speculation in public discussion and public discourse about these different aspects. That's why we were picking
these different aspects of the platform experience. And what was incredible about
this study was the access we had to be able to look at each of these different aspects
independently. Before, we had only been able to do things like deactivation experiments. Matt Gentzkow and Hunt Allcott had previously done deactivation experiments, and we had done some at the Center for Social Media and Politics. Those are sort of blunt
tools where you can say all of Facebook or no Facebook, all of WhatsApp or no WhatsApp.
Here in this study, yeah, we had the ability to go ahead and say, okay, we're just going to try
to get at virality. We're just going to try to get at the engagement algorithm. And we try,
obviously, we were somewhat
constrained by time in terms of how long we were going to run these things. But from the point of
view of social science studies, three months is actually a really long intervention to get people,
you know, to do these kinds of changes. And so obviously, we ran the studies because we had
theoretical reason to think that changing these aspects of the platform would have these impacts
on things like polarization. And so the fact that
we found over three months that they didn't have the impact, it should update our prior,
but it should update our prior a little bit. Why a little bit? Well, because there are a lot of
reasons that we want to caveat why our research doing this for three months cannot answer the
question of does social media cause political polarization, right? The first thing is, we did this for three months. Maybe if we'd done this for four years,
maybe if people had had reverse chronological feed for the entire administration since the 2016
election, you would have seen a different effect than what we found in the three months leading up
to the election. So maybe it was just too short a period of time. We also did it during the election
campaign. Now, we were
interested in the election campaign because in part, this is when politics is most heated and
most front of center in people's minds. Maybe it's the case that people were being overwhelmed
by information they were getting about the election and they were getting it from TV
and radio and their friends and other social media platforms. And so making a surgical change to
their Facebook experience, you know, that just wasn't strong enough to actually have the kind
of change in this period of time. So maybe if we had done these experiments in a less
politically heated time, we might have seen a different effect. You and I go way back in
international politics. You know, my background is in comparative politics, right? It's possible
that if we ran this in a multi-party system or if we ran it, you know, in a more
single party system, we might see different things.
So I think we want to be really careful about extrapolating across space as well as extrapolating
across time.
And then finally, the question that we can absolutely not answer here is the counterfactual
of if we had a world without social media, would there be lower
levels of political polarization? This thing was never designed to be able to answer that question.
We were impacting a very small portion of the population for a particular time at a particular
moment. What we learned, though, is that there are not simple fixes to really complex problems. It would have been great if we had found some sort of silver bullet where, oh, if you make this change to the platform
in the two months leading up to an election, it lowers tensions in the country and people
don't hate each other. They don't have as high levels of affective polarization.
What we found is that these theoretically driven potential interventions
to try to make changes to things like political polarization, it doesn't seem that these kind of
simple solutions are able to address what are these kind of complex societal-wide political
phenomenon. I'm really glad that you raised that because I have to say that one, I think at least one potential reading of these findings that I found myself going towards, which is not to say it's the only or definitive reading, but that is not that, you know, Facebook's effect is so neutral that even these drastic changes don't affect polarization, so therefore Facebook is fine, but rather that the Facebook platform in
its totality tends to be quite polarizing in a way that means that just removing one feature
for one subset of users is not going to change that kind of overall ecosystem enough to
not polarize people. And there was an entirely different reading, or not entirely different,
but something that speaks to, I think, really the wealth of the findings in here and also the
difficulty of reaching conclusions from it is that Science, the academic journal that published
three of these studies, ran them with the cover line Wired to Split. And an executive director at the journal told Facebook, quote, the findings of the research suggest Meta's algorithms are an important part
of what is keeping people divided. And I just thought it was so striking that so many people
could look at these same conclusions and reach these kind of different hypotheses for what it
tells us about social media and our world. Yeah. And I mean, I think it speaks a bit, Max, to just what you opened the session with today, the volume of findings that we have across these four papers. And as you noted, these are only the
first four papers. There are more studies to come. So these are super complex questions. And I do
think we want to reduce these to really simple explanations: the algorithm. Well, there's not one single algorithm. There's lots of different affordances, right? And again, another caveat that I didn't mention before is, like,
we went into this to do the most, you know, scientifically rigorous attempt to get at
virality or to get at echo chambers, right? And we adjusted one of these things, right?
We adjusted one of these things on one platform. The people who were in our Instagram studies were not in our Facebook studies. We were doing this all to get really clear causal estimates on what's going on. But as you correctly note, right, like people aren't just living on Facebook, right? They're living on Instagram, they're living on other platforms. But that doesn't speak to the overall information ecosystem that we have here, right? And I think
that, you know, we might also think about what happens when you change multiple things on this
platform. What I would love to see in future research is collaboration across the platforms,
where we change these kinds of experiences. You know, it might be the fact that people were
getting chronological feed on Facebook,
but they're still getting all sorts of engagement algorithms on TikTok and on Reels while they're
doing it and stuff like that.
So it points to the enormous complexity of it.
That being said, there is absolutely evidence across some of these papers, right, about
the role that Facebook plays in having these ideologically segregated communities.
And we found that very much in the ideological segregation paper. Now, there's been a lot of
discussion afterwards and arguments, and I think it's a big tangle here, and we tried to sort it
out. Other people sorted it out. We're going to keep looking at it. How much of this is socially
driven by people's choices? How much of it is algorithmically driven by what Facebook is doing?
But what we absolutely find is that if you're getting your news about politics from Facebook,
which we know a lot of people are, conservatives are getting different news than liberals are
getting, or at least they're going to different URLs. We didn't do anything in this study,
I want to be super clear about, about the content. We just looked at the audiences,
but the audiences are different. And we also found something about the way the platform is set up. The like-minded paper, which is always being described as an experimental paper, is actually both an experimental and an observational paper. And there's really interesting observational findings
in it because the experimental work looked at changing it so that people got less content from politically like
minded pages, groups, and friends. But we also looked at the level of content that the US users,
again, this is aggregated data, so we don't have any individual data here, but aggregated,
we were able to look at the amount of content that comes from politically like minded sources.
And across the platform, just over 50% of the content that the average user is seeing in their feed is from politically like-minded sources. About a third of it or so is from neutral sources, and only about a sixth of it or so, maybe a little more than that, looks like it comes from cross-cutting sources. So there's something about
again, being on Facebook. And this is again, there's tons to sort out here in terms of the
extent to which it's algorithmically driven or socially driven. And if you want to talk about
that, we can talk about the complexity of all of that. But it is a place where people are seeing
more content from politically like-minded sources. That being said, only a small amount of that
content is actually political, right? Like most of that content is about pictures and birthdays and,
you know, all sorts of other things as well. But, you know, there's definitely research in there
that talks about the way in which this platform allows people to have these kinds of experiences
online where, you know, politics or the political proclivities, where conservatives are
going one place for news, liberals are going someplace else for news, and people are seeing
much more content coming from people who are similar to them in terms of politics than they
are from people who are different. Well, let me pull up one finding in particular that I think speaks to the way that being on social media generally, and Facebook specifically, interacts with our politics.
And also, I just thought it was like a fascinating finding and is why I think it is
really interesting to try to engage with studying how being on the Internet might affect our behavior and our attitudes in politics.
It's from the experiment that turned the algorithm off for some users.
Once the algorithm was turned off, those users saw more politically themed content, but they became less likely to talk about politics or to interact with political posts. And so what that means, in other words, is that it seems that Facebook's algorithm is doing something to make users more inclined to engage in political discussion, even as it shows them fewer political posts.
So what do you make of that?
I mean, there's another interesting finding deep in the weeds in the Nature paper, the like-minded one, which I found super interesting, which is that when we ran the study and we
actually, as the experimental manipulation decreased the amount of content that people
were seeing from politically like-minded sources, we found the rate at which they engaged with that content from politically
like-minded sources went up.
It's super counterintuitive, right?
Well, on the other hand, I mean, maybe it's that people like engaging more with content from politically like-minded sources.
I mean, if you think about it, you have your friends who are, you know, from work and from Crooked Media and have similar political views to you, and then you have your crazy uncle who, you know, has a totally different view from you. And all of a sudden your feed experience changes and you're seeing less of your friends from Crooked Media and you're seeing, you know, more of your friends from high school and stuff like that. Then when, you know, someone from Crooked Media pops up, you're like, oh yeah, I like that. You know, you're more inclined to engage with it.
I mean, it's interesting. I mean,
I think the big takeaway point from all of this, and this again points to just the importance of doing this kind of research, is that until you run these changes, you don't know what the implications are going to be, right? Some of the things are things you think are going to happen, and some of the things you think are not going to happen. And all of these kinds of changes, everything that we did here, these are things that you can spin in a really positive sense. Who doesn't think we should be less likely to be in echo chambers, and who doesn't think, you know, that we should be less exposed to viral content? But another one of these interesting, like, trade-off things that we found was in the reshares experiment, which was designed to reduce exposure to viral content. We found that it did actually reduce the amount of content people saw from untrustworthy sources, a category that's distinct from everything else. And it's a small amount of content that people
see in their feeds. It's like less than 3% of what people see in their feeds. But when we made
it so that people were no longer being exposed to re-shared content, so decreasing the amount of things that were going viral, and basically made it so they only saw original posts, they didn't see posts that people were sharing, that amount of untrustworthy content went down.
Okay, so that seems really good.
But what also happened at the same time is that the proportion of political content went
down by 20% and the proportion of political news went down by 50%.
And actually, it turned out that people who were in this treatment had lower levels of knowledge of political news than people who were not in the treatment.
So you have these kinds of weird tradeoffs here, or not weird tradeoffs, maybe they're expected when you think about them. But it turns out what we've learned from this is that one of the important ways people get exposed to politics and political information and political news is through these reshares. Now, that's also the way they get exposed to this lower quality
content. So if you put in the intervention to try to reduce the amount of untrustworthy content that they're getting exposed to, right, just by getting rid of reshares, you also have this unintended side effect of, like, reducing the amount of political news to which they get exposed. So there are these trade-offs that you only
discover when you kind of do these kind of careful, rigorous scientific studies.
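A toy illustration, with invented proportions, of the trade-off Josh just described: because both political news and untrustworthy content travel disproportionately through reshares, removing reshares cuts both at once.

```python
# Hypothetical feed composition: (content type, arrived via reshare?, count).
posts = [
    ("political_news", True, 40),
    ("political_news", False, 40),
    ("untrustworthy", True, 25),
    ("untrustworthy", False, 5),
    ("everything_else", True, 300),
    ("everything_else", False, 590),
]

def composition(feed):
    # Share of the feed made up of each content type of interest.
    total = sum(count for _, _, count in feed)
    return {t: sum(c for s, _, c in feed if s == t) / total
            for t in ("political_news", "untrustworthy")}

no_reshares = [(s, r, c) for s, r, c in posts if not r]
print("with reshares:   ", composition(posts))        # ~8% political, ~3% untrustworthy
print("without reshares:", composition(no_reshares))  # both shares drop together
```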
Well, I feel like there have been a couple of different interventions or studies that have found
that methods to reduce people's exposure to misinformation or to untrustworthy sources
on social media also tend to make them less aware or knowledgeable of accurate information
about news. And to me, the lesson from that was always just that social media is designed
in a way that if it's going to deliver news, it's also going to deliver misinformation.
Yeah. And I mean, and then you get into what the trade-offs here are. And I will say like the science on this, I don't think is settled. There are some studies that show,
right, that if you warn people about misinformation, they get more suspicious of true news. But there are a number of very interesting studies out there. One in
particular from David Rothschild and Duncan Watts, where they sort of look at the level of
misinformation as opposed to information,
as opposed to what people are seeing on TV news, as opposed to what people...
And misinformation often ends up being a fraction of people's diets.
And we find that too in these studies, right?
When we look at these posts that were flagged by Meta's third-party fact-checkers as including
misinformation, it's a tiny proportion of what people are seeing, right?
And it does receive an outsized amount of attention, right, from everybody who's worried about social media.
So I often worry about this question looking forward about whether, you know, whether the
biggest worry is that people are exposed to misinformation, or the biggest worry is that
people think everything is misinformation, and they stop believing true news, right? And that's
not kind of what we got in the studies. But what I think we did get into in these studies, you know, was this question of some of the trade-offs of what people see from these different, you know, features of
the platform design. Right. And it can also be true, this is a debate I've had a few times with
a guy named Brendan Nyhan, who of course also worked on these studies. It can be true that the median user is getting CNN and Washington Post links, but you might also have some people who are more
politically engaged or further to the ends of the political spectrum who might be getting more
extreme information or more misinformation. I feel like a lesson that I took from 2020 is that
even if most of us were getting accurate information about the election,
the fraction of users who were getting bad information turned out to be pretty, pretty
fateful for the future of American democracy. Yeah, I was just gonna say, I think that's a
super important point. And actually, you know, and not just at the level of like, what are people's
experiences, but also how interventions work. So just to really quickly tell you about a different
paper that's not part of this project, but that's from one of the papers
at the Center for Social Media and Politics, we looked at what happened to people's media diets
and their ability to identify factually correct information and non-factually correct information
when we installed one of these browser warning systems that like flashes green if it's a
trustworthy link that you're going to and flashes red if it's a non-trustworthy link
and across the entire study, we found no effect for this when we looked at the average treatment effect for people participating. But when we broke them down by decile of how likely people had been to go to these low-quality news sites before the treatment, for the highest decile, for the people who went to the most of those sites, the treatment actually did have an effect. So no effect on the average user, but I think we also know that
a lot of stuff that happens on social media, we may need to be looking in the tails more, right?
Because when you have 100 million users, even if 1% of people are getting exposed to something,
and then 1% of those people decide to do something about it, those are still pretty big numbers.
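A minimal sketch, with simulated data, of the average-versus-tails point Josh is making: an average treatment effect can look negligible even when the effect is real but concentrated among the heaviest consumers. Everything about the data-generating process here is assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Pre-treatment visits to low-quality news sites (heavy-tailed by assumption).
baseline = rng.exponential(scale=2.0, size=n)
treated = rng.integers(0, 2, size=n).astype(bool)  # random assignment

# Assume the treatment only moves the top 10% heaviest consumers.
effect = np.where(baseline > np.quantile(baseline, 0.9), -1.5, 0.0)
outcome = baseline + treated * effect + rng.normal(0, 0.5, size=n)

ate = outcome[treated].mean() - outcome[~treated].mean()
print(f"Average treatment effect: {ate:.3f}")  # small, easy to read as "no effect"

# Estimate the effect within each decile of baseline exposure.
edges = np.quantile(baseline, np.linspace(0.1, 0.9, 9))
decile = np.digitize(baseline, edges)
for d in range(10):
    mask = decile == d
    diff = outcome[mask & treated].mean() - outcome[mask & ~treated].mean()
    print(f"decile {d}: effect = {diff:+.3f}")  # large only in the top decile
```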
Right. And again, that was to your point, a big lesson of 2020 is that 1% or whatever it is,
those are the people who are storming, you know, state capitol buildings over the course of the
summer over COVID restrictions. Those are the people who were, you know, or may have been the
people who were spreading QAnon, which of course didn't start on Facebook, but was popularized,
I think, by Facebook, or the people who ended up sieging the capital in January 2021.
So I'm really excited to see what you guys do looking into that.
I want to ask you about the experience of collaborating with Meta, the company that
owns Facebook, Instagram, and WhatsApp.
It was Meta that initially broached doing a joint research project.
They provided the data, the research questions and methods were set by
mutual agreement between Meta and your team, but your team had final control over interpreting the
results and reaching conclusions. And an academic named Michael Wagner served as a kind of independent
observer for this whole process. And he concluded that the research is, quote, rigorous, carefully
checked, transparent, ethical, and pathbreaking.
But he said that it should not be a model for other future research projects because it gave
the company too much influence. And I assume you agree with the first of his two conclusions,
but I'm curious what you make of the second of those. Yeah, thanks. I'm super happy to hear
about the first of his conclusions. Yeah, so I do agree with that. And I want to just clarify one thing you said.
The academics also had the final say over the research designs.
So just to kind of go into the way that, yeah, the way that this was set up.
So when we first started, nothing like this had been done before in the social sciences.
So we thought ahead of time about all sorts of what we informally called guardrails
that we could put in place to ensure the integrity of the research and that when we publish these
findings, people would trust what we're doing here. And so we're thrilled to hear what Mike
had to say about the trustworthiness of the research. And those included a lot of things,
and we can talk more about them, but just to hit some of the most important ones. One,
all of the designs were pre-registered, which means that we said what we were going
to report before we actually did the data analysis.
And this is sort of cutting edge in terms of transparency and open science generally
in scientific research right now.
But the second thing we did was we had academics as the lead authors of all the papers, with control rights. And what those control
rights mean is that the academics had final say on, as you said, the interpretation, what was
actually written in the papers, but also final say on what was going to go into these pre-analysis
plans that would define the research. Now, we worked closely with researchers at Meta who gave us a tremendous
amount of information about how we could design these
studies, what we could do. But the sort of ground rules were, you know, Meta set the boundaries.
This was about the US 2020 election. Within that, we set the big questions we were going to answer.
And as we went through these research designs, Meta could push back on the research designs if
there were legal problems, if it would cause us to violate their legal obligations, or if it would cause privacy problems, which is obviously entangled with legal problems. And also there were questions of
feasibility, right? Even with the very large budget we had on this, we didn't have an unlimited budget. And there might be some things that we thought we could do that it turns out you couldn't do within the platform. But within those feasibility constraints, once we had decided on what the
studies would look like, it was the academics that would have the final say on what the research designs look like.
And then, of course, the academics will have the final say on what is actually written in the papers in these cases.
You wanted to know what I thought about the second point he's made.
Yeah, the idea that this shouldn't be a model for future research.
Yeah.
So there's a part of that that I agree with, right? In the sense of why this research took place. I am incredibly grateful that we've been able to get all this information that we've gotten out so far, that we will get out throughout the project, and that will eventually be available for analysis and extension for academics to work with. And so I think there might be lots, lots more
papers that come out of this that go far beyond anything that we've done in this study. So I'm
incredibly grateful for this. But obviously, this project only happened because people at Meta
decided it could happen. And from that perspective, I don't think that that's where we want to be
as a society, where we get these
incredible views into what's happening in the platforms, only when the platforms decide that
we should do that, right? Like, so in that sense, I agree with Mike's part of it, where I might push
back a little bit is I do think this model of having teams of external researchers who get
expertise and access from the platforms and support
from the platforms and who set very, very careful ground rules about what is going to
be done in terms of reporting the results of that research.
I think that is a model.
I mean, I agree with Mike.
I hope that it's a model that we can use in a way when it's not just the platforms deciding
it, but it's society deciding this kind of research needs to take place. But I do think a lot of the innovations we've come up with in terms of how
to guide these collaborations, I hope very much, I mean, my dream is that we not only get this study
out of the work that we've done here, but we also get more of these types of studies being done
at other times, on other platforms, in other countries. There are so
many unanswered questions here that we as society should have the answers to.
Well, something that Michael Wagner, that independent observer, said was that Meta pushed your team to prioritize the experiment switching users from the algorithmic feed to the chronological feed.
Not that it was their idea,
but just that Meta wanted to kind of push that up
in the priority list.
And he said that this was because Meta's people
thought that those results would look good for Facebook.
And I think this was the one detail
about the collaboration that did give me a little bit of pause, not because it suggests that
Facebook had any undue influence over the work, but because it makes me think that Facebook,
which is, of course, constantly adjusting how its algorithms work, knew that those algorithms
would be relatively well-behaved during that study window. And that
was why they wanted that study to come out. Max, you'll have to ask Mike how he knows that
to be the case. And he may have information that I don't have, but I will stick by what we've said
and what we've said publicly. The academic researchers on these teams came up with the
research questions and the research designs that we wanted to do.
So you did not experience Meta pushing you to move the experiment switching off the algorithm up on the priority list.
You'd have to ask Mike, you know, where that comes from. He may be privy to conversations that I wasn't privy to, but that is not my experience. I mean, I think with the question of which studies were published first, there's a lot of idiosyncrasies that came into that.
They had to do with when data was available, which researchers were available, what people's personal schedules were going to be.
There are aspects of the peer review process that are completely out of our control.
From my perspective, those were the papers that were ready first, and we moved to send them to publication. And as other papers become ready, we will move to send them to publication. The desire to run a chronological feed experiment, that came from the researchers.
Let me ask one more question about collaborating with Meta. Without asking you to betray any confidences of your conversations
with people at the company, can you speak to what sense you got for their maybe starting
assumptions or kind of starting priors for whether or how the platforms might be influencing
people's politics? And of course, Meta is a huge company and you're interfacing with
specific people, but it's people who I don't get a chance to talk to. So I'm really curious what
your sense of them was. Yeah. So this is, I mean, it's a good opportunity here. Like when we,
when we talk about Meta, you know, I'm a political scientist, right? And we often recoil when people say Russia thinks X. We're like, no, actually, we know the Ministry of Defense thinks this, and the FSB thinks this, and the Anti-Corruption Task Force, you know, guys think this.
And it's, you know, Meta is a big corporation.
And the thing that I would really love for people to understand is that our interactions were with researchers.
They have, I mean, one of the reasons this is possible is because Meta has hired a team of social science researchers to conduct social science research internally.
Those were the people we were interacting with for 99% of the time on this project.
Occasionally, early on when we were putting these guardrails in place... so another one of the guardrails is that we said we would not do this, and the academics would have walked if this had been part of it, and we agreed to that as a team: we would not do this if Meta had right of pre-publication refusal, which is the right it has when its own employees do research, to say, we don't like the results of this, you can't publish it. And so in order to
get guarantees on that, we met with people who are higher up at the company because we wanted to hear
from people higher up at the company that those guarantees would be met. But the rest of the time,
when we're talking about the research, we're talking with researchers. And so the conversations that we were having
about the research that we were going to do were the same kind of conversations that we have with
researchers who work in the academy. What are the key scientific questions? What are the answers we
want to try to get? What are the methods that we feel comfortable using? And we had a big group,
17 people on the academic side, and there were a lot of meta researchers
as well.
In terms of what Meta's priors were about what the outcome of this research would be, you'd have to ask people who work there.
I can't, you know, I don't want to be in the business of speculating what other people
were thinking.
The conversations that we had with those researchers, they were very similar to the conversations we have with everybody.
What does the literature say?
What are the unanswered questions? And what could we do here to answer those questions that we were not previously able to do?
That matches my sense of the researchers these companies have hired, which is understandable. The academic job market is really tough right now. And tech is hiring, why not? They, in my experience, tend to have views similar to yours and mine, or at least a very open,
thoughtful, nuanced mind about it. Whereas the executives who I've talked to, who of course, were not involved in the research, but of course, greenlit the project, tend to be
very hostile to the idea that there is anything to the academic
research on social media's influence in our politics. It makes me very interested in the
kind of inner dynamics of the company that led them to choose to do this in 2020. So I want to
end by kind of taking a step back to consider the totality of research into social media. What do you think are the most important
points of scholarly consensus today on how social media affects our politics or doesn't,
or are there any? That's a big question. I think the most important point of scholarly consensus
is that we still continue to only be able to answer a
fraction of the questions that we want to be able to answer in this regard. So where there is
scholarly consensus is we need a mechanism to ensure that there's data access and that researchers
are able to access the kind of data to answer these kinds of questions so that we can inform
the public. And there's all sorts of different levels of this. And we've been watching this over the last, you know, six months. I wrote a report for the Hewlett Foundation with a team of people where we said, you know,
there's too much research being done on Twitter because it's too easy to get access to Twitter
data, right? Be careful what you wish for, right? Like everything is like one step forward,
two steps back in this regard. And so I think there's huge consensus among the academic community. And if you look at the project we did
here, there's so many things that we did here that we wouldn't have been able to do previously.
Like one thing is for all the Twitter research that's out there, and we've written a ton of
papers at the NYU Center for Social Media and Politics with Twitter data, right? But we never
had viewer impressions. We didn't have impressions data, what had actually appeared in people's feeds, right? And so for all the studies that we do about people's exposure to things on Twitter, it's always estimates. We know what the people they follow tweeted, and so we say, this is what could have appeared in their feeds, and we use that as a proxy. In this project, because Meta was part of the project, we were able to get impressions
data.
That allows the entire ideological segregation paper we've been talking about here today to be written.
The other thing we were able to do, and we talked about this earlier in the conversation,
yeah, we can do activation experiments or deactivation experiments, which are kind of
like all or nothing.
But in this study, by collaborating with the platforms, we were able to turn on and off
individual pieces that were really important from a theoretical perspective to try to see what the impact of those things was.
You need to do that with the cooperation of the platforms.
Now, back to Mike's point, right?
Where does that cooperation come from?
Well, the cooperation can come from the platforms themselves deciding that they're going to do it. The cooperation could come because it's mandated that they do it. Banks
don't have the option of whether or not to run stress tests and report them to the government,
right? Like car companies don't have the option of whether or not to do emissions tests, right?
So I think there is consensus on this. The other thing I would argue there's consensus on, to go back to the point I made beforehand:
Like, we now have more information about social media, about Facebook and Instagram's impact
on the US 2020 election than we will ever have on the US 2016 election.
But we got nothing about the British elections.
We've got nothing about Nigerian elections.
We've got nothing about the recent Brazilian elections, right?
Like, there is a need to think about how we ensure that the momentum from this continues
to go forward.
And I think that's a point of consensus here.
Now, in terms of other questions, I think, you know, there remain these, you know, sort
of huge questions that people are interested in, which is like, what types of things make
people more able to discern the veracity of news when they encounter
it online? And there's big questions about, like, what kinds of studies we do in these regards.
These questions that we talked about earlier here, right? Like are people becoming more polarized
because of echo chambers online or because of virality? These are questions that we'll be
answering. Now, I would say, if I had to give a giant-picture view on this from the political science community, I think there is a consensus
that in the aftermath of 2016, because the shock of what seemed to have happened on social media
was so dramatic and it attracted so much attention, right?
That we may have over-prescribed the ability of experiences on social media to have impacts on big picture political attitudes.
You know, there's a whole literature in political science about whether like campaign advertising
matters at all, right?
People spend billions of dollars on it, but does it just at the end of the day kind of balance each other out? Going into the 2024
election, I could probably tell you how 90% of the people in the country are going to vote right now,
and I don't even know who the candidates are. We've done other research. We had a big paper
that came out earlier this year about exposure to tweets from Russian trolls during the 2016
elections. And when that happened, you know,
everybody said, oh, my God, the Russians came in, they had these trolls. I mean, it was like,
from a national security perspective, putting on our old hats, it was terrible, right? Like,
the Russians were trying to hack our election. But from a political behavior perspective,
the idea that you were going to see a few more tweets than you otherwise would, coming from these other sources, when the campaigns and the politicians and the media were all tweeting at you, to say nothing of what you were seeing on television and from your friends and your neighbors, and to say nothing of the fact that, like, most people never change who they're going to vote for anyway, they just vote for their party, the idea that that could have had a big impact on the election kind of doesn't make sense. And so I think we're in a period of recalibrating, where I think we're trying to think about, you know, everything that we really know about political behavior, how fixed people's attitudes are about things, and how much social media can really impact it. There's a tension between that and the public discourse about this. And these are real corporations that sometimes do really dumb
things, right? And give people lots of reasons to be suspicious of them because they're large
corporations that are, you know, acting in their particular interests. And then there's the social science
research, which is maybe suggesting a recalibration a little bit of this and pointing to other
factors, right? Like some of the most exciting research to me going forward is thinking about
the interaction of what happens online and offline.
Like does your online environment matter differently based on your offline environment and vice
versa?
And then the other thing I think there is a growing consensus on among researchers in this area is this need to sort of focus research not just on the average treatment effect, which is really what we're doing in this US 2020 project, because this struck us as an opportunity to ask the questions we couldn't answer in 2016.
But I think going forward, a lot of people realize we need to be focusing in on the tails,
that even if there aren't these average level effects, what is happening in smaller groups?
And that's very difficult research to do for a lot of different reasons. I think there's a
good degree of consensus that people think the field needs to move
in that direction a bit as well.
Well, Josh, thank you so much for coming on Offline.
Thanks, Max.
It's been great talking to you and hopefully, I'll look forward to talking more as more
of these papers come out for the project.
Yeah, can't wait. Hey, Jon.
Hey.
So what did you make of all of this research into Facebook, Josh's comments on it?
Is it changing things for you?
Do you think Facebook is good now?
Yes.
No, I do.
I think Facebook is good now.
I mean, I am going to be betting on Zuck, absolutely. Zuck versus Musk, I mean, coliseum, unironically, I think it's just no question. Right before this recording... Uh, no. So first of all, great conversation. Great conversation. You guys really nerded out. Oh my God, I love to nerd out. We had another four and a half hours off camera going through the methodology. No, we didn't, but we absolutely could have. Yeah. You know, I approached this by first reading the coverage of the studies, and then you and I talked about it, and then I, uh, listened to your interview. And like, I came away feeling better about the quality of the research, despite the fact that Meta was involved. It sounds like they were very rigorous, that they had a sufficient level of independence. Right, I think they did. And then I ended up feeling worse about the coverage of the studies. Um, and you know, the coverage was a problem of media culture in general right now.
Yeah, I was going to ask you why you think that is.
Lack of context, which ironically is a bigger problem with social media.
Right. It is funny to see that.
It gets to the core of, you know, they just have to distill it down.
And I do think that to be fair to some of the reporters who wrote those stories, as you dig into the stories, you know, there's appropriate caveats and nuance. But I
think some of the bigger takeaways and headlines and the subheads are just, they're sort of silly.
I was actually going to ask you if you think that is this just a case of like,
writing about social science and the nuances therein is hard and like,
you know, the media is going to flatten it into a single take,
even if that's oversimplifying?
Or if you feel like maybe the mainstream media is going a little soft on the big social media companies lately?
So I do think it's just a problem of media.
I don't know if they're going soft, like something's changing.
I think there has always been a bit of a bias in the mainstream media and among reporters that the argument that social media is to blame feels a little too convenient for them.
Oh, I also think that people, and especially reporters, are quite skeptical, rightly so, it's part of their job, of explanations that involve us being brainwashed.
Sure. You know, the idea that propaganda works.
Right, there is this resistance to the belief that propaganda works. And I get that, because I think it is a simple explanation to just say, oh, you know, Fox beams out propaganda.
Facebook beams out propaganda.
The algorithm is controlling our beliefs.
Like, I get that. The other thing for me is this three-month period. Because I think our political beliefs
are the product of so many different factors
that develop over such a long time
that three months in the middle of an election year
is a tough period to measure effects on political views
and to expect political views would change
in any significant measurable way. Yeah, I agree with that. And three months, especially when you are keeping 90% of the
Facebook experience the same for people in the experiments, you're just changing like one
facet of it. And Josh was very open about this. I don't think they thought that this was necessarily going to be super transformative, or that this is a way to test the totality of Facebook's effects. You know, he was very open that we're just testing whether turning off the algorithm for a few months is going to change things for people. And that doesn't mean that Facebook is good now if it doesn't. But I think
you're right that so many aspects of Facebook are all kind of pointed in the same direction to do the same thing to you,
that if you turn off one part of it, I think it tracks that the other parts of the platform are
going to overwhelm, you know, you turn off the algorithm, you turn off reshares. And the thing
that Josh actually mentioned was that people on their team who worked on these studies worked on
a separate experiment a couple of years ago that you and I have talked about before, led by this guy named Hunt Allcott, where they turned off Facebook entirely for some number of people for
like four weeks, so even less than three months, but they like didn't use the platform at all.
And those people saw huge drops in their level of polarization. They became much happier. They
became much less anxious. And something that like Josh and I discussed a little bit before the interview that we ended up not getting into was like, both of those things can be true. It can be true that Facebook as a whole is extremely polarizing, and they see that in one experiment. And then in another experiment, they find that, okay, well, just adjusting one part of it doesn't fix the polarization in itself, which is something that has not come through at all, I think, in the media coverage of these studies. The other point that Josh made during your interview that I found compelling
and hadn't really thought about is you've really got to take into account the totality of your
information environment in order to understand how your political views are developed. And so
first he talks about even online, there's all these tweaks that they
did in the study. What kind of information are you getting from TikTok? What are you getting from other platforms? And then offline, right? What are you getting from friends? What are you getting from colleagues? So I'd be interested in, and I don't know if these experiments exist, controlling the totality of somebody's information environment, yeah, as a social media experiment, and then playing with it in different ways. Because, I saw a couple political scientists who were saying this online the other day, I can't remember who it was, but it was like, you know, political attitudes are just much more rigid than we think, and the idea that your information ecosystem really has an effect on your political attitudes does not really bear out, because they come from, you know, your parents, your peers, the way you were raised, all that kind of stuff. And I get that, but it's just common sense that your political views will be shaped by the information that you receive, whether that's from people, yeah, whether that's from news, whatever it's from. And so this idea that somehow you have these innate political beliefs that aren't going to be shaped by your information environment just doesn't make any sense.
Right. And to Josh's credit, he raised that too. And he was like, look, you can turn off Facebook for yourself, but everybody in your community is still affected by, if not Facebook, then social media,
because it is so prevalent. This is actually something that I found a lot when I was reporting
internationally on social media's effects is that if you go to a lot of like, global South countries,
like, you know, Sri Lanka, Myanmar, India, most people are not actually on social media because, you know, cell phone
ownership rates are not that high. Literacy rates might not be super high every place.
But the people who are on social media spread that information to other people in their
community. So something you would see over and over again, like when I was in Sri Lanka,
a rumor would go viral on Facebook and then everybody in the country would hear about it
within like two hours, even though most people don't have cell phones and are not on social media, because that's how information spreads: socially. And to your point, how do you test what it would mean to turn off social media in an entire country? Well, that's actually happened a few times. Like, you know, Sri Lanka turned off all of social media at one point because there were these riots, and the riots stopped immediately.
Myanmar is a place that went from having no social media at all because it was a like weird pariah state to suddenly having super high adoption rates within like two years.
And they had a genocide.
Like the country went crazy overnight. So I think that what these experiments highlight is not, oh, Facebook's not so bad, but that the effect of social media is so pervasive
that just turning off one piece of it won't fix it. And here's just a little experiment anyone
can do, because I've noticed this with myself. If you watch a live news event, whether it's a
debate, especially a debate, but also a big speech from a politician. Or, you know, I did this during Trump's CNN town hall. Watch an event and don't have any social media in front of you while you watch it. Don't look at any other reactions. Don't listen to any of your friends' reactions. Just watch it. And then afterwards, write down what you think. Your own takes, right? Uninfluenced by anyone else.
Yeah.
It will be so much different than watching it while you are scrolling through a feed. Because you are, even though you don't want to admit it, being influenced by the hive mind take. At least I find that for me. Maybe other people, you know, this won't happen to you. But I do think the influences that social media has on you aren't just necessarily like, oh, you see too much right-wing shit and suddenly you become a Republican, and then you see too much left-wing shit and you become... I don't think it's that simple.
I agree. But I do think you're of course influenced by just a firehose of takes and opinions coming at you, and there's like a mono-take that gets formed on social media that everybody in your community, like, coincidentally decides that they share.
Yeah. And
the other thing, and you pointed this out during the conversation, is that I think these studies confirm one thing that we found on this show from talking to a whole bunch of different people, which is that tweaking the algorithms is not going to do anything anyway. That the problems of social media, social media addiction, social media influence on our politics, on polarization, they are so much deeper than algorithm fixes.
Right. And I remember talking to Alex Stamos about this, who had, you know, worked at Facebook, and he made this point to me. At first I thought it was maybe him defending Meta a little bit, or Facebook. But when I really thought about it, the more people I talked to, it is true that it's more an issue of: are we really meant
to be connected on this scale?
And in this way, too.
And in this way.
And through like quantified likes, quantified shares, getting social feedback from a thousand people at once, ten times a day.
They're just such bigger problems, I think, than, you know, chronological feeds and reshares and all that kind of stuff can fix.
And I think they can make a difference. But as we've seen from these studies, those differences are hard to measure, and I just think there are bigger issues at play.
I think it's also worth noting that it was Facebook that wanted to do this. Not these experiments specifically, but they were the ones who approached the researchers and said, let's do some big, like, open-source research into the platform.
And maybe you heard Josh didn't want to get into too much, like, whether or not Meta pushed them to prioritize the algorithm study.
But we do know, because they've been very open about it, that in the last couple of years, they have tried to deprioritize news and politics on their algorithm.
So I think it stands to reason that they thought that they had made one change that would like make them look better.
Right.
Because, you know, the researchers at these companies generally are pretty smart, thoughtful people who, in my experience, are trying to do the right thing. But the people making the decisions about whether these studies happen are executives.
And we know what their priorities are.
And it's not the public good. The other point you guys touched on that stuck with me is that social media just tends to deliver more disinformation.
Like if it's your primary news delivery service,
and that's because there are no editors or gatekeepers.
Like that goes back to the problem of if you're just getting a bunch of shit
and no one's monitoring it,
no one's editing it, and no one's deciding what should be in front of you or what shouldn't be in front of you. And obviously we've talked about all the, you know, challenges of having gatekeepers and how that can create its own set of problems. But if it's just the Wild West out there, then of course you're going to get a bunch of misinformation. And I don't really know how you solve that problem.
I really think sometimes that this is maybe the single biggest change in political life in the last 10 or 20 years: the death of gatekeepers.
And that's in the media where now we're going to social media where it's all crowdsourced.
It's the like incredible weakening of political parties.
It's happening in all these different ways.
And it's a tough conversation to have because like sometimes the gatekeepers are bad and sometimes they're wrong. So it's like, it's tough to hold both ideas in your head at once
that in some ways abolishing gatekeepers is good, but in other ways, losing them, like we are seeing
some of the downsides. Like, you know, when the Republican Party doesn't have control over its nominating process, you get a president Donald Trump.
And I think if you think about losing gatekeepers in terms of trust, it becomes a different issue, right? Because there's a narrative about losing gatekeepers where it's like, there was just this elite that was homogeneous, right, and they were holding all the information, and now the people have risen up and everyone has a voice. Sure. But then there's also the fact that the reason we don't trust gatekeepers is because we don't trust anyone anymore. And there are pernicious forces out there, mostly on the right, encouraging that, telling us: don't even, you know, believe our truth. Don't believe any truth. Don't trust anyone. And if none of us can trust anyone except people who we think believe what we believe, then democracy becomes impossible.
Right. So, speaking of the collapse of gatekeepers and the rise of chaos.
Good segue. This is professional podcasting here. Strap in your seatbelt.
The Twitch riot in New York City. Holy shit. Did you watch the videos from this?
I did watch the videos.
Okay, I'll recap
quickly. The videos, I thought, were crazy. So there's this guy named Kai Cenat, who is a big social media influencer, mostly on Twitch, Instagram, YouTube. I've seen him described sometimes as a video game influencer, but I think he's more just, like, generally a super online guy. And he announced about a week and a half ago, as of when you're hearing this, that a week ago Friday he would be doing a big giveaway of gaming systems, $100 gift cards, and a bunch of other tech stuff in Union Square in New York. And thousands of people showed up, mostly young men and boys. It spun wildly out of control. The crowd started tearing apart parked cars, setting off fireworks. They were attacking cops who showed up and also fighting with each other. Cenat got charged with inciting a riot and riot causing public injury, which I believe is a felony. And about 65 others got arrested, about half of them minors.
So, Jon, what do you make of this exciting moment in our dumbass internet-era dystopia?
So my first reaction was, oh, I'm fucking old.
Because, like, what is happening out there?
What is going on with the kids?
That was my first old person reaction. But it's also like,
I don't think enough of us are,
especially in the
older generations,
are fully aware of like
how people are
communicating,
interacting with information.
This guy has 6 million followers. I think it's 6 million on, it might be, whatever his biggest platform is, which is Twitch. But if you add up all of his followers on all the platforms, which is a weird way to do math because some people might overlap, it's like almost 20 million.
I just want everyone to sit with 20 million for a second.
It's an insane number.
It's an insane number.
When you take that and then you think about the people who have power in this country, right?
And the elites in this country, right?
Political elites, media elites,
and like where are they getting their information from?
They're getting their information from CNN and MSNBC and Fox, right?
And, you know, New York Times, Washington Post, Politico.
You could like add the entire audience of all those outlets I just mentioned,
and it would be like a fraction of that many people.
And we don't know who these people are really.
We found out last Friday.
We found out last Friday. The way people are getting their information, how people are interacting with each other, how people are meeting offline with each other
is just so diffuse and fractured and decentralized.
And I think that we don't really have a good hold on it.
And I think that the implications for politics
and democracy are vast.
And we have not even begun to scratch the surface.
My first thought was like, oh God,
it got out of control
and there were arrests and all this stuff.
But I thought about January 6th
and I thought about, like, you know, the power of one person to get mass riots going.
We've talked about this a couple of times
that post January 6th, because of all the arrests,
there's been sort of an underground thing
where all of these, you know, QAnon MAGA people are like, oh, we don't want to do that again, because they're going to get us.
Terrified of going out, yeah.
But I didn't think about a Twitch stream saying, like, hey, go to this rally right now. Meet me at this political event. Go to the debate. And suddenly a bunch of people show up. And it doesn't take months of planning, right? This was very quick.
Right. So it makes me think, like, for the 2024 vote certification, someone's going to say there's a PS5 giveaway at the Capitol.
I do think it's actually a really interesting contrast
with the conversation with Josh and the Facebook studies. Because on the one hand, the platforms, I think not for benevolent reasons but for we-don't-want-to-get-regulated reasons, are trying to select hard away from politics and political voices and political topics. But to your point, we are kind of just now learning, because it arrived in Union Square and started tearing cars apart, that one of the big things they're selecting for now is these Mr. Beast-style influencer guys who do these big stunts, do these big giveaways. Which doesn't sound so bad on its own. But because it's so completely unregulated, because there are no checks on it, there are no gatekeepers on it, and because the numbers are so huge, you can have a 21-year-old guy who, like... I'm sure Kai Cenat didn't... I mean, he doesn't know any better.
Yeah, no, it doesn't sound like he had malicious intent.
No, right.
But he was just like chasing the incentives
that the platforms created for him,
which are like right now,
this is what it's selecting for.
And he did the natural thing,
which is like come to Union Square,
I'm going to give you a bunch of stuff,
which is like,
this is a kind of influencer stunt that is happening all over these platforms. So
it's like kind of inevitable we were going to arrive here. And we keep having the cycle over
and over again where the platform starts selecting for something because it's just like the thing
that they end up at that's going to make them the most money. And then we find out months or years
down the line, like, oh, you can't go to Union Square this Friday because there's a riot of 15-year-old boys there.
And, you know, to get into the parasocial relationship aspect of this: these influencers, the relationships they have with their audiences are so much more intimate than, I think, some political figures, celebrities, musicians, right? Because there's a separation you have with people who are public figures, right?
And there's a way that public figures talk and act
that seems separate from all of us.
When it's some guy who's just sitting there talking
and opening up about every aspect of his life.
He feels like your bro.
Feels like a close friend.
And you see the same kind of people who are commenting, they feel like a community. And so this desire for connection offline then manifests itself in: oh, well, he told us to all show up, let's all go show up.
Yeah. It's hard to imagine, like, if Rachel Maddow went on the air and was like, hey, every angry 60-year-old Democrat, we're going to storm Union Square.
I mean, she could test it.
I was also really-
Best of luck, Rachel.
I mean, the fact that it is these, like, boys, basically, that is a really recurring thing with chaos on the internet, going back to the 4chan days in the late 2000s. And I feel like I can say this because I used to be an adolescent or teenage boy: we are a malevolent force, individually ungovernable, and in large numbers, truly like a hurricane.
Yeah.
And I don't want to say, like, let's lock up all the, you know, 15-year-old boys, but it does... That's a great title for this episode.
We are terrified of the boys.
It does go to show that when you have like them being organized in this way, like they don't know any better.
It's easy for them to like get out of control.
Yeah.
Well, I do think that's been the case throughout history long before there was ever technology or electricity.
Yeah.
Put a bunch of men together, not a lot of good things happen.
So we've got a big AI dystopia update. Cool. But a slightly funny one, for once. The screenwriter Simon Rich took a spin with one of OpenAI's lesser-known language models. At the top of the show I said it was ChatGPT. I was mistaken. It's actually a different one called code-davinci-002.
Cool. One of my favorite
LLMs. Can we start naming these things
simpler names?
What would you name your language model?
I'll have to think about that.
They're too complicated.
Anyway. And so Simon,
who is the screenwriter, got very freaked out by this AI's ability to do something that he thought, and I would have thought too, would have been really hard for an AI to mimic, which is to write jokes.
He asked it to do Onion-style headlines.
Here are a few of them.
And I have to admit, I think they're pretty good.
Experts warn that war in Ukraine could become even more boring.
Pitching that one to Tommy and Ben.
Budget of new Batman movie swells to 200 million as director insists on using real Batman.
That's pretty good.
And this is my personal favorite.
Story of woman who rescues shelter dog with severely matted fur will inspire you to open a new tab and visit another website.
There's the one about the rural town up in arms over its depiction in the summer blockbuster, Cow Fuckers.
These are good. I feel like I would actually laugh at these if I saw them and thought they were human. And Simon wrote in an article for Time about this, quote, I'm not sure I could personally beat these jokes' quality, and certainly not instantaneously, for free. So did this change your view at all of the, like, threat that AI poses?
Yes. Okay. I had always been a little skeptical. ChatGPT, I've used it a bunch, I don't think it's that impressive, and I think that's sort of been the consensus.
I think that has tipped over into like,
oh, why are we all worried about this?
This can't do anything.
Simon wrote this piece,
not just to give us these funny Onion headlines, but to say, and I've now heard this from a couple other people who have been studying AI and are involved with this, that there are large language models that some of
these companies have that they have not released yet that are far, far more advanced than what we
have publicly right now. And I mean, he wrote about this partly because of the writers' strike, mainly because of the writers' strike. And I noticed it because, you know, John Mulaney, a comedian, tweeted about it, and he tweeted about it in the context of the writers' strike.
And it made me think that when I talked to Adam Conover
a couple episodes ago,
Adam was, you know, somewhat dismissive
of the threat of AI in terms of like,
I don't... He was like, well, I think that the studios will try to use it, but I don't think audiences will really embrace it. And, you know, at the time I mostly agreed with him. I thought he was maybe a little sanguine about the threats. But I think this is real. I think studios, productions, they'll use this.
Yeah.
And I really hope that the writers get some real solid protections from this. But I think our track record historically of trying to hold back large technological advances is poor, especially when they move this quickly. And this is moving very quickly. We have a government that is still trying to figure out how to regulate social media when the social media era is almost past now.
I know, they're getting so close to regulating Facebook, which has had declining users in the United States.
So, like, the idea that they're going to move on this really quickly... And the other problem is, say the writers get a good deal and get some good protections on AI. That's happening in this industry, in this country, right? This is an international issue.
And the idea that we are going to be able
to hold back technology that can churn out
really good jokes that are hard to tell
if they're written by AI or human,
or then premises to movies and television shows
or entire scripts is, I don't know.
I'm doubtful that we'll be able to do that.
And that's pretty scary.
Put it back in the box.
Yeah. For me, I think it did not. And I know I've been kind of the contrarian voice on this, but I don't think this changed my skepticism that this kind of AI will ever be able to reproduce the actual spark of human creativity, because this is a format of joke that has been around for a long time, and it clearly has been reduced to a formula.
But I agree that it reproduced that formula and did original work within that formula so effectively that like I would 100% think that these were real.
And I think it's just going to be impossible to not have this be out in the world.
And I think the danger is not replacing the spark of human creativity because I would still bet on that over AI.
Sure.
It's competing with it.
Yeah.
You know, and now there is going to be, there's already enough competition among humans.
Now there's going to be competition between humans and AI.
Right.
And I think that's going to be incredibly disruptive.
And to Simon Rich's point that it can produce it at enormous scale instantaneously.
Yeah.
So we're going to close out with the most important news story,
certainly in technology, maybe in the world, the Elon Musk, Mark Zuckerberg cage fight,
alleged cage fight. Elon Musk is tweeting that it's definitely happening. And you know,
if that guy says something, you can take it to the bank. That's solid. He's implied in the past that it will be at the Colosseum in Rome.
That's a twist, by the way.
I did not know that.
I didn't know that was an implication that was happening.
He produces, honestly, a lot of content about this cage fight that nobody thinks he's going to go through with.
He tweeted, I think a couple days ago, he said it will be somewhere in Italy but wouldn't say where.
And he said, everything done will pay respect to the past and present of Italy.
So we love to pay respect to the present of Italy.
And all proceeds will go to veterans for some reason.
It's unclear what veterans did to be dragged into this.
What percentage odds do you give to this thing actually happening?
I hate to say this, but I think fairly high.
Really?
Yeah. Wow. Can you put a number to it?
I'll do 70.
70?! What pushed you up?
Because these are two really bored rich guys who have incredibly high opinions of themselves, that I would argue are unwarranted,
but they're there.
And they have been shown to have many people around them
who tell them everything they say is a good idea.
That's true.
So everything gets pushed towards it.
Now, what cuts against this is,
I know Elon tweeted a couple,
or X'd a couple weeks ago.
I will never.
I will never.
That he might have to postpone this because he got some back surgery.
Oh, does he have to postpone it?
Yeah, his back surgery.
Is his little backy hurting him?
No, I think he said today that this is going to be in March.
So I guess he's got some time.
Okay, sure.
But I don't know.
I think this could happen.
And they love attention.
They love attention. That is part of why they have created the attention economy. They are running the attention economy because they crave attention more than anyone, particularly Musk.
Well, so this is part of why I would put the odds at like 7%.
Oh, wow. Seven?
That's right. I'm taking the under.
I like it. I mean, first of all, Elon Musk lies about everything.
So the more he says it's going to happen, the more I think it's definitely not going to happen.
I think he knows that if he sustains a direct hit of any kind, his heart will explode.
And all his weird like plastic surgery will pop off of his face.
And I also think if they are doing it all for attention,
which I think is a very good bet,
I think they know that if they're constantly delaying it,
if it's constantly about to happen,
then you always have attention for the thing about to happen.
Once you do it, then it's over.
Especially because I think the fight would last about 18 seconds.
Because you think Musk is going to get his ass kicked.
I hate to give it to Mark Zuckerberg on this one, but I think he is doing actual athleticism, whereas Elon Musk appears to just be like a weird big guy with a bizarrely shaped chest.
Yeah. I hate that we're going to watch it.
Oh, I mean, if there's something to watch, we're going to be there live. I think we're going to be Howard Cosell-ing it from the sidelines.
Yeah, Offline special bonus episode, live from the Colosseum.
I can't wait.
All for the glory of Mussolini.
For the glory of Rome.
Well, I will see you Colosseum-side, Jon.
Thanks, Max.
Talk to you next week.
All right, buddy. Offline is a Crooked Media production.
It's written and hosted by me, Jon Favreau.
It's produced by Austin Fisher.
Emma Illick-Frank is our associate producer.
Andrew Chadwick is our sound editor.
Kyle Seglin, Charlotte Landis, and Vassilis Fotopoulos sound engineered the show.
Jordan Katz and Kenny Siegel take care of our music. Thank you.