Big Technology Podcast - Okay, Maybe Social Media Isn't That Bad For Us — With Brendan Nyhan
Episode Date: August 3, 2022. Brendan Nyhan is a presidential professor in Dartmouth College's Department of Government. He joins Big Technology Podcast for a discussion that pushes back on the notion that social media is destroying our society and making us stupid. With this thoughtful analysis, Nyhan adds a bunch of nuance to the discussion. This episode is effectively pt. 2 of our conversation with Prof. Jonathan Haidt a few weeks back. While Haidt believes social media is breaking our society and threatening democracy, Nyhan says hold up just a second. By the way, here's a new thing I did: for a behind-the-scenes look into some of my research for this episode, you can check out my Pocket Collection (which is filled with the links) at: getpocket.com/bigtechnology
Transcript
LinkedIn Presents
Hello and welcome to Big Technology Podcast,
a show for cool-headed,
nuanced conversation of the tech world and beyond.
Two weeks ago, Jonathan Haidt joined us for a great discussion
about how social media was corroding our society
and making us uniquely stupid.
And this week, we bring you the counterpoint.
Brendan Nyhan is a presidential professor in the Department of Government at Dartmouth College,
and he's our guest this week.
He says the jury's out on social media and what it's actually doing to us,
and he has some great responses to the points that Haidt brought up when he was on the show.
Now, before we begin the interview, I'd like to kindly ask you to rate us on Apple Podcasts and Spotify,
or one or the other.
You know, these ratings really, really, really, really, really help us increase the visibility of the show,
letting platforms know that we're worth promoting and helping us bring on more great guests.
It takes just a few seconds and is an immense help to us.
So please rate Big Technology Podcast five stars as you listen.
Also, this episode was supposed to run back to back with Haidt's episode,
but then I got on the line with Blake Lemoine right after Google fired him.
It was major news, so we ran that show first; it aired last week.
So when you hear me talk about Haidt's interview "last week,"
that's why: it was really two weeks ago, but we just delayed this one
by a week because of breaking news.
Okay, and now without further delay,
here's my conversation with Brendan Nyhan.
And please remember to rate Big Technology Podcast five stars.
Professor Nyhan, welcome to the show.
Thanks for having me.
So let's just start with the core of Jonathan Haidt's argument,
and then we can go to some of the, you know,
other bullet points that are made in his story that we spoke about
on the podcast, and go one by one and talk about what you think.
Look, there are definitely some exaggerations about the way that social media has destroyed everything, and we talk about that on the show all the time.
However, I do think that the core argument that Professor Haidt makes is pretty reasonable and resonates with me.
And that is, and I'm going to see if I can capture it correctly: a lot of our discourse happens inside social media platforms.
He identified Twitter in particular, and the discourse there has been driven by the extremes of both parties.
And that leaves very little room for the center, which consistently gets beaten down every time they, you know, say something out of lockstep with the party orthodoxies.
And that has led us to what he calls a structural stupidity, where our discourse, you know, no longer has a give and take, no longer is really able to process things in a thoughtful way; it's mostly accusations.
And that seems to make sense to me.
It does seem that social media is driving our discourse in a very bad direction, largely
along the lines of what he's saying. What do you think about that? Well, I would separate two
different ideas. One is that social media is changing how elite discourse takes place or how
discourse among people who follow politics closely takes place. I'd separate that idea on the one
hand from the idea that social media is having these massive effects on political and social
attitudes, right, which has been the focus of a lot of the conversation around the pernicious
effects of social media. I think it's undeniable that Twitter is a
really important place where political discourse happens among elites and people who pay close
attention to politics. It's also true that most Americans aren't on Twitter, and an even smaller
percentage of them are actually following politics closely via Twitter. So people like you
and me and Haidt will overindex on Twitter political discourse because we have unusual experiences
there all the time, right? This is like Elon Musk being obsessed
with bots because bots are constantly flooding his Twitter mentions. I think it's easy to lose sight of
how unusual the experience of reading about politics on Twitter is in terms of everyday Americans'
way of understanding politics. Does that mean social media isn't having important effects on how
politics takes place among reporters, political elites, and the activist bases of the parties? No,
of course it matters in those contexts. But to me, the concern that's been the focus of a lot of the
conversation, and was the focus of the New Yorker article you mentioned, wasn't about those kinds of elite
discourse questions, but instead about broad claims of significant negative effects on political
attitudes. So, social media is polarizing the public, things like that, right? That's where the evidence is
much weaker, as that article pointed out.
And this is core to the discussion here, which is how much influence do these discussions
on Twitter actually have.
Now, of course, I'm not sure you could call it strictly elite.
We still have 217 million people using Twitter every day worldwide.
I understand that, you know, that's even less than the population of the U.S.
So it's not a big group.
However, you know, the influence there is real, because it's not only the discussion that's
happening on Twitter; it's the discussion that the reporters who are on Twitter then bring
to people in their local papers, the discussion that producers on Twitter bring to people on television,
and how the politicians end up relating to people, because they're just as addicted, you know,
as the rest of us. And you could also say academics, right, folks who are on
it as well and then change the way that they teach. So you have these pretty influential
swaths of society. And it's quite possible, we used the word downstream
last week, that some of this structural stupidity is downstream from
the stuff that happens on these apps, because they have such influence. So what do you think about
that argument? Well, I mean, like any technology, social media has its good and its bad.
And, you know, I don't mean to sound like I'm soft-pedaling the negative sides of social media.
I've written about them at great length. Yes. You know, I would encourage people to Google all the work
that I've done on these questions. Again, though, I think in this case,
technologies are complicated, and they're used in so many different ways that they really are
difficult to generalize about in these sweeping terms. So the idea that Twitter has made our
discourse stupid is one you need to refine and make precise. It's also the case that some of the most
nuanced, sophisticated conversations that I can have in my areas of expertise take place on
Twitter. That is simultaneous with the incredibly dumb stuff, which of course happens too.
So then you have to think about how does all this net out? In some cases, people are finding
communities of like-minded folks that are quite beneficial and they're having smart, interesting
conversation. In other cases, there's incredibly stupid things happening, right? How do you net that
out? I don't have a hot take about that because it's a complicated question. It doesn't have one
simple answer. There's no stupidity measure I can throw on every tweet on Twitter and say, okay,
the net stupidity today is 0.75, and yesterday it was 0.5, so it must be getting worse.
You know, I guess these are almost pre-scientific notions. And I guess I just want to move towards
some claims that we could actually test and engage with. You know, a point that's important
to bring out, and for your listeners to consider, is an idea that is hard to defend when made
explicit but is rarely articulated in these conversations: the idea that there
was some golden age of high-minded discourse in the past. And there's this implicit sense that
we've lost this era when truth mattered more, when people had more serious debates. But there
were very stupid forms of entertainment and very stupid political conversations happening in prior decades
too. And I would happily direct anyone to the archives of talk radio and television and
everything else, going back in time. And one other historical caution: it's also worth
keeping in mind that whenever a new communication technology arises, like social media,
like the internet, this kind of discourse takes place. The exact same
sorts of conversations were happening when television was created, when radio was created,
when the printing press was invented, right? There was this notion that
these technologies are enabling these pernicious kinds of discourse, and there are often
sweeping generalizations made about their negative impacts, right? Does that mean that
those technologies didn't have negative impacts? Of course not. But it just means that we should
be suspicious of knee-jerk reactions that lead to sweeping negative generalizations when
it comes to new communication technologies. And I want to get to some of the stuff that we can test, because
that's some of the things that you've studied, and we'll definitely talk about that in the course
of this discussion. I don't want to sweep that away. We definitely want to focus on that.
However, before we get to it, I'm just going to try to make the argument a little more precise. The argument with video games and television was that they were making people dumb.
But I think the argument here is not that there's a decline in collective IQ; it's that the discussions have become so radical that we've lost the center.
And that's the stupid situation happening here.
I don't think you saw that with TV, and maybe it even moderated people a little bit,
because, you know, there was a common set of facts.
So I'm curious what you think about that.
It does seem like we're definitely moving much more toward the radical
on both ends of the political spectrum than we had before.
Well, this gets at one of the challenges with social media,
which is its rise and the Internet's rise before it coincides with a trend
towards increasing polarization in America.
And it's very hard to separate those two.
So a key concern that I raised in that New Yorker article, and that some of my colleagues
raised is the difficulty of separating those trends, especially when social media is
reflecting what's happening in society in such a direct way, in a more direct way even
than, of course, television, right? So when we're in a society that's polarized, we will
see that polarization reflected back to us on social media. So what we have to think about
is, is social media even more polarized than the discourse that's taking place in other
media or than the polarization that's taking place among the public?
I think it's certainly the case that many people opt out of participating in political discourse
online because of the vehemence of folks who have strong views who often tend to be at one
extreme or the other. I think the evidence for that is strong. There's all kinds of political
discourse that may have that kind of feature online. I wrote a paper with some co-authors where we
looked at uncivil comments and how those could drive people out of online conversations, right?
So there are lots of reasons to think that there may be people who are systematically selecting out of online political discourse.
Again, though, we'd want to think really carefully about what the comparison point is because even though that's true, it's also true that many more people have a voice in the political conversation than was ever true before.
We have to hold that idea in our head at the same time.
And in particular, there were groups that were systematically excluded from the political
conversation who now have a voice that's registering in a more direct way because of the
affordances of social media; their voices are allowed to be heard both individually
and collectively in ways that weren't previously true. So again, you'd have to think about
the extent to which you net those two factors out. I think it's totally fine to say the way
extreme political viewpoints are incentivized in certain social media spaces, right,
may be bad. The incentive may be for people with more moderate or
nuanced views to get crowded out. They don't get amplified. They don't get positive feedback the same
way. But we have to weigh that against all the other points I've made, including the greater
openness of this political space compared to some of the ones that have come before it. And I'm
not willing to say that just means it's dumber. And I'll even go further
and say, while it may underrepresent folks in the middle, one
of the reasons those viewpoints aren't especially common is that the true centrist people think
is lurking out there is, political scientists have found, actually quite rare. There aren't that
because there aren't really that many of them. There was always this notion that, you know,
if only someone like Michael Bloomberg ran for president, this moderate constituency would suddenly
emerge. But political science has found that consistent moderates are actually quite rare.
A lot of people we think of as moderate actually have a mix of very liberal and very conservative views.
So in the context of any political debate, you'll hear a lot more of those than the people falling right in the middle.
Right. Okay. One last stat to react to, then we can move on from this topic. It's from the Hidden Tribes study, which was cited in Haidt's article.
So let's see. It surveyed 8,000 Americans in 2017 and 2018 and identified seven groups. Okay. So the group furthest to the right, I'm just reading from Jonathan's story, known as the
devoted conservatives, comprised 6% of the U.S. population. And the group furthest to the left,
the progressive activists, comprised 8% of the population. So we're talking about 14% of the population in total.
The progressive activists were by far the most prolific group on social media.
70% had shared political content over the previous year. And the devoted conservatives followed
at 56%. So there are some numbers showing the people on the extremes actually
dominating the conversation. What do you think about that?
Yeah. No, I think it's very real. And I guess what I would say is maybe part of what's happening is people are observing the inequality in political participation. And again, we need some sort of baseline. You know, letters to the editor in a prior era probably reflected similar imbalances. Campaign donations in prior eras reflected similar imbalances. That's not to say that nothing has changed, but we'd want to think about the extent to which the overrepresentation of
the extremes you're describing is greater or less than in other kinds of political speech and other
kinds of political participation, right? So if you went to, you know, a town meeting, right,
who shows up, right? It's probably going to overrepresent those folks with extreme views
in that particular context, right? And so on. So in some ways, what social media actually is
doing is exposing us to political conversation that we're not actually exposed to very much,
right? And lots of people don't like that. But, you know,
again, we've had a, you know, a polarized political conversation in many ways.
There's now a certain way in which it's more visible.
Now, again, there are structural features of social media that are going to overrepresent certain voices.
And it's worth reflecting on those.
I certainly think it's valuable to think about the structure of social media platforms and how they work and who gets amplified on them.
And that's something where I've been a strong advocate of greater transparency, so that we can
scrutinize what is being amplified on social media platforms and hold those platforms accountable
for the consequences of their policies. I think that's really important. And actually,
one of the reasons this conversation is often impoverished when it comes to data is precisely because the platforms aren't sufficiently
transparent; there isn't enough information out there for us to really carefully adjudicate
what's being seen on these platforms and who's seeing that content.
If we had that kind of transparency,
it'd be easier to have a conversation like the one we're having
in a more specific way, and to think more carefully about
the particular features of those platforms
and how they're influencing what people see and what gets amplified.
And if anything, it seems like those platforms
are actually becoming less transparent.
I remember when CrowdTangle,
which is this great tool that lets you see what news is spreading on Facebook,
was acquired by Facebook.
I wrote a story; we broke the news.
It was actually the day that Trump got elected,
which was kind of a wild thing.
And the story basically said:
look, the thing is that Facebook isn't going to really support this.
And lo and behold,
they didn't support it.
And now they're on their way
to shutting it down, which is an issue.
Yeah.
No, I mean, I think the loss of CrowdTangle is distressing.
Now, the company says it's going to continue the work in a different form, but Brandon Silverman, the co-founder of CrowdTangle, has spoken up in an important way in favor of platform transparency, so that we're not dependent on company policy decisions and budget and resource allocation choices, but can instead think about mandatory transparency, because what these platforms offer is often so limited.
Twitter is often used in research not just because we academics use it, but because its API is more open than a lot of the other platforms'.
But for instance, YouTube, which we haven't talked about yet, is a huge source of news and information for people.
My co-authors and I recently released a study of YouTube.
It's just shocking how little research there is on YouTube compared to Twitter and Facebook.
So we need all of these platforms to be more transparent.
And then I think this conversation could really move forward in an effective way.
Yeah, definitely got to ask you some YouTube questions before we're out today.
And also, Brandon Silverman, founder of CrowdTangle, who sold it to Facebook:
If you're listening, here's another request for you to come on the show.
I've asked multiple times.
Maybe we'll make some progress at a certain point in the next decade on that one.
So let's talk a little bit about echo chambers, one of the most surprising things to me.
You had this great quote in the beginning of the New Yorker story that said basically the takes are so off.
And there's one take that's been percolating for a long time, this idea of a filter bubble:
that you are basically put in a bubble of like-minded peers when you're on social media,
and you only hear stuff that reinforces what you believe.
But in actuality, that's not really the case.
And even Haidt admits that the echo chamber problem is a little bit less of an issue
than when he started researching it.
So can you share your perspective on this echo chamber idea on social media?
And if it's not as bad as people think, what's
the real story there? This is one of those areas where the takes have outrun the data.
The notion of an echo chamber or a filter bubble is one that was worrisome to people.
We didn't quite know what digital technology was going to do to news and information consumption.
People worried about the possibility of an echo chamber or a filter bubble. But pretty quickly,
those concerns about potential risks turned into claims that most Americans were in
echo chambers or filter bubbles themselves. You know, the terms started to be
used quite loosely to describe the typical person's experience,
as if this was something that was happening to the typical person. There's just no evidence
for that. In fact, the empirical evidence shows again and again and again that the average
American has a relatively balanced information diet.
We use digital behavior data that lets us see what information people actually
encounter, not just asking them, where do you get your news,
but seeing what information they actually consume on their digital devices. We see again
and again that the typical Democrat and the typical Republican and the typical American overall
have relatively balanced information diets
and don't pay very close attention to politics.
So there's, ironically, a kind of echo chamber around this idea of an echo
chamber itself. It is true, when you drill into the behavior data, that there are
relatively small minorities of Americans who have more skewed information diets. And in those
smaller groups, we sometimes see quite unusual skews. So many of the kinds of worrisome,
untrustworthy, potentially harmful content that's out there, those are being heavily consumed by
these relatively small groups of people who have more skewed information diets. But it's not the
typical person. It's not the swing voter. Those folks, in general, have information diets that are
relatively more balanced. So does that mean they're perfect? Does that mean they're always accurate or
anything else? No. But the kinds of dystopian fears that have been expressed for many
years now aren't well supported by the data.
And outside of the U.S., social media algorithms can
actually have a pretty positive impact in terms of introducing people to different perspectives. For
instance, in Bosnia, there was a case of different
ethnic groups being, you know, sort of portrayed in a different fashion than you get in the media,
and that could only happen, you know, because it was disseminated through social media.
Yeah, that's right. I think part of the problem here is that people's notions about
how algorithms work and how people get their news are both too simplistic. So first, about algorithms:
it is true, of course, that algorithms are trying to get you to spend more time on platforms.
That is often part of what's being optimized.
But as these platforms have developed, what the algorithms do and how they prioritize ranking
content in the kinds of news feeds that they all use now has become much more sophisticated,
to the point that, for instance, one study conducted internally within Facebook that came out
in the Facebook papers actually found that reverse chronological feeds were showing more
dubious content to people than the algorithmic feed, which is precisely the opposite of people's
intuitions. In many cases, the algorithms are now heavily tuned to try to suppress the worst
kinds of content. So when you're thinking about why might people be in echo chambers and filter
bubbles, again, people had this notion that, well, the algorithms are only trying to show people
what will generate engagement, and that will select for the worst content. And the story is just
much more complicated than that. Similarly, when it comes to how people choose news:
it is true that if you ask people what they would prefer to read, they will tend to select the news that's
congenial to their point of view. But we encounter news in lots of different ways and lots of
different contexts. And when you aggregate over all the ways we encounter news and all the different
kinds of interests we have and who shares information with us and so on, the story ends up looking
a lot less skewed for the typical person than people might fear. It's a very unusual person
who is mainlining only news that's congenial to them. We all know
someone like that. But they're relatively unusual in the scope of the population, and thinking of them
as the typical American or the typical news consumer is a mistake. So let's talk about that atypical person,
because that's an important part of the counterpoint here. And it's a point that Haidt brought up
last week, which is that, yes, if most people have more balanced information diets, fine, the thing that
you worry about is the people on the extremes then getting further radicalized by, as you said,
mainlining the news that's only congenial to them and continuing to reinforce their extreme
beliefs. What do you think about that? Oh, I share that view, and I've made that
exact point. That's not the point in dispute in this larger conversation. In fact, if we could
reorient the conversation to think about the extremes, I think it would be a much smarter
conversation. One of Facebook's former executives has talked about how the company should be optimizing
for the 99th percentile in its measurement of exposure to various kinds of harmful
content, not the 50th, right? In other words, the fact that the
algorithm is driving exposure down to very low levels for the typical Facebook user is not enough,
because we have to think about the 99th percentile: for those people,
how much of the worst content are they getting?
And the same on YouTube and the same on TikTok and whatever other platform you want to think
about.
That's a fundamentally different concern than the idea that the average person is getting
tons of so-called fake news, or being sent down rabbit holes on YouTube,
et cetera.
Now we're thinking instead about people who already have strong views,
who already have extremist predispositions, potentially, and those being reinforced on
YouTube, on Facebook, on Twitter, online in general, you know. That's more consistent with the
kinds of data we've seen, right? So in my research with my co-authors, we found, you know,
that 20% of folks with the most conservative information diets overall were consuming almost
60% of the untrustworthy news website content in the period before the 2016 election.
Similarly, for some of the potentially harmful content on YouTube, we found around 80% of the consumption was coming from 10% of the population, right? So we're really talking about a small group, and those folks, and we can talk about this more at whatever point you like, had very high levels of gender and racial resentment already. So in other words, we find no evidence consistent with the story that people just went on YouTube and suddenly became radicalized.
Instead, we worry about people who already have these kinds of strong predispositions being
served content that caters to them and potentially radicalizes them further.
And that's, again, a really different kind of conversation than the prior one we've been
having.
Right.
And I think that point about optimizing for that 99th percentile of users is really important.
Oftentimes, you'll hear from the platforms: oh, well, it was very small, only 0.02% experienced this.
And you're like, oh, okay, only 0.02% of the content on your platform was beheadings.
Then you just multiply that times, you know, three and a half billion.
You're like, oh, shit, that's a lot of beheadings.
That's right.
Well, and let me just add to that, right?
The first point is that at scale, even a small percentage is a large number.
We should absolutely acknowledge that point you just made.
It's incredibly important.
The other is that certain kinds of negative consequences can be quite significant in the real world,
even if the total number of people affected is small, right?
So there's no convincing evidence that untrustworthy websites swung the 2016 election, right?
It takes a lot of votes to change the outcome of an election, even a relatively close one,
like 2016.
But, you know, January 6th reminds us that just a few thousand people, right, can threaten the lives of
high-ranking government officials and potentially destabilize a peaceful transfer of power.
That was not many people at all.
But again, that's a different kind of threat model and a different kind of social concern
than the prior stories about the Internet making us more polarized and so forth.
And I think we need to think about these kinds of small groups being radicalized,
and the content that's catering to them being pumped to them more and more,
as an important part of what we hold platforms accountable for, and not just this kind of typical
American, right? Because again, let me just hold on this point,
because it's really important. The conversation we typically have is exactly what you just
described. Someone says there is bad content on the platform. The platform replies, well,
it's only a small percentage of people. And then the conversation ends,
and no one updates. The public doesn't realize how few people are actually being exposed to this
stuff, but also the platform isn't being held accountable for the potential negative consequences
among that small percentage of people, which as we've just talked about, can be
quite substantial and quite harmful. And we have to get past that kind of conversation
because we've been having it over and over since 2016. And that one is an especially
dumb conversation. We can do better. And it's disturbing to me that six years on
into this social media panic,
we're still having the same kinds of crude conversations
that we had in the immediate aftermath of that election.
We know so much more,
and these conversations can be so much more nuanced.
That transition has to start with the recognition
of how these dynamics work on social media,
which, again, I'm just going to underline,
is very different than the notions people have typically relied on.
Exactly.
And that's why we're here.
So appreciate you
being here with us. Brendan Nyhan is here with us. He's a professor at Dartmouth College,
the presidential professor at the Department of Government. We've talked a lot about
echo chambers, the discussion about whether social media is bad. I still want to talk about
rabbit holes. I still want to talk about YouTube. Why don't we do that right after the break?
Hey, everyone. Let me tell you about The Hustle Daily Show, a podcast filled with business, tech news,
and original stories to keep you in the loop on what's trending. More than two million
professionals read The Hustle's daily email for its irreverent and informative takes on business and
tech news. Now, they have a daily podcast called The Hustle Daily Show, where their team of writers
break down the biggest business headlines in 15 minutes or less and explain why you should care
about them. So, search for The Hustle Daily Show in your favorite podcast app, like the one you're
using right now. And we're back here with Brendan Nyhan on Big Technology Podcast. He's the
presidential professor in the Department of Government at Dartmouth College. We've been talking a little bit
about whether filter bubbles exist
and how we can have a smarter conversation about social media.
I want to go into this idea of rabbit holes,
which I think you alluded to in the beginning.
And now we're going to talk about YouTube stuff.
There's a belief that, you know,
you get a video in the recommendations on YouTube.
You're curious, you click on it.
Next thing you know, you're a member of ISIS.
What does the research tell us about what's actually happening?
So there are a lot of anecdotes of that kind.
And unfortunately, we don't know very much scientifically about how that process worked prior to 2019, when Google
reported making substantial changes to how YouTube's algorithm worked. I can't speak to how YouTube
recommendations work prior to 2019. But in the current manifestation, so in other words, in the
period after those major changes in 2019, my research can speak to that. My research with my co-authors
looks at data from fall 2020, collected after these changes took place. And what we found was
that, at least after those changes had been made, recommendations to potentially harmful content,
which we define as content from channels classified by journalists or subject matter experts as either alternative,
in other words, in a kind of middle ground between the mainstream and the fringe, or fully
extremist, were extremely rare for people who were not already watching videos from channels of that type.
So there was very little evidence, in other words, of the rabbit hole dynamic you described,
where people are watching an innocuous video on topic X and quickly following recommendations
down a sequence that leads
them to some sort of terrible content that we might worry about YouTube amplifying. Does that mean
that kind of content doesn't exist on YouTube? No. And the platform should absolutely be held
accountable for the kinds of content they continue to platform. We still see many channels
that these journalists and subject matter experts have even put into the extremist category
remain on the platform, and people are getting recommendations to them. But it's usually people
who are already, again, subscribing to a channel of that type;
often they're getting recommendations to channels they already subscribe to.
So, in other words, the threat model that we identify is very consistent with the discussion
we had before the break.
It's much more about people who have relatively extreme predispositions, seeking out content
that's consistent with those predispositions, and finding it on YouTube.
And so we often see people, for instance, coming in from external links.
Alternative social media platforms play a disproportionate role in leading people to these problematic videos, for obvious reasons:
they're less moderated, and they often have more extremist folks on them, right?
So that's consistent with a kind of behavior where, instead of people being led down an algorithmic rabbit hole, they're choosing to follow information of interest to them.
And because they have these extreme predispositions, they're following that link to the dubious video,
or even finding that channel, subscribing to it, and continuing to watch it in the future.
I want to ask you some more questions about how the platforms have acted here.
But before we do, I feel like these conversations need to address this point, which is: there is a
perspective out there that the people who are talking about the need for platforms
to moderate more or, you know, be careful about the content they have on there,
the people who want the platforms to crack down
on this stuff, are babies who can't handle free speech.
You're an academic. You've watched this stuff, and you're not a political actor. So what's your
thought on this debate? Should platforms have unfettered free speech, or
is some moderation necessary? I think there's no way to get around the need for moderation.
One of the things that happened when Elon Musk was offering various pronouncements about how
Twitter could easily be fixed is that people who'd worked in social media laughed hysterically at how
naive his notions about free speech on platforms were. First, it's important to clarify for folks
that free speech is a confusing term. The First Amendment refers to what governments can do,
not to private companies. There is no obligation for private companies to amplify whatever
speech is put onto their platforms. And these questions end up being really tricky in practice.
I'm of two minds about it. Let me just kind of draw out the tension here. On the one hand, I'm very
nervous about people who think Facebook should take a very, very heavy-handed role in getting
rid of misinformation because I don't think we should be putting so much power over political
speech in the hands of one private company and ultimately one person. I think we should be very
uncomfortable with that notion. Actually, I would go even further
and say we should be very uncomfortable with the platform companies de-platforming elected officials
in particular, whom citizens should be able to hear from. It's healthy in a democracy to know
what your elected officials are saying and doing. So I think that kind of step should be
an extreme last resort. Inciting a violent insurrection to overturn a presidential election
meets that criterion, but I think that's the level it would have to get to before I'd be
confident that it's the right choice. On the other hand, what YouTube is doing is providing an implicit
subsidy: free video hosting, subscriptions, and access to a vast audience. No one has any right to
that, let alone people pushing hateful content. I don't see why David Duke has any right to be
on YouTube, to get free video hosting from YouTube, to have YouTube help him build a
subscriber base. Now, they eventually kicked him off. But it's not obvious to me why that
would be a compelling argument. And, you know, I'm a political scientist. I think it's healthy to have
political disagreement. I think it's healthy to have lots of different kinds of views. And I'm
nervous about people who think it's easy to cleanse the marketplace of ideas in some simple-minded
way. But at the same time, it's pretty obvious what was happening on YouTube.
Merchants of hate were seeking the platform out, using it to build an audience of people who
shared their hateful views and monetizing it. Why should YouTube be in the business of
subsidizing that? What obligation, what civic benefit, do we get from that? Does that mean it's
always an easy call to decide who gets kicked off the platform? No. Those choices are very,
very difficult. But anyone who thinks that those aren't hard choices has not thought about this
carefully. Right. And I've mentioned this in the past, but I think
there's a bit of nuance that gets lost here too.
If you're just a forum hosting speech, that's one thing.
But the moment you build, you know, recommendation algorithms that bring that speech
to larger audiences, you become an editor, and you have more responsibility for what
you push out there.
So let's talk a little bit about some of the measures that the platforms are taking.
You know, I saw a really interesting comment from you that a lot of this discussion
is stuck, you know, maybe five or six years ago, before the platforms realized they had
problems and actually tweaked some of their recommendation algorithms and actually made their
platforms better. We talked about this YouTube algorithm change. Also, Facebook, for instance,
has not only put restrictions on political advertising, which I think makes sense,
but has also deprioritized political and news content in the News Feed
in a big way. So I'm curious what you think. What are the underappreciated
changes that the platforms have made recently?
That's an interesting question.
Well, one thing it's important to do in these conversations is to think about the dogs that haven't barked.
And I'll give you an example.
For all the problems associated with COVID-19 misinformation,
which I do think during a pandemic met the threshold for more aggressive platform intervention,
I don't think the platforms were conduits for that kind of misinformation
at the levels they could have been in a prior period.
Does that mean I think there was no misinformation on the platforms?
Of course not.
Of course there was.
But they were quite aggressive in taking down content that had potentially, you know,
life-threatening consequences for people, that would mislead them about the dangers of COVID-19
or the safety and efficacy of the vaccines.
I think, you know, because that worked so well, it became a relatively
minor point. I think one thing that drives platforms nuts is they only get blamed;
they never get credited. And that's a case where there were certain kinds of interventions that worked.
Yeah. And by the way, when you have the president of the United States who's saying don't wear a mask,
I mean, at a certain point, you've got to stop saying this is the platform's fault.
You know, the leader with the authority to lead the country during a pandemic says don't wear a mask.
It's like, is that Facebook or is that the people that we put into office?
Very much so. This gets back to the point we started with in the beginning about what's happening in society being reflected on social media. If you think about where people are being exposed to potentially harmful information, it's often political elites, TV news, et cetera. So take the most salient case besides COVID-19 in our everyday life right now, which is misinformation about the legitimacy of the 2020 election and the legitimacy of our electoral system in general, right? That overwhelmingly
came from political elites.
Some of it was transmitted on social media.
Social media played a role in some cases.
But if you had to understand where that came from and why it had such influence, it's
a story that begins and ends with Donald Trump.
It's just not about social media.
So it's really important to underline that point.
Facebook can't fix that problem, nor can any other technology company.
They took extraordinary steps, as we just discussed, in removing
Donald Trump from their platforms after
he incited a violent insurrection. But
what he was putting out there every day as the President of the United States,
the most newsworthy person in the world,
is just going to show up,
and it's going to be transmitted. I mean, you know,
there's lots of good things we can point to.
You know, there are aspects of
what's happening on the platforms that I'm
encouraged about, you know,
the Facebook third-party fact-checking
partnership. Twitter is
testing a new crowdsourced fact-checking model called Birdwatch.
YouTube just announced a new research access program, which maybe can help to start to build
better transparency there.
Twitter has lifted some of the restrictions on academic research data access that existed
in the past.
So there are small victories taking place out there.
It's, I think, a more nuanced conversation than, you know, what's often tossed around on social media, going back to where we started.
And I'm, you know, of a mixed view.
I think we've made progress.
But I'll just say, the business-model pressure the platforms are going to be under in the current economic conditions will really test their commitment to these kinds of efforts, because they draw resources out of the bottom line.
What the platforms were willing to invest when ad revenue was going through the roof
and what they're willing to invest when things get lean, right, those may be very
different. It's going to be very interesting to watch Facebook over the next couple of years
as that business struggles and they, you know, are in this messy middle of their pivot to the
metaverse. But why don't we end on the fake news discussion? Because another thing that I was
really struck by was this stat. It's from, I think, one of the academics who's listed in the Google
Doc, which we talked about last week, Professor Bail. He says only 2% of Twitter users routinely
see fake news. I mean, again, you know, we're talking about scale, so that's a large number,
but it's smaller than some of this discussion would have you believe. What do you think about that?
Yeah, it's consistent with what we found when we looked at web traffic data:
exposure to untrustworthy websites was concentrated among a small number of people.
And what I would say then is we have to think about how and why it might matter.
The fact that a small number of people are consuming it doesn't mean there's no concern at all.
In some cases, the misinformation those folks are consuming and amplifying is then reaching political elites, who are in turn sharing it.
We've seen a kind of pipeline being developed, from the internet fever swamps to
Congress and Fox News and then back out to the world.
So there are ways in which even these kinds of fringy information spaces can have larger consequences.
It's also the case that these highly politically engaged, heavy news consumers with skewed information
diets may have disproportionate influence in other kinds of ways. They may be more likely to talk
about politics with other folks in their social circles. They may be more likely to donate. They
may be more influential in the party base in terms of who wins primaries. There are lots of
ways in which what those folks do and what they see might matter. But again,
it's a different story than, well, just everyone's becoming more polarized or people are changing
who they're going to vote for. If you think about the kinds of people who actually were
consuming tons and tons of anti-Hillary fake news in 2016, those people were not swing voters.
They were not on the fence about whether to vote for Donald Trump or not, right? But we can imagine
other kinds of ways in which exposure to that kind of content might have consequences.
Now, many of those are still largely hypothetical.
These are very hard groups to study, in part because they're pretty small percentages
of the population.
So many of the notions we've been discussing are more conjecture than scientific fact.
But it underscores the importance of drilling down on those folks and learning more about
who they are, what they see, and what effects that information exposure has.
If you could shift the discussion about social media that we hear so often,
and I think you're very astute in pointing out that there's, ironically, an echo chamber about echo chambers, et cetera,
what are some nuances that you think people need to appreciate?
I guess we've talked about it throughout this conversation.
But if you could pick one or two bits of nuance to inject into these discussions, what would they be?
I think we talked about many of the key points.
I guess I would suggest that we're very bad as human beings,
and I'll include myself in this, I think this applies to all of us,
at fully taking into account the extent to which our experiences are unrepresentative. So whatever is happening
in our world seems universal. We're all the star of our own story in a way that I think
profoundly skews our understanding. And that very much applies on social media, which can create
this feeling of cacophony, of being surrounded by particular kinds of pathological discourses,
if you're in certain spaces online.
And it's just easy to forget how few people are actually having those experiences.
How few people are even reading about politics at all?
Most people don't follow politics at all.
My field, political science, has spent decades showing how little people know about politics or care about politics.
Even when they're quite polarized, right, even when they have relatively strong views, if you say,
which party do you support or, you know, which candidate do you support,
by the time we're getting to November of a presidential election year, people have strong views.
But in their everyday lives, they're not thinking about politics very often. They're not paying
very close attention to it. And if you listen to this podcast, right, you're probably extremely
unusual in your level of interest in news. And that probably means
you're having a very unrepresentative experience. And I guess I just want to encourage people to think
in that more nuanced way about how whatever they're observing is a problem that affects people
like them much more than the average person who doesn't have those kinds of extreme
information preferences. There's nothing wrong with that. Of course, I'm the same way, right? I follow
the news at a pathological level, but I try to remind myself how weird it is. I tell my students,
even to be in a political science class with me, I say, look, you're all weird. And I mean that in the
best possible way, but you're in college taking a political science class because you're so
interested in learning more about politics. And that makes you strange. And that, to me, is the kind of
underlying challenge that we face with social media: separating our own experience from the one
the data tells us is more typical, and separating what's happening in the world that's being
reflected on social media from what the platform is actually doing to change the world. And if we can
make those two leaps inferentially, we can get a lot closer to the truth. And that's
where I'm hoping we can get. Professor Brendan Nyhan, thank you so much for joining. This was
really fun, really interesting. My pleasure. Thank you for having me. Thanks for being here.
Thank you, everybody for listening. Thank you, Nick Guatney, for mastering the audio and doing
the edits. Thank you to LinkedIn for having me as part of your podcast network.
And thanks again to all of you, the listeners. If this is your first or second time here
and you haven't subscribed yet, please hit subscribe. If you've been a long-time listener and
want to rate the podcast, that would go a long way toward helping us get
more visible out there on the platforms, help us, you know, trick those social media algorithms
into getting us, you know, some more reach, and keep doing what we do. So I appreciate a rating.
That will do it for us this week on the show. And we hope to see you next time on Big Technology Podcast.