Consider This from NPR - Deplatforming: Not A First Amendment Issue, But Still A Tough Call For Big Tech
Episode Date: January 26, 2021

Removing disinformation — and users who spread it — can come at a cost for web hosts and social media platforms. But studies indicate "deplatforming" does stem the flow of disinformation. Kate Starbird with the University of Washington explains why it's easier to see the effects of deplatforming in the short term. And NPR's Shannon Bond looks at how one growing social media site is dealing with new attention and new challenges. Additional reporting in this episode from NPR's Bobby Allyn, who's reported on the removal of Parler by Amazon Web Services.

In participating regions, you'll also hear a local news segment that will help you make sense of what's going on in your community.

Email us at considerthis@npr.org.

Learn more about sponsor message choices: podcastchoices.com/adchoices

NPR Privacy Policy
Transcript
First things first, when a social media company removes posts by a specific user, or even
in the case of the former President Donald Trump, removes a user altogether, it is not
a First Amendment issue.
One big reason is because the First Amendment defines your rights against the government.
It doesn't define rights against a private company.
Daphne Keller, who spoke to NPR this past week,
is the platform regulation director at the Stanford Cyber Policy Center.
There have been more than 30 lawsuits in the U.S. by users saying they have a right to speak
on platforms and demanding that platforms reinstate them. And the platforms always win those cases.
Basically, the government can't silence your ability to say almost anything you want on a public street corner.
But a private company can silence your ability to say whatever you want on a platform they created.
Because the platforms themselves have their own First Amendment rights to set editorial policy and decide what to take down.
So, yeah, the legal debate is pretty cut and dried.
But the cultural debate about whether tech companies are engaging in something that could
be called censorship, that's another story.
You know, I think everyone who pays attention is worried about the idea that a very small
number of private companies exercise gatekeeper power over some of the most important forums for
public discussion these days. Consider this. Big tech companies increasingly are under pressure
to make big decisions about how to mitigate the spread of violent extremism on their platforms.
And experts say removing that content and the users who spread it can come at a cost, but it also seems to work.
From NPR, I'm Audie Cornish. It's Tuesday, January 26th.
This message comes from NPR sponsor, BetterHelp. BetterHelp offers licensed professional
counselors who specialize in issues such as isolation, depression, stress, anxiety, and more. Visit BetterHelp.com slash consider to learn more and get 10% off your first month.
We are still in the middle of this pandemic.
And right now, having science news you can trust from variants to vaccines is essential.
NPR Shortwave has your back.
About 10 minutes every weekday.
Listen and subscribe to Shortwave, the daily science podcast from NPR.
It's Consider This from NPR. In the week after Donald Trump incited a deadly riot in Washington,
D.C., Twitter banned more than 70,000 users, including the former president himself.
Trump and some of his supporters also lost accounts on YouTube, Snapchat, Twitch,
Spotify, Shopify, and Facebook. I think that this decision will certainly get a lot of attention.
Columbia Law School professor Jamal Greene is also co-chair of an oversight board created by Facebook.
They're charged with evaluating the company's decision to ban Trump. Facebook, of course, we should mention,
is one of NPR's financial supporters. They do not control any content decisions.
Now, as for that oversight board, it wasn't their call to ban Trump in the first place. That was all
Facebook. But the 20-member board will soon make a recommendation as to whether that ban should stay
in place. The company can take it or leave it. Same goes for other advice the board dispenses. So Facebook, as part of this case, has asked for
policy advice as to how their content standards should relate to political leaders. And this is
something that has been a challenge for the company and for other platforms in the past,
given that political leaders are very differently situated than ordinary
citizens.
And so that's going to be one of the things that we look into.
And yeah, social media companies have always been cautious about how they police content
from people with political power.
But it's not just social media companies doing the policing anymore.
There's guts of the web that no one ever wants to see or deal with or think about.
Greg Falco, a security researcher at Stanford, is talking about web hosting companies.
They control levers to vast online infrastructure,
and they have complete discretion to pull those levers as they see fit.
And that means they can decide which websites live or die.
The question becomes tricky of like, when do you actually take someone down?
It's a really gray territory.
The reality is, it comes down to understanding when it reaches some public attention,
when there's actually physical implications.
The most prominent example following the events of January 6th:
the decision by Amazon Web Services,
one of the biggest players in the web hosting world,
to stop hosting the
social media site Parler. Amazon flagged hundreds of posts from pro-Trump extremists calling for
violence before and after the storming of the U.S. Capitol. But that won't stop users from posting
that kind of content. Daphne Keller at Stanford, who you heard from earlier, says there's a risk Parler users could just go elsewhere.
That is a risk in particular as we drive hateful speakers off of mainstream platforms where other people can respond to them and disagree with them into smaller and more marginalized, you know,
echo chambers where they're going to hear only views that agree with theirs
or views that are more radicalizing. That is one of the costs. Parler itself is having no luck
finding a new home. The company approached six web hosting services after it was dropped by Amazon
and all said, no thanks.
Back to those bans put in place by big social media sites.
In the week following the president's widespread deplatforming on January 8th,
a company called Zignal Labs found that misinformation across a few big social sites,
including Facebook and Twitter, dropped 73 percent.
I think we just need to add some context to that. That's University of Washington professor Kate Starbird. She
researches misinformation on the internet. What Zignal Labs did was they took a measure
of misinformation that was essentially just looking at keywords related to claims of
election fraud. And they looked at one week compared to the week before. And a couple of
things happened that were different from one week to the next, and that the suspension of Donald
Trump's account probably made a difference. But it's hard to attribute all of that difference
to just that one suspension because 70,000 other accounts were taken out of the system.
Now, that doesn't mean she thinks deplatforming isn't effective in slowing the flow of
misinformation, at least in the short term.
We spoke about whether it can work in the long term as well.
I have a sense that it'll have short-term impact for sure.
What happens in the long term, I think, is something we don't yet know the answer to.
My expectation will be that if those suspensions stay in place and if that vacuum isn't filled by others spreading misinformation,
and if the platforms can do a better job of not letting those networks build themselves back in, that there will be a long-term benefit to the platforms that did the deplatforming.
I'm asking because we're also seeing those who promoted the violent uprising and who promoted
QAnon conspiracies flock to other sites, Gab, Telegram, MeWe. Is the solution
to deplatform these people kind of everywhere they go? Or is it whack-a-mole? I mean, is it
too big to get under control using this technique? Well, I think we're going to find that there are
other platforms that don't mind those kinds of conversations, and in fact, are designed for
those conversations. And if you consider sort of our values of freedom of speech and how those
things work, as long as they're within the law and those platforms want to support that kind of
speech, that'll be a choice they make. And perhaps we will see people that are deplatformed elsewhere
find these other platforms as a place where they can move to. But what that also does is it means that
the conversations that are happening on these larger, more popular platforms, where in the last
few years we've seen recruiting into these conversations, that recruiting won't be able
to happen because those conversations won't be happening there. If this is a turning point,
what are you going to be listening for that will give you the sense that it's a meaningful turning point? This is actually a really hard question because the research that we've been doing
historically has been focused on publicly available data. And what would happen if this
is a turning point is our research methods aren't going to be as useful anymore because the content
is going to go into other places. And so for me, it's actually to see,
you know, in our research community, are the folks that are studying these sort of long tail
platforms that are, you know, edgier, the alt tech platforms, are they the ones that are busiest
right now? And when that happens, we can see that a turning point happened. And what that means for
society, I don't think we know yet. You know, we're still going to have a struggle
with some of these technology-based toxicities,
but they're going to shift where they're at.
That's Kate Starbird at the University of Washington.
As we mentioned, new alternatives to Facebook and Twitter,
companies like Gab, Telegram, and MeWe
are gaining new users by the millions.
But those companies have less experience moderating content than their larger, more established competitors.
NPR tech correspondent Shannon Bond has been looking into how one of those platforms is responding to the new attention and the new challenges.
It's a social network called MeWe.
That's me and we, get it?
And in the past few weeks,
millions of people have signed up.
In 2020, we went from 6 million to 12 million,
and now we're already, it's the middle of January,
and we're already over 15 and a half million.
Mark Weinstein launched MeWe back in 2016 as an alternative to Facebook focused on privacy.
That means MeWe doesn't harness users' data to sell ads or decide what content to show them.
But privacy is not the only reason people are flocking to MeWe right now.
Along with other smaller social networks like Gab and messaging apps like Telegram,
it's become popular with Trump supporters who are disillusioned with
Facebook and Twitter. Cindy Otis tracks online disinformation at the Alethea Group.
People are splintering off into these more fringe platforms that essentially
have no content moderation or threat monitoring capability whatsoever.
When Facebook banned groups for spreading false claims about election fraud and organizing Stop the Steal rallies,
some sent their members to MeWe, Gab, and Parler, another alternative social app.
Parler recently went down after Amazon refused to host it because there was too much violent content.
Weinstein says MeWe is not Parler or Gab.
For one thing, he says he's serious about putting limits on what people can
say. I'm a firm believer in moderation. I don't like sites that are anything goes. I've been
quoted saying I think they're disgusting. Good people, right and left and middle, can't handle
anything goes. We don't want to be around hate speech. We don't want to be around violence
inciters. MeWe does have rules, but they're more lax than Facebook's and Twitter's.
The big platforms have banned the QAnon conspiracy, for example, a step MeWe has not taken.
In fact, Weinstein accuses Facebook and Twitter of political censorship, which the companies deny.
And I should note, Facebook is among NPR's financial supporters.
MeWe says it removes content and accounts that violate its policies. But journalists and researchers have found things like right-wing militias
and discussions of shooting people in a Stop the Steal group on MeWe.
And yes, you know, right now with the influx of people, look, social media is messy.
Some bad actors get in all over the place.
Look at Facebook, look at Twitter.
I think we're much more nimble than they are.
Weinstein is hiring more moderators for his trust and safety team, currently under 100 people. But experts say all social networks have
to get much more serious about addressing harm by setting clear rules and making sure they can
enforce them. Megan Squire of Elon University studies online extremists. I think we all still
treat social media companies like they're this inexpensive startup, but maybe they need to be treated more like starting an airplane company
or a company that makes cars. I mean, you got to think about a seatbelt. She says the risk of not
having strong online protections is clear. Just look at the insurrection at the Capitol.
That's NPR's Shannon Bond. And additional reporting in this episode came from
NPR business reporter Bobby Allyn. You're listening to Consider This from NPR. I'm Audie Cornish.