The Daily - A Misinformation Test for Social Media

Episode Date: October 21, 2020

Facebook, Twitter and YouTube have invested a significant amount of time and money trying to avoid the mistakes made during the 2016 election. A test of those new policies came last week, when The New York Post published a story that contained supposedly incriminating documents and pictures taken from the laptop of Hunter Biden. The provenance and authenticity of that information is still in question, and Joe Biden’s campaign has rejected the assertions. We speak to Kevin Roose, a technology columnist for The Times, about how the episode reveals the tension between fighting misinformation and protecting free speech.

Guest: Kevin Roose, a technology columnist for The New York Times.

For more information on today’s episode, visit nytimes.com/thedaily

Background reading: Here’s Kevin’s full report on the efforts by Twitter and Facebook to limit the spread of the Hunter Biden story. The New York Post published the piece despite doubts within the paper’s newsroom — some reporters withheld their bylines and questioned the credibility of the article. Joe Biden’s campaign has rejected the assertions made in the story.

Transcript
Starting point is 00:00:00 From The New York Times, I'm Michael Barbaro. This is The Daily. Today, the nation's biggest social media companies are determined to avoid the mistakes that they made during the 2016 election. But in the process, they've ignited a different kind of firestorm. My colleague, Kevin Roose, reports from San Francisco.
Starting point is 00:00:29 It's Wednesday, October 21st. You know, Kevin, actually, it's been ages since we had you on the show. Ages? Yeah. And I wonder whose fault that is. You don't call, you don't write. I thought we had something. No, I think that you didn't intersect with the news.
Starting point is 00:00:54 That's true. But here you are, intersecting. There's been a lot going on. So, Kevin, my sense is that the big social media companies, which you have been covering for a really long time, have been preparing very diligently, very carefully, very expensively for the 2020 election and for the possibility of a major moment of misinformation, for some kind of active interference, given how much they failed to do that back in 2016. So I wonder if you can summarize what the preparations that they have been doing have looked like. So since 2016, these three big social media companies, Facebook, Twitter, YouTube, they've spent tons of time and money investing in trying to keep foreign interference from
Starting point is 00:01:41 happening on their platform again. There have been new policies, new teams. They've hired tons of new people and moderators. And basically their goal is to just avoid being played again, to avoid the kind of foreign interference attempt that they were seen as having allowed on their platforms in 2016. Right. And my sense is that beyond just being alert to this, they want to be more responsive, right? They want to be less hands-off.
Starting point is 00:02:11 So if something happens, the goal is not just to notice it, but to actually do something. Is that right? Exactly. They want to act in a way that is consistent with their policies, but also that is fast, that gets to these problems before they spiral out of control and become huge election interference issues. It's not just what they do, it's also how quickly they're able to do it. So I guess in a sense, in the weeks leading up to the election, they are kind of just waiting for the first big test case for these systems.
Starting point is 00:02:47 Yeah, but the kind of fear that they've had is this kind of October surprise, this introduction of something new, some attempt to kind of steer the narrative about the election at the last possible moment. And that came for them last week in the form of this New York Post article that appeared on Wednesday morning. And in broad strokes, what was this? So at a sort of broad level, what the Post publishes is a story alleging that there's this laptop that was used by Hunter Biden, the son of Joe Biden, presidential candidate, that made its way to investigators and ultimately to Rudy Giuliani, the president's lawyer,
Starting point is 00:03:31 and that when they inspected this computer, it had emails on it that the Post described as being incriminating. Now, there are still lots of questions about these emails. We have not been able to verify their authenticity. Now, there are still lots of questions about these emails. We have not been able to verify their authenticity. We don't know who actually got access to them and how they made their way up to Rudy Giuliani. There are still so many questions about the provenance of these materials and whether or not they're real. But that's just the sort of nutshell version of what The Post publishes on Wednesday morning. shell version of what The Post publishes on Wednesday morning.
Starting point is 00:04:11 And so what are the big social media companies, the three you identified, Twitter, Facebook, YouTube, what are they thinking in their very well-prepared offices about this story? Well, I've spoken to some people at the companies, and basically what they were thinking is, here we go again. To them, it seemed to have a lot of the same hallmarks as the 2016 hack and leak campaign that Russia carried out, where they hacked into email inboxes belonging to prominent Democrats and released those emails in a coordinated fashion to try to steer the discussion around that year's election. And so the way that these platforms think about threats like this, they basically have three options for what to do next. They can do nothing. They can let this story run its course and trust that, you know, people will work out the facts and that the systems will work as designed. They can step in really aggressively and essentially ban the story from their platforms and say, we're going to take this down. We're not going to put any links to it. You're not going to
Starting point is 00:05:10 be allowed to link to it. And we're going to lock your account if you do link to it. That's sort of the nuclear option. And then there's all these decisions that they can make in the middle of that between doing nothing and taking it down. And those would include things like putting a label on it or putting a little fact check underneath it or reducing its distribution through their algorithms rather than banning it outright. And these platforms know that they have to do something quickly and that whatever decision they make, the longer they wait, the harder it's going to get to restrain or to reel in this narrative that is already starting to go viral. So how did these companies react in that very short period of time? What did each of them do? So the initial reaction from these platforms is a little bit all over the place. So YouTube basically does nothing. It says, given what we know about it right now, this story is allowed on YouTube and we'll continue to evaluate it.
Starting point is 00:06:10 Next, we have Facebook. And they basically come out several hours after this story is posted. And they say that they are going to demote the story, basically slow its spread in their algorithms until it can be evaluated by third-party fact checkers. So they're basically saying, we're going to pump the brakes on this story until the people that we trust to determine whether or not these things are true or not can look at it.
Starting point is 00:06:38 So they don't block it. They're just kind of throwing a sheet over it. They're not blocking it. They're just kind of putting it on ice for the moment. And then you have Twitter, which makes the most aggressive call in the early hours after this story is posted. And they say that we are not even going to
Starting point is 00:06:55 allow people to link to this story because this violates our rules against sharing private information because there were some sort of private information contained in these emails that the Post had excerpted,
Starting point is 00:07:08 and that also it was a violation of their hacked materials policy. So they were basically treating this as if it were a hacker sharing some passwords that they had gotten from some data dump. So Twitter is taking the kind of nuclear option, as you described it. some data dump. So Twitter is taking the kind of nuclear option, as you described it. Not only is Twitter banning people from linking to this story, but it's locking the accounts of the people who do link to it, including some pretty prominent people like Kayleigh McEnany, the White House press secretary. And for Twitter, like, this is a pretty big enforcement action, and they're basically taking no chances with this.
Starting point is 00:07:48 Right, so they are choosing to do the exact thing that everyone said social media companies did not do in 2016 when information began circulating of dubious origin, for example, John Podesta's emails. They are just clipping its wings. They are making sure it cannot be shared. Right. And their theory on why they're doing this is that it's better for them to act too aggressively and let up later than to let something go and then try to catch up to it.
Starting point is 00:08:18 And so, Kevin, what is the reaction to this decision by these companies? What is the reaction to this decision by these companies? So among many people, including a lot of people on the left, frankly, there was relief that after years of criticism for not having done enough to stop interference in the 2016 election, that they were being attentive and proactive and doing something rather than nothing to avert a potential similar crisis this year. But then this is a dark moment. There were people, many on the right. This was mass censorship on a scale that America has never experienced, not in 245 years. It's both insidious and infuriating. Who were very offended by these decisions by the platforms. Make no mistake, Twitter, Facebook, they are not arbiters of truth.
Starting point is 00:09:11 They're all engaging in censorship, so you're kept in the dark. Cold, calculated political actors. Josh Hawley, the senator from Missouri, one of the sort of most vocal critics of these big tech platforms, came out. If Republicans don't stand up and do something about this, these companies are going to run this country. And said that this was the dawn of a dangerous new era in American history. Silencing the media is a direct violation of the principles of the First Amendment. Ted Cruz, a senator from Texas, also said that this was an affront to free speech. We're seeing Silicon Valley billionaires, frankly, drunk with power. So the entire sort of Republican establishment goes nuts over this.
Starting point is 00:10:03 They are calling for subpoenas against the leaders of Twitter and Facebook. The Senate Republicans need to ask Jack Dorsey, what is your policy so we can decide whether or not we're going to start regulating this? They are calling for legal protections for these platforms to be revoked. You're not a real platform. You're just another liberal editor. And they are essentially treating these companies as if they are themselves interfering in a U.S. election by acting to prevent a possible interference attempt. And let me be very clear. Twitter is interfering in this election. Big tech. They want to run America. We've got to stop them. We've got to do something right now.
Starting point is 00:10:41 We've got to stop them. We've got to do something right now. Hmm. They are interfering by trying to stop interference. It's complicated. Very, yeah. The whole thing is crazy. And, of course, finally, the president himself weighs in.
Starting point is 00:11:07 And it's like a third arm, maybe a first arm of the DNC. And attacks these companies, repeats his call for the repeal of these legal protections that these companies have. But it's going to all end up in a big lawsuit and there are things that can happen that are very severe. And basically says that they are trying to rig the election for Joe Biden by suppressing this story. So this does not go smoothly. No, it's kind of a damned if you do, damned if you don't situation for them, where they're criticized for not doing enough to protect against election interference. But then when they try to act to protect against potential election interference, they're criticized for that too. So all this criticism, all these questions surrounding the story
Starting point is 00:11:49 really leads them to try to figure out like, did we make the right call here? And essentially, Facebook and YouTube sort of stick by their decisions. Like YouTube doesn't take down content. Facebook continues to kind of limit this content without blocking it entirely. But Twitter starts actually reversing its original decision. They sort of land on this position of, we think that this article is being so widely discussed that we no longer think it makes sense to block links to it. But in the future, we will put labels on materials that might have been hacked. So we will essentially take a middle ground position
Starting point is 00:12:32 on stories like this in the future. And I think from the platform's point of view, this story and the issues it raised were actually a little more complicated than they originally thought. We'll be right back. So, Kevin, what made this New York Post story, as you just said, such a uniquely complicated situation for these big social media companies? Facebook, YouTube, Twitter would agree that foreign election interference is bad and that it's part of their job and something they're very committed to doing is stopping foreign interference attempts. But foreign election interference is not always very obvious.
Starting point is 00:13:43 So in 2016, you had trolls in St. Petersburg who were literally buying Facebook ads in rubles. But there are much more subtle ways that a government or an entity could try to influence a U.S. election. And one possibility is that they could use a hack and leak operation where they steal information and then distribute it. But instead of going directly out with it or through an organization like WikiLeaks, they could go through a major American news organization like the New York Post. And these platforms, they don't find it particularly hard
Starting point is 00:14:15 to take action against sort of cyber attacks and things that are, you know, pretty blatantly trying to manipulate their services. But in this case, it's more like trying to manipulate their services. But in this case, it's more like trying to figure out if the New York Post can be trusted or not, which is a very uncomfortable position for them to be in. Right. Because all of a sudden, a company like Twitter or Facebook would suddenly be in the position, perhaps throughout an election, of routinely deciding
Starting point is 00:14:45 be in the position, perhaps, throughout an election, of routinely deciding whose confidential sources are trustworthy and whose articles based on documents should be allowed to have their links shared or blocked. And that could become a pretty slippery slope pretty quickly. Yeah. And after Twitter made its decision to block links to the New York Post story, they heard from a lot of journalists, not just in the U.S., but around the world, who worried that this policy against not allowing links to hacked materials could endanger their ability to report on things involving confidential sources and whistleblowers. And that's part of the reason that they walked back their initial call. Right. I mean, at the end of the day, I think it's safe to say we don't want social media companies to be the gatekeepers of journalism. We want journalists to produce good journalism that applies careful standards and thoughtful judgments to what information gets out and where
Starting point is 00:15:40 it comes from. And I guess this is still an open question. The New York Post may not have done that in this case. There's still a lot we don't know, to be clear. I mean, our colleagues have done some reporting and there seem to be some sort of red flags in the process behind this particular story. Apparently the reporter who wrote most of the story didn't want his byline attached to it, things like that. Not a good sign.
Starting point is 00:16:04 Not a good sign. So very unorthodox process behind this story. But I think it does point to the fact that these platforms are reluctantly being asked to not just sort of keep their platforms from being manipulated, but in cases like this, to kind of referee journalism in a way that they're very uncomfortable doing and that arguably shouldn't be their job at all. But because of the nature of these hack and leak campaigns, they sometimes have to be. Kevin, listening to you, I'm wondering, are these social media companies and their policies on this kind of content, are they making journalists approach this with greater rigor and perhaps with the fear that they would publish something that would be blocked or deplatformed? And would that become an incentive
Starting point is 00:16:55 for everyone in the news media, who of course wants their content to be shared, to behave better? I don't think the platform's set out to improve journalism. I don't think the platform set out to improve journalism. I don't think that's one of their sort of goals here. But I think in this case, they may actually have done reporters a favor by coming out early and aggressively to say there is something fishy about this story and their decision to act on it it changed the tenor of some of the coverage that followed. Instead of just repeating what was in these alleged emails, the story became sort of about process and journalistic rigor and platform governance and all of these sort of meta topics that I think ultimately shed more light
Starting point is 00:17:45 on what happened than just going back through the emails and printing the most salacious parts. Right. And that's very different from what happened in 2016 when the emails stolen from the Democratic Party. And again, we don't know if any emails were stolen from anybody in this case. But back in 2016, those emails were made public and news organizations, including the New York Times, including me as a reporter, went through them and generated stories about them. And it wasn't about where they had come from. It was about what they had revealed about the characters in the emails. Right. And I think that points to one of the major shifts that has happened since 2016. it's not just the platforms that have been worried
Starting point is 00:18:27 about a repeat hack and leak operation and making plans for what they're going to do. It's also news organizations. At The Times, like, I and a group of other disinformation reporters and researchers put together a list of guidelines for how we would handle a hack and leak that resembled the one we saw in 2016. And is there anything you can say about that process?
Starting point is 00:18:49 Yeah, it's a five-step process, and we gave it a silly acronym. It's the email method. Of course. There's evidence, motive, activity, intent, and labels. And it's just the kind of thing that we are considering internally as sort of a checklist for the process that we go through when something like a hack and leak does emerge. Kevin, I'm curious if all the diligence from these news organizations like The Times and the crackdown by the big social media companies on this New York Post story, did it work? Did it, at the end of the day, limit the reach of this story whose origins we are still trying to figure out and which are suspicious? Yes and no.
Starting point is 00:19:38 I think just in pure statistical terms, the story still traveled very widely. It was among the highest performing articles on social media that day. It's been getting wall-to-wall coverage on Fox News and from other right-wing media outlets. So in that sense, if the goal was to reduce the visibility of the story, I think the answer is that no, it didn't stop this story from getting out or being discussed and may have, in fact, drawn more attention to it than it otherwise would have gotten. But I think the thing that did change is the kind of attention that was being paid to the story. And it represents, I think, a real break from 2016 when the story became all about the emails, all about John Podesta and the DNC and Hillary
Starting point is 00:20:28 Clinton, when we were late, frankly, to turn our attention to the bigger story behind that story, which was that Russia was trying to interfere in our election. Hmm. So a bit messily and with some high- reversals, with plenty of angry partisans, and with a story that was supposed to be losing steam that has definitely made its way around the internet, we are seeing some meaningful improvements in this social media system over 2016. in this social media system over 2016? Yeah, I think we are. And all of this is super messy and complicated. And there will no doubt continue to be mistakes and reversals and people claiming partisan bias and threats to free speech. All of that's going to continue.
Starting point is 00:21:22 And, you know, there are still two weeks until the election. So we don't know what could happen between now and then. But right now, I think the big picture is that we are all now much more aware of how we can be manipulated, whether we are executives at a social platform like Facebook, Twitter, or YouTube, whether it's us as journalists at the New York Times, or whether it's just voters, people who consume the news and are trying to make sense of what's happening, I think we are all now much more conscious than we were in 2016 of all of the ways that we could be manipulated or tricked or baited or taken advantage of. And I think that increased awareness, that consciousness of the risks that we face is a good thing,
Starting point is 00:22:09 no matter what happens with any one particular story. Thank you, Kevin. We appreciate it. Thanks for having me. We'll be right back. Here's what else you need to know today. On Tuesday, the U.S. Department of Justice sued Google for allegedly violating antitrust law, accusing it of maintaining an illegal monopoly
Starting point is 00:23:03 over search and search advertising. The lawsuit represents the most significant legal challenge to a tech company's market power in a generation. Google achieved some success in its early years and no one begrudges that. But as the antitrust complaint filed today explains, it has maintained its monopoly power through exclusionary practices that are harmful to competition. The lawsuit accuses Google and its parent company of using exclusive business contracts and agreements to lock out rivals. One such contract paid Apple
Starting point is 00:23:39 billions of dollars to make Google the default search engine for iPhones. As a result, the lawsuit said, both competition and innovation suffered. So the Justice Department has determined that an antitrust response is necessary to benefit consumers. If the government does not enforce the antitrust laws to enable competition, we could lose the next wave of innovation. If that happens, Americans may never get to see the next Google. In a statement, Google called the lawsuit deeply flawed and said that rather than helping consumers, it would hurt them.
Starting point is 00:24:28 That's it for The Daily. I'm Michael Barbaro. See you tomorrow.
