Your Undivided Attention - Mr. Harris Zooms to Washington

Episode Date: May 10, 2021

Back in January 2020, Tristan Harris went to Washington, D.C. to testify before the U.S. Congress on the harms of social media. A few weeks ago, he returned — virtually — for another hearing, Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds. He testified alongside Dr. Joan Donovan, Research Director at the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy, and the heads of policy from Facebook, YouTube and Twitter. The senators’ animated questioning demonstrated a deeper understanding of how these companies’ fundamental business models and design properties fuel hate and misinformation, and many of the lawmakers expressed a desire and willingness to take regulatory action. But there’s still room for a more focused conversation. “It’s not about whether they filter out bad content,” says Tristan, “but really whether the entire business model of capturing human performance is a good way to organize society.” In this episode, a follow-up to last year’s “Mr. Harris Goes to Washington,” Tristan and Aza Raskin debrief about what was different this time, and what work lies ahead to pave the way for effective policy.

Transcript
Starting point is 00:00:00 It's almost like listening to a hostage in a hostage video. Nothing they're saying kind of makes much sense until you realize there's a gun offstage, you know, their business model, held to their head, and it's causing them to say the things that they're saying. Again, these are really good people. It's just, we can't talk about the actual underlying issue because the business model is based on this manipulation. So this episode is Mr. Harris Zooms to Washington.
Starting point is 00:00:23 Just a little over a year ago, Tristan headed to Congress to testify. And on April 27th of 2021, he was back, and a lot has changed. We've been through a pandemic, January 6th, a lot has happened. And so Tristan was testifying to the Subcommittee on Privacy, Technology, and the Law on algorithms and amplification, how social media platforms' design choices shape our discourse and our minds. So Tristan, what was different this time? Yeah, great question. So this time, it was actually with a similar cast of characters. Joan Donovan, who is at Harvard's Shorenstein Center, focusing on studying disinformation, was there.
Starting point is 00:01:07 What I've learned over the last decade of studying the Internet is that everything open will be exploited. Moreover, misinformation at scale is a feature of social media, not a bug. Along with the Head of Policy at Facebook, Monika Bickert. This time, though, we had the Heads of Policy at YouTube and also at Twitter alongside. So in a way, the senators' job was to ask the tech companies questions and then to ask Joan and me to pop the balloon of what was just said. Here's an example from Joan Donovan. I think tackling a problem this big will require federal oversight for the long term. We didn't build airports overnight, but tech companies are flying the plane with nowhere to land
Starting point is 00:01:48 at this point. And of course, the cost of doing nothing is nothing short of democracy's end. As much as many people are impatient with, or maybe doubtful about, the competence of the U.S. government to regulate this space effectively, I think we have to celebrate just how far we've come. I mean, think back to one of the earlier congressional hearings, and famously, one of the senators asked Mr. Zuckerberg, how do you make money? How do you sustain a business model in which users don't pay for your service? Senator, we run ads. I see. And that just became enshrined in people's memory, with people saying, well, that's evidence that the United States does not know how to regulate this. And if you compare that to the things that we heard this time, where you have senators asking fairly smart, comprehensive questions about the fundamental business model being the problem.
Starting point is 00:02:41 And not just the business model of advertising, but the entire model of selling attention at all. Because in a subtle way, it's not about whether they filter out just this bad content. But really whether the entire business model of capturing human performance, in the form of a cheap way of getting attention by using our own narcissism, using our own desire for more attention from other people, as the basis for composing an attention economy, is a good way to organize society. Is an input that's based on human performativity a good way to organize society, or will that lead to a kind of disaster and a kind of collapse, because it's misaligned with what makes society work and function? Now, what I thought was very interesting about the way you did this, your testimony, is that you sort of predicted what the platforms were going to say, like the form of their argument. And then I think it was Senator Sasse who said, Mr. Harris, you just made a big argument.
Starting point is 00:03:38 Do you have anything to say that you think Mr. Harris is wrong about? It would be useful. But right now we're not getting much direct engagement with that. He's making a big argument. And I think we're hearing responses that are only around the margins. I'm curious for you to walk us through what that big argument was and how that sort of pre-responded to how the platforms were going to fight back. Yeah, well, you're right. I mean, rhetorically speaking, my strategy was how can we say exactly what they're going to say?
Starting point is 00:04:07 In fact, I wrote in my testimony, you know, my fellow panelists from these tech companies will say, we catch 90% of hate speech and self-harm and harmful content now using AI. We've hired tens of thousands more content moderators. We've taken down billions of fake accounts. That's up, you know, 15% from last year. We also have a set of community standards that says there are certain categories of content that simply aren't allowed on our service. And those are public standards that we've had for years. And we publish a quarterly report on how we are doing at finding that content and removing
Starting point is 00:04:40 it. And as the report shows, we've gotten better and better, made significant strides over the past years. In January of 2019, we launched more than 30 changes to our recommendation systems to limit the spread of harmful misinformation and borderline content, which is content that comes close to, but doesn't cross the line of violating our community guidelines.
Starting point is 00:05:04 As a result, we saw a 70% drop in watch time of such content from non-subscribed recommendations in the US that year. This borderline content is a fraction of 1% of what's watched on YouTube in the US, but we know that is too much, and we are committed to reducing this number. Further in line with our commitment to choice and control,
Starting point is 00:05:26 Twitter is funding Bluesky, an independent team of open source architects, engineers, and designers, to develop an open and decentralized standard for social media. It's our hope that Bluesky will eventually allow Twitter and other companies to contribute to and access open recommendation algorithms that promote healthy conversation and ultimately provide individuals greater choice.
Starting point is 00:05:52 Those are the three heads of the tech platforms' policy teams. But you may notice that their arguments sound almost indistinguishable from each other. I mean, if you frame the argument as, OK, well, we have this bad content on our platform, and now the platforms are telling us, well, we catch 99% of these bad apples. There's only 1% or even less than 1% left. Doesn't that feel good enough? I mean, it does feel pretty convincing in a way. What I want listeners to notice is this is the power of framing.
Starting point is 00:06:18 Is that the problem statement? Is the problem statement that we just have some bad apples? And if all those bad apples were gone, then all the tech that's feeding into the American psyche would be positive, healthy, and strengthening democracy. In YouTube's testimony, they mentioned that they recently added a new metric to the YouTube Community Guidelines Enforcement Report,
Starting point is 00:06:37 known as the Violative View Rate, or VVR. This metric is an estimate of the proportion of video views that violate our community guidelines in a given quarter, including spam. Our data science teams have spent more than two years refining this metric, which we consider to be our North Star in measuring the effectiveness of our efforts to fight and reduce abuse on YouTube. Last quarter, this number was 0.16 to 0.18 percent, meaning that out of every 10,000
Starting point is 00:07:03 views on YouTube, only 16 to 18 come from violative content. This is down by over 70 percent compared to the same quarter of 2017, thanks in large part to our investments in machine learning. So I want you to notice just how persuasive that is, right? It sounds like a total drop in the ocean, this tiny little amount of acid in a huge ocean. It's not going to affect you. And again, this is framing it in terms of how many pieces of content were explicitly violative.
Starting point is 00:07:33 Are they explicitly a bad apple? As opposed to, were they subtly transforming an entire generation into attention-seeking vampires that slowly but surely sensationalize, divide, and outrage us? Because that's the entire business model, design model, not the advertising, the design model of what YouTube is fundamentally about. This quarter, it was between 0.16 and 0.18%. Ms. Veitch, if I might, I just want to know if you're willing to release the data I believe you're already collecting about how many times videos that violate your content standards
Starting point is 00:08:06 have been recommended by your recommendation algorithm. Thank you, Senator. So I can't commit to releasing that today, but it's an interesting idea. We want to be more transparent, so let us work with you on that. Notice that Ms. Veitch from YouTube wants to highlight all the ways that they're already making other metrics on YouTube transparent. The reason why this one is more difficult for them to make transparent is because it would reveal their responsibility in how often they actively recommended things that we know are the harmful things. So it's almost like, why would I publish a list of all the places where I was responsible
Starting point is 00:08:40 for something harmful happening? That's a particular kind of transparency that obviously no one wants to reveal or admit, but it actually unlocks the keys to then making companies more liable for how often those things were recommended. I think one of the conceptual tools we'll need here is to actually define what it means to be recommended. I mean, if you go to Twitter, you're the one who followed a user, but if it shows you, in case you missed it over the last 72 hours,
Starting point is 00:08:57 I mean, if you go to Twitter, you're the one who followed a user, if it shows you in case you missed it over the last 72 hours, here's a tweet that you missed. Well, it did recommend that tweet to you because you weren't actually going to see that tweet in chronological order. So it actually kind of debases the entirety of news feeds and algorithms that are ranking things for us because in some ways those are also just recommendations unless you actively go to a profile
Starting point is 00:09:21 and you're scrolling manually through someone's feed. But I think showing and unlocking that number of how many millions of times you recommended something is actually a key to unlocking responsibility by the industry. Yeah, I can't stress enough how important I think this is. Forcing companies to disclose their own hand and how much they've pushed a piece of content is a necessary prerequisite for the next step. So we go from amplification transparency to amplification liability, that the companies, these platforms, become liable for when it is their curatorial decision, whether as an individual by hand or as an algorithm, which they've also
Starting point is 00:10:04 hand-coded in pushing these deleterious pieces of content. So on the one hand, I want to say this is a really important step. The companies now know that they're going to have to be very careful about their recommendation elements, because in the future, they will be held liable for all of the toxic foam. This is essentially a shift so that the entirety of that toxic foam that currently shows up on the balance sheets of society will now show up as a liability on their balance sheets. It doesn't change, though, that the rest of the drink that we are imbibing on a daily basis
Starting point is 00:10:33 that is our informational environment, our human identity environment, our daily conversational environment, is still this sort of human performativity: addicted, outraged, sleepless, anxious, divided, and disinformed. Because culture is upstream from politics and technology. And if we live in a culture that's already been divided into two different or multiple different narrative views of reality, then how do we know that taking down that piece of content was good or bad? They can't ever get it to be perfect, because they have billions of videos that are being uploaded all the time. Like, per month, they have hundreds of millions of new videos constantly
Starting point is 00:11:06 getting uploaded. And how are they ever going to catch quote unquote all the bad apples? Like, they're just not going to do it. More importantly, when they quote unquote catch these bad apples, what they really do is then polarize people, because some people think their apple wasn't so bad. They're like, I was just posting honest speech. And then one side gets upset that they're being de-platformed, and there's always going to be sort of unintended casualties from, you know, good people who happened to get misclassified by an algorithm because they mentioned the word QAnon. Suddenly, there's a lot of miscategorization in that model, and it generates, again, a hyper, even further polarized conversation.
Starting point is 00:11:39 And I just want to name that. So it's an important step. I also want to give credit to Guillaume Chaslot, who has been recommending this particular tiny little change for more than four years now. We interviewed Guillaume on our podcast about a year and a half ago. You can check it out. But this is where, as you said, Aza, Ben Sasse noticed, and he said, in my opening statement, I made a very big argument. At the end of the day, a business model that preys on human attention means that we are worth more as human beings and as citizens of this country when we are addicted, outraged, polarized, narcissistic, and disinformed, because that means that the business model was successful at steering our attention using automation.
Starting point is 00:12:19 As you and I have said before, I think, on this podcast, we've been domesticated into a different kind of species of human, the kind of human that works well for the extractive model of these tech platforms. In the same way that we don't have wild cows anymore, we have the kind of cows on this planet that are best for their meat and for their milk. We're becoming the kind of society that is best for extracting attention, which means this kind of outraged, fearful, anxious, divided, polarized, tribalized, disinformed audience, because those are the kinds of traits that are brought out by this model. I mean, take an example. So YouTube takes down all this bad content now. And I want to really applaud them, and all these companies, in fact, and the people who've worked so hard over the last few years to do what they're doing.
Starting point is 00:12:59 The people on the inside who are working on the integrity teams and do this awful work every day where you look at kind of the worst aspects of humanity over and over again, you run these queries and you find the worst stuff on these platforms, and you find better and better ways of trying to deal with it. They've made enormous strides. But take an example on YouTube. There's a recent article in The Guardian about these animal rescue videos where you literally have these cute little animals and then someone films them getting captured by a bigger animal
Starting point is 00:13:24 like a snake and starts squeezing them. But those videos are staged. From the article it says, the controversial videos show animals including cats and dogs placed in contrived dangerous situations in the wild, such as near predators, including snakes and crocodiles, only to be saved, quote unquote, just in time by a human. Now, that's not quote unquote harmful content, but these videos get millions and millions of views, and the whole reason this is happening isn't because it was a bad apple, it's because the incentive was to create a social performance to get attention.
Starting point is 00:13:57 The big change here is that I think the senators really brought it back to, let's not get lost in any of the details of your content policies. The real issue is the entire design model of what you do, of how your system works, and by which we mean not the advertising, but the fundamental reframing of what it means to be a human. Your business model is turning humans into this new domesticated species that is incompatible with a civilization that can survive. I'm reminded of how Shoshana Zuboff, in the film The Social Dilemma, makes this incredible point, which is it is not a radical idea that we would ban the sale of certain parts of ourselves. We do not find it radical that we ban
Starting point is 00:14:42 the sale of human organs. We do not find it radical to ban the sale of human orphans. So we shouldn't find it radical to think that we would ban the sale of human behavior. That's really the business model of all of the attention companies, TikTok, Facebook, Google: selling the slight, imperceptible change to human behavior that adds up over time. It's sort of this insidious effect. And looking at some of the press coverage that came from the hearing, I think we've progressed, but a lot was still getting caught in the addiction frame. Like, the harm is that we are addicted to our phones. And you made a really interesting argument in your testimony, Tristan, sort of going back to the Cold War and saying, you know,
Starting point is 00:15:28 the United States in the Cold War invested heavily in the continuity of government. And we are not doing so now. So I wanted you to just sort of explain that part of your argument. Yeah, what I was really trying to say is that knowing that my audience for this hearing was the Senate, what does the Senate care about? Well, obviously the function of government, like the continuity of government, the decision-making of the government has to work. And in the Cold War, when we faced the adversary of the Soviet Union, we invested heavily in these underground bunkers and bases and emergency plans to ensure that the U.S. government could continue to make decisions and maintain our capacity to respond to adversaries. But if you think about that as a metaphor, so, okay, what does it
Starting point is 00:16:09 mean for the U.S. government to have a continuity of decision making? Well, you know, in a way we've been attacked, not by a nuclear missile or by sea, but through the slow, diffuse process by which social media made money from pitting our own citizens and our congressional representatives against each other in this kind of online Hobbesian war of all against all. And that actually disrupts and debases the U.S. government as a functional body, because you've literally, in a slow way, kind of infected the water supply of your democracy, so that now the U.S. government cannot make functional decisions, because even the congressional representatives themselves have been drinking from this toxic kind of fire hose of division. And it's really kind of infected all of us.
Starting point is 00:16:55 And as I said in the hearing, it's almost like having the heads of Exxon, BP, and Shell and asking, what are you doing to responsibly stop climate change? Again, their business model is to create a society that is addicted, outraged, polarized, performative, and disinformed. That's just the fundamentals of how it works. And they can try to skim the major harm off the top and do what they can, and we want to celebrate that. We really do. It's just that fundamentally they're trapped in something that they can't change. And that's the core of the accusation that I think Ben Sasse rightly put attention on.
Starting point is 00:17:29 In my testimony, I also brought up the threat of the rise of China. This is important because fundamentally above all of these concerns is a question of, is the future going to be run by digital open societies or digital closed societies? Right now, digital closed societies like China are consciously using exponential tech to actually strengthen digital closed societies and make them more effective at running society that way. But right now, digital open societies are not using technology to strengthen digital open societies. Instead, they're allowing market forces between these competing tech companies to degrade digital open societies and make them actually worse and worse compared to digital closed societies.
Starting point is 00:18:10 China is a more autocratic type of nation state, and it is applying exponential technology to be a more effective autocracy. And as a result, it's building high-speed rail and bringing 300 million people out of poverty. But we can notice that the open societies are not employing high tech to make better open democracies. They're letting market forces ruin democracy through technology. Both President Trump and now President Biden are talking about the threat of China as a real issue for the United States. This is not fear-mongering, and this is not meant to entrench our current digital leviathans. It's meant to ask, what will it take? What transformative changes are we willing to make?
Starting point is 00:18:46 We can't just be aiming for less bad digital open societies when digital autocracies are consciously maximizing their use of technology to create stronger digital autocracies and digital closed societies. We have to consciously use technology to create stronger open societies. And figuring out what that looks like, what Open Society 2.0 looks like in the post-digital age, is the question of our time. One of the inspiring things that came out of this hearing is that it wasn't just talking about the problem and setting up the problem. There seemed to be a new bipartisan sense that we have to do something. Is that how it felt to you there? And what are the kinds of directions we can actually go?
Starting point is 00:19:29 Yeah, another inspiring thing was the self-awareness of senators saying, look, we're done doing the hand-wringing thing and yelling at these companies and wanting more to happen. We want to know what we can actually do here. That inspiration isn't just happening in the U.S. I was just testifying before the EU Parliament two or three weeks ago, and I actually made the mistake of talking mostly about the problems
Starting point is 00:19:53 and realized after I had spoken that everyone already agreed about the nature of the problems, and it was time to move to solutions. Now, I think one unfortunate thing is there was a lot of focus on just Section 230 of the Communications Decency Act, and for listeners, that's the protection that gives the platforms immunity from being responsible for what bad content appears on their platform. But, again, that is kind of the wrong place to draw attention.
Starting point is 00:20:18 Here's Senator Ben Sasse from Nebraska. I would like to just briefly address colleagues on both sides of the aisle, because both Republican and Democratic colleagues today have said a number of things that presumed more precision about the problem than we've actually identified here and then sort of picked up the most ready tool, usually the 230 discussion. And I think I'm a lot more skeptical than maybe most on this committee to push to a regulatory solution at this stage. And I think in particular some of the conversations about Section 230 have been well off point to the actual topic at hand today. And I think much of the zeal to regulate is driven by short-term partisan agendas. And I think it would be more useful for us to stick closer to the topic that the chairman identified for this hearing. Senator Kennedy focused his entire
Starting point is 00:21:14 questioning on Section 230. A couple others did as well. And they were actually pinning us into these kinds of black-and-white frames. Mr. Harris, I'd like a straight answer from you. Would you, I have a bill, others have a similar bill, a bill that would say that Section 230 immunity will no longer apply to a social media platform that optimizes for engagement. Would you, if you were a senator, would you vote for it? I'd have to see the way that the bill is written.
Starting point is 00:21:53 Don't do that. Don't do that to me, Mr. Harris. Give me a straight answer. We all want to read the bills. Would you vote for it or not? So first of all, we want a world where we actually want to solve the problem. And that's one thing to celebrate, that we're at a point where people are like, I'm tired of going in circles, I'm tired of talking about Section 230.
Starting point is 00:22:13 What's it going to take to solve this problem? Now, we have to get down to what we mean by optimizing for engagement. Because if you say we're measuring engagement, like we measure the number of clicks, the number of sessions, the number of active users, time spent, those are all engagement metrics. Those are the wrong metrics. And he's right. I mean, that's a directionally correct proposal, which is what I said in my response.
Starting point is 00:22:34 It might be a good idea to remove Section 230 protections for any company that optimizes for engagement. But the reason this is getting more subtle is that it's not just the optimizing in terms of what they measure. It's optimizing in terms of what they have designed already. Because the entire model was designed this way. Like, the fact that you open up Twitter or Facebook or YouTube, and the big button at the top is usually
Starting point is 00:22:56 share your story, share something, post something. Like, even that by itself, that's optimizing for engagement. You could load up YouTube and it could just be, here are no recommended videos, we're just asking you, what do you want to learn today? That's a design decision. That's not a metrics decision.
Starting point is 00:23:12 So one of the problems I have with what he's saying of optimizing for engagement is, is it just about the measurement or is it about the design? These are hard questions to answer, especially when you're on the stand and you haven't been able to peel back the sticker label to see really what's underneath. Because one of the problems, or the predicament that we're in, is that we're trying to regulate a brain implant into society with not even tweezers. It's sort of like with salad tongs.
Starting point is 00:23:57 enough. But I didn't hear sort of a critique of being like, oh, here's where that would go wrong. It also makes me think about, you know, there are protected classes. You're not allowed to discriminate based on color of skin or gender. But you can often infer, sometimes accidentally the color of someone's skin, say, by looking at income and zip code. And so I wonder if you define, you know, the attention metrics a little more broadly. So it's not just the specific metric you're not allowed to optimize for, but for being an attention company, you don't get these kinds of protections. Is that enough?
Starting point is 00:24:34 That's exactly right. I mean, you could prevent people from targeting advertisements based on race. but if I can get the same result by targeting income and zip code, I haven't stopped people from targeting on race. Similarly, I can stop people from measuring engagement metrics, but if the entire design of the product is still around getting people to engage in these harmful ways, I haven't changed the core problem of the whole service is still optimizing for engagement. Just like you said, Aza, so okay, how do we define engagement?
Starting point is 00:25:01 So if you're measuring time, spend, seven-day act is, whatever. And so then they'll say, okay, great, we'll measure something else, like quote unquote, meaningful social interactions. That's what Facebook did, starting in 2018. But essentially, meaningful social interactions was just a different way of measuring engagement. The new boss is the same as the old boss. The new metric is sort of a different version of the old metric.
Starting point is 00:25:23 And that's why I think metrics are one part of it. It's a huge part of it. We don't want to measure the wrong things and optimize for the wrong things. But the core of it was that we designed for the wrong thing from the beginning. Without any metrics at all, without any numbers at all, without any algorithms at all,
Starting point is 00:25:36 Twitter is still based on human performativity, meaning your worth to Twitter is the worth that you have in how much you perform on a stage for other people to get their attention. It's deeper than just the metrics. It's also the design. So I agree with all of that, Tristan, but I didn't actually hear him say the word metrics. He said if you're an attention company, if you're an attention vampire company, an engagement-based company, then you no longer have immunity. You have liability that scales to the harms of the problem. And so I think your critique, or, your critique is not quite right. The distinction you're
Starting point is 00:26:15 making is perfect, but that's not it, or am I missing something? This goes to a conversation you and I had with an advisor of ours who asked us the question, how expensive would the penalties have to have been to prevent this harm from happening in the first place? If you think about pharmaceutical companies who are then fined $3 billion for a drug that they made $30 billion from. Well, $3 billion at that scale is just not big enough to say we would have never made a drug that would have harmed that many people. So the question is, what is the cost that's big enough? So, like, when I really think about this, our media environment right now is just a toxic
Starting point is 00:26:52 brain implant. It's just the wrong one. And I know that's more existential, that's calling for a more transformational change. But I think that one would be exciting to build because we'd be starting from the right first principles. And that's kind of what I, you know, want people to be thinking about: what are the right first principles? You know, it's not like Plato or these philosophers and Aristotle had a model for what an effective attention economy for three billion people was. It's not like there's an existing framework for that. When speech is cheap and anyone can in a personalized way
Starting point is 00:27:23 reach three million other people instantly around the world. But what is a model for a healthy attention economy that doesn't create civilizational collapse? How much is the current administration proposing for their massive 21st century New Deal infrastructure? A couple trillion dollars, no? Something like that, yeah. Imagine if there was an equivalent bill that said we need to reboot, reconstruct our public digital infrastructure. That is, we're moving from a physical democracy to a digital democracy; we invest in our physical infrastructure, but we have not yet invested in our digital social infrastructure. And then if there was that level of capital coming in,
Starting point is 00:28:08 how would you design it? Who would you bring into the room? What preconditions would have to be set so you could build it the right way and not the wrong way? The reason why I bring it up is just that it's that scale of movement or change that I think we're going to need. Well, and I want to be clear for listeners, I don't think the answer is necessarily
Starting point is 00:28:29 to just scrap everything that we have. You know, scrap all iPhones, scrap all, you know, HTTP, TCP/IP. Let's keep whatever we can keep. We want to leverage what we have, but we also need to ask the hard questions of when do we need to transition to something that's fundamentally better
Starting point is 00:28:46 and which layers need to transition. What I wish I had spoken more about during the Senate hearing is, if you care about global stability, like, we're going to have major destabilization around the world when you have this kind of misinformation wreaking havoc on the less fortunate countries. I mean, in the United States, think about how bad these problems are. And we have probably the most attention on these issues of any country. These companies probably invest more into English-speaking content moderation, fact-checking, and election war rooms than in any other country.
Starting point is 00:29:17 And look how bad it is. Now think about the rest of the world. We are still talking about mostly conversations that we've had, you know, four years ago about the spread of misinformation, things like that. The rate and acceleration of new kinds of threats, new kinds of issues, the growth rate of that is growing far, far faster than the growth rate of our capacity to mitigate or respond to those threats. I was speaking with someone in the fact-checking network who said, you know, there's now 200 billion messages a day going through WhatsApp, 15 billion going through Facebook. They get about 100 fact checks per day. If you think about a bank being overleveraged, how much risk are they carrying, how far over their skis are they? We've got a ratio of about 200 billion messages to 100 fact checks per day in terms of the scale of information that's running through a system without moderation.
Starting point is 00:30:00 And the organization Avaaz released a report shortly before this hearing, uncovering Facebook's America First approach to COVID misinformation. They wrote, one year into the pandemic, the platform is still slow in taking action, taking on average 28 days to act on misinformative content. And there's a gap of six days between content in English and other languages. So if you imagine some viral story saying drink bleach in the U.S., well, it takes six days longer to actually deal with that content in other languages. The way that things work, you know, you can only speak for one or two minutes and only if the senator calls on you. So it was very limited what we could share, but there's so many other issues here.
Starting point is 00:30:39 You can make incredible progress in the world of civil rights and tolerance in the physical world and culturally, while in the digital space you can go back 50 years by creating huge rises in hate speech and online harassment. In national security, you can invest billions and billions of dollars into the Department of Homeland Security and passport controls and physical borders and missile systems that will shoot down incoming aircraft. But when you go into the digital world, you lose all those protections. You can spend billions of dollars developing vaccines to get the world out of this COVID pandemic, but then have a digital information environment that rewards misinformation about vaccines and generates vaccine hesitancy around the world.
Starting point is 00:31:13 and generates vaccine hesitancy around the world. difference in this round of hearings is the last time when I went to Washington, we were able to do several pre-briefings for the staff of many different members of Congress who were on the committee back in January 2020. But this time living in a Zoom-based world, it was harder to do those kinds of briefings. So we had less time for preparing members with the kinds of questions that might get more to the structural issues. That said, I think we really should celebrate just how much the focus of, especially chairs Ben Sass and Chris Coons, focused on the deeper structural issues that we're facing here.
Starting point is 00:31:50 And there's much to celebrate. Your Undivided Attention is produced by the Center for Humane Technology. Our executive producer is Dan Kedmey and our associate producer is Natalie Jones. Noor Al-Samarrai helped with the fact-checking. Original music and sound design by Ryan and Hays Holladay. And a special thanks to the whole Center for Humane Technology team for making this podcast possible. A very special thanks goes to our generous lead supporters of the Center for Humane Technology, including the Omidyar Network, Craig Newmark Philanthropies, Fall Foundation, and the Patrick J. McGovern Foundation, among many others.
