Tech Won't Save Us - Section 230 Protects Free Expression Online w/ Evan Greer

Episode Date: October 29, 2020

Paris Marx is joined by Evan Greer to discuss Republican and Democratic desires to amend or revoke Section 230, why the proposals won't solve problems with Big Tech, and the international implications of US decisions about moderation.

Evan Greer is an activist, musician, and writer. She is the deputy director at Fight for the Future, which is currently running campaigns to protect Section 230 (Save Online Free Speech) and to ban facial recognition technology. Follow Evan on Twitter as @evan_greer.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter. Find out more about Harbinger Media Network and follow it on Twitter as @harbingertweets.

Also mentioned in this episode:
- Evan wrote about the problems with algorithmic amplification for Wired
- There is no anti-conservative bias on social media, and Facebook's algorithms connected extremists with hate groups
- Facebook removes the accounts of anti-government activists internationally, and an internal memo by a former employee says it doesn't care about its impacts if Western media won't find out about it
- SESTA/FOSTA made life harder for sex workers, but has also empowered a movement for decriminalization
- Zoom deleted meetings discussing its own censorship

Support the show

Transcript
Starting point is 00:00:00 The battle over the direction that it goes is a battle to decide whether technology is largely used to empower and liberate or to oppress. Hello and welcome to Tech Won't Save Us. I'm your host, Paris Marx, and this week I'm joined by Evan Greer. Evan is an activist, musician, and deputy director of Fight for the Future. She's also a writer who's written for a number of different publications, including The Washington Post, The Guardian, and Wired. In this week's conversation, we talk about Section 230 of the Communications Decency Act in the United States. And while usually I probably wouldn't do an episode on a specific law, this one is foundational to how we use the internet, and it's become a political football in the past year or so as Republicans and Democrats have threatened the future of Section 230 and presented different changes that would place limitations on the law. This is a really
Starting point is 00:01:17 interesting conversation where we not only talk about how both of those political parties are approaching Section 230, but also what it means internationally, even for people who are not in the United States, because when we use platforms that are located in the United States, we're all affected when changes are made to how they operate. Tech Won't Save Us is also part of the Harbinger Media Network. You can find more information about that in the show notes. Obviously, if you like this episode, please leave a five-star review on Apple Podcasts and make sure to share it with any friends or colleagues that you think would enjoy it. That kind of social proof when it's shared on social media really helps us to get new
Starting point is 00:01:55 listeners. And if you want to support the work that I put into making this show every week, you can go to patreon.com slash techwontsaveus and become a supporter. Enjoy the conversation. Evan, welcome to Tech Won't Save Us. Hey, thanks for having me on. It's great to speak with you. So there's this big debate that's kind of happening right now about section 230 of the Communications Decency Act, which is kind of this really foundational law or document or whatever about how the internet works and what can kind of happen on the internet and the liability that
Starting point is 00:02:25 different platforms can be held to. And that's often associated with this specific section of Section 230 that says, no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. Pretty legalistic, but I want to start by just getting you to kind of lay out the basics of Section 230. So what is this law and what makes it so important to the way that we use the web today? I'm glad we're talking about it because I feel like Section 230 is in competition with HIPAA to be the most misunderstood law of 2020. And I think it's really important that people understand it because as it's kind of been dragged further and further into this partisan circus that is current US politics, I think
Starting point is 00:03:10 people's understanding of it has decreased. It's sort of like a direct correlation. The more Joe Biden and Trump talk about something, the less people understand it, right? But so to turn those 26 very legalistic words that you said into a concept that maybe most people can understand. Section 230 is basically the law that allows the internet to have user generated content. So I feel like the thing that most of us like about the internet is that it's really different than cable TV, right? On cable TV, you only see the programming that has been bought and paid for by the channels that you watch. On the internet, you know, we have
Starting point is 00:03:45 this just enormous burst of human opinion and creativity, you know, some of the worst stuff in the world, some of the best stuff in the world, but it's a place, you know, where we can post our own videos, our own ideas, upload a podcast, post a meme, share photos, our own opinions. What makes that possible is the fact that websites that host those things that we upload, our content, are not legally liable for the things that we post. And that is what section 230 says. It says, when I post an unpopular opinion on Twitter, no one can go and sue Twitter because of my unpopular opinion. And I think it's important that people recognize that this is not just a protection for companies or platforms. It's also a free speech protection for individuals.
Starting point is 00:04:32 Section 230 is also the law that makes it so that you can't get sued for retweeting something, for example, because you are not the speaker, you are simply amplifying another person's speech. And I think, you know, the reason that that becomes really important is because this is being framed and I see it all the time in the press as, oh, section 230, big tech's prized liability shield. I feel like that's the phrase that I see over and over and over again. And it's sort of being portrayed as if this is a, you know, some kind of magical gift to the Facebooks and Googles of the world.
Starting point is 00:05:06 And frankly, I spend at least as much of my time trying to bring down those big tech companies and hold them accountable as I do defending these kind of foundational laws like Section 230 or like defending net neutrality. And I think what's happening is people have this real legitimate anger toward big tech companies whose business models are fundamentally incompatible with basic human rights and democracy. And politicians are basically exploiting that legitimate anger and trying to redirect people into this kind of partisan pissing match over Section 230, when the reality is blowing up Section 230 won't do anything to address the very real harms that are being caused by big tech companies and their
Starting point is 00:05:50 business models. And in fact, it could actually make the situation worse, because the biggest tech companies are the ones that have the biggest armies of lawyers and lobbyists who can figure out how to navigate a new regulatory framework in a world where they might become more liable for the content that gets posted. And really what will happen if we get rid of Section 230 is it'll open the floodgates for massive internet censorship. Companies like Facebook and Google will be happy to just remove any content that could be seen as controversial in order to protect themselves and their profits. And really who will suffer is individuals like yourself and myself and our ability to speak out
Starting point is 00:06:30 and kind of the smaller and medium sized companies, the Reddits and Etsys and Vimeos of the world who will never have an opportunity to sort of challenge the dominance of big tech companies in a world where they can get sued out of existence. And the Facebooks and Googles of the world are kind of have their army of lawyers to protect them. So that was a long winded attempt to summarize, you know, what is in fact, you know, a really complicated law. And I'm realizing now that I got to the end of it, I also left out an entire piece of it, which is that Section 230 is also what protects companies from getting sued for engaging in good faith moderation practices, right? So Section 230 allows them to host our speech. It also allows them to set rules like you can't use our platform to dox people or to post really harmful content that we all agree
Starting point is 00:07:16 is not what we want to see on this platform. So I think that's particularly interesting because on the left, I've seen this increasing uptick in folks saying we should get rid of Section 230 because that's the law that lets companies host hate speech, for example. And while that is sort of true on its face, Section 230 is also the law that allows companies to moderate hate speech as they see fit and gives us an opportunity to pressure them to do that better, right? Without Section 230, they either wouldn't do moderation at all, or they would extremely heavily moderate essentially everything that could potentially be controversial, including, for example, videos of police violence that have gone viral on social media. No social media platform would be willing to host something like that if they knew that they could get sued by the FOP for allegedly inciting violence, right? So it's essentially what allows our feed, our news feed to contain a diversity of viewpoints
Starting point is 00:08:10 and content. Without it, our feed would basically be cat videos and like cooking recipes and other things that companies are absolutely sure won't get them sued. And the internet would become more like daytime cable TV, which is exactly what we don't want it to be. I think you make so many good points in your explanation of what Section 230 is. You know, and I would just say that those recipe videos are getting super annoying. So we don't want that to just become the internet.
Starting point is 00:08:36 It's totally true. So you bring up so many important points, and I think that we'll touch on a number of them, you know, throughout our conversation. But obviously I want to start with the proposals that are being made to kind of change or revoke Section 230, right? Because that's really the core of the reason why this is kind of in the news right now and getting so much discussion and as you say, even kind of misleading portrayals, right? So the Republicans and the Democrats and, you know, Donald Trump and Joe Biden both have different perspectives
Starting point is 00:09:06 on Section 230, what it's doing and what should be done to it, right? And so I was hoping that we could kind of dig into what each of those sides, I guess, wants to do with Section 230. So when we're looking at what Donald Trump and Bill Barr and the Republicans, various lawmakers are talking about when it comes to Section 230, what are the things that they're focused on? And why is it that they want to change this law? On the Republican side, and with conservatives, Section 230 has sort of become a convenient boogeyman, or just sort of a way for them to inflame what is really a totally disingenuous talking point where they have claimed that big tech companies exhibit systemic anti-conservative bias, right? So they are kind of playing the victim,
Starting point is 00:09:50 claiming that their speech is consistently being deprioritized or censored or shadow banned. And really, I think it's super important that we not take these, again, like totally disingenuous concerns too seriously. You know, we certainly should be concerned about big tech companies having amassed the amount of power that they have, and the ways that they can and do use that power to shape the public narrative. But the reality is that the claims that we've seen from Republicans are largely baseless, and there's no evidence to support them. So while we totally should be concerned about the fact that as these companies have become more centralized, they are tasked with making moderation decisions at a scale that it's basically impossible to do it at. And that has led to speech, especially speech at the margins, getting silenced and censored. But it's not systematically targeted against conservatives. It's all kinds of speech, all kinds of content that gets caught up in either automated removal algorithms or by individual human moderators.
Starting point is 00:10:53 And in fact, there's a lot of evidence to suggest that it's actually the speech of marginalized people, particularly Black women and trans folks, that is more disproportionately impacted or removed from platforms in ways that go beyond the platform's stated or transparent moderation policies. So all of that is to just kind of set it up to say, I don't think most Republicans actually even believe this. I think it has just become a convenient talking point for them. And in fact, it actually kind of flies in the face of, you know, quote unquote, free market republicanism, if you want to accept their arguments at face value, right. And I think one of the places that that hypocrisy has been most present is at the FCC, the Federal Communications
Starting point is 00:11:35 Commission, where, you know, Ajit Pai, the chairman who notoriously repealed net neutrality, who argued vehemently that, you know that the federal government should not be getting involved in private businesses like AT&T and Verizon. We should just keep our hands off of them. When he was talking about just the most basic protections that said that ISPs shouldn't be able to block websites or throttle people's connection. So he simultaneously is arguing that the federal government shouldn't have anything to do with preventing abuse by ISPs, but they should be micromanaging social media platforms, content moderation policies, essentially deputizing themselves as the online speech police. That just really, I think, illuminates the hypocrisy that's going on within the Republican Party around this issue. But to tackle some of the specific proposals, as you mentioned, there's been, you know, kind of a flurry of bills on this on the Republican side that have pretty much all
Starting point is 00:12:28 centered around this fundamental lie that big tech companies are specifically and actively targeting conservatives for censorship. And they've all sort of revolved around using Section 230 as a sledgehammer or a cudgel to kind of bully tech companies into moderating in ways that are more friendly to Republicans. And so, you know, to me, they've couched a lot of this in frameworks of free speech and diversity of opinion and internet freedom. And, you know, all of those things sound good to me. Those are the things I've fought for my entire adult life. And in fact, and I'll be clear, like, I actually do support broad and limited moderation policies, because I think that big tech platforms tend to get things wrong so
Starting point is 00:13:12 often. So I would actually, I've actually argued against some of my friends on the left who have called for more strict moderation practices, and actually do really fundamentally believe that the internet should be a place where there's a diversity of viewpoints and we can have arguments. But what most of this legislation is basically saying, you have to moderate in ways that we like, or we'll revoke your Section 230 protections. And the funny part about that is that if they actually did that, if any of these proposals were to succeed, it's likely that Donald Trump and Trumpist politicians would be among the first to get booted from platforms if they were to become liable for the speech that they host, because they're the ones that are often out there making the most outrageous claims, threatening people, and engaging
Starting point is 00:13:56 in speech that could be portrayed as incitement, for example. So again, I don't take these bills very seriously. I see them largely as about, you know, creating this victim mentality on the right, convincing people that they're being silenced and oppressed to rally them to kind of support the status quo, rather than any serious attempt to rein in big tech companies, or to limit the amount of monopoly power they have to set things like speech rules, which I think are conversations that we should have. And we could be having, you know, meaningful debates about what the right balance is, or what moderation policy should look like. But instead, we're sort of stuck in this kind of nonsensical debate about Section 230, which politicians on both sides of the aisle just fundamentally misunderstand or, or frankly, they don't even really care what it actually does. It's just become a convenient talking point for them on the campaign trail. I think you make so many important points there, right? When I think about what the Republicans are trying to argue when it comes to the discrimination against conservative voices
Starting point is 00:14:58 or whatever that you see on major social media platforms, it just kind of harkens back to the same things that they've always said about mainstream media and how it doesn't accurately represent conservative voices and how they have Fox News, their major outlet, and then what Fox News does kind of influences the coverage of all of these other major networks to kind of cover more conservative views and have fewer certainly left-wing viewpoints on there, right? So it just seems completely disingenuous. And then when you look further than that, you can see structural things at Facebook where there are a number of executives who are associated with the Republican Party, and it looks like they have made decisions that would actually try to ensure that conservative
Starting point is 00:15:39 news and whatnot does not get hidden as much as, say, left-wing news. And then the other thing I think is important there, as you mentioned, it's often people who are from marginalized communities who are affected far more than conservatives with this. And you mentioned Black women and trans people. But I would also go further and say, if we look internationally, that's also people who are kind of fighting against their government, say, Palestinian activists, activists in Egypt and Syria, who are also having their accounts deleted and things like that, right? Absolutely. And I'm really glad you raised that because I think we in the U.S. tend to just be focused on our own context. We've seen this big trend on the left of kind of folks pushing for more aggressive moderation as the way to deal with, you know, the very real harms that we are seeing from big tech companies, which have become engines of recruitment for white supremacists,
Starting point is 00:16:36 violent fascists and others. But I think we often, you know, we're just looking at the US context and ignoring the fact that these moderation policies impact people all over the world. And if you look at the data, as you just said, the folks that actually get most of their speech removed, or are most disproportionately impacted by moderation are politically active Muslims who live outside the US whose speech gets caught up in the three big platforms, Twitter, Facebook, and YouTube's automated anti-terrorism removal algorithms, right? So they identify a piece of speech that has been posted by a group that the US government deems to be a terrorist group. And then they sort of automatically using software,
Starting point is 00:17:15 remove similar posts from across the internet, many of which might be, you know, totally legitimate posts that have nothing to do with terrorism. And then of course, you know, that's if you accept the US government's definition of who is and isn't a terrorist, you know, just to begin with, right. But, you know, broadly speaking, the last couple years in the US, and again, I get it, I understand where people are coming from, and that there is a real problem that needs to be addressed. But I think we've increasingly focused almost exclusively on what types of speech companies like Facebook and Twitter allow, I hear the word allow a lot. And we very rarely talk about the types of speech that they suppress. You know, this is sort of, like, what's extra hilarious
Starting point is 00:17:56 about the fact that like Republicans and the far right have co-opted this frame of free speech, when in fact, we in the left should be on the front lines calling out these big tech companies for their role in suppressing the speech of marginalized people, and really, in the end, calling to end their monopoly power. That's the kind of fundamental problem here, right? If there were 1000 different platforms like Facebook that had scale and network effect, it wouldn't even be worth talking about what one of their moderation practices were, because if you didn't like it, you could go to another one. But when I hear people say now, like, oh, well, these are private companies and they can do whatever they want. This isn't a First Amendment issue. That's true. It's not a First Amendment issue, but it's silly to pretend that it's not a free expression issue
Starting point is 00:18:37 because you can't just go start a new Facebook when Facebook essentially has a monopoly on human attention in the digital age. And so, you know, it's like saying, oh, just go start your own electrical grid. You know, it's just a ridiculous statement, given the monopoly power that these companies have amassed. And until we take meaningful grassroots and policy action to challenge that monopoly power, we kind of have to engage in these moderation fights. And from my perspective, we should be engaging in them with an eye toward human rights and free expression globally, rather than sort of a navel gazing view of like, I want to get down as much of the speech that I don't like as
Starting point is 00:19:08 possible. I'll come back to more of that kind of international perspective in a bit. But first, I do want to shift our focus to the Democrats and to Joe Biden. And so obviously, they have a slightly different perspective on this, but there are still some critical views of Section 230 within the Democratic Party as well. So what is the Democrats' approach and what is their issue with Section 230? So again, I think it underscores just how silly this conversation has become, because the Democrats are basically, not all Democrats, but Joe Biden and several other Democrats have actively called to repeal Section 230 for the exact opposite reason that the Republicans have called to repeal it. They basically feel that big tech platforms aren't moderating enough.
Starting point is 00:19:51 They want them to do more fact checking of advertisements or posts. But really, again, I think they're actually approaching it fairly similarly to the Republicans in the sense that I'm not sure how serious they actually are about any of this, or if they really, when it came down to it, would want to modify or revoke Section 230. It seems like it has become a proxy for anger at big tech companies. And so again, they are kind of recognizing that the general public is increasingly angry and outraged by these companies and their business practices, as we should be. But instead of presenting a vision for what to do about that, for example, passing strong
Starting point is 00:20:30 federal data privacy legislation, which has been tied up in Congress for years, they've sort of singled out Section 230 as almost a proxy for big tech power, right? And so I could say a million different things about, you know, why I think that they're getting that wrong. But on a fundamental level, you know, where we have folks in the center left who are saying that blowing up Section 230 is the way to address hate speech, for example, I mean, that's just wrong. You know, even if you totally agree that, like, you want Facebook and companies like that to aggressively moderate and remove anything that you or I might deem to be hate speech, blowing up Section 230 is exactly the way to make sure that doesn't happen. Because again, Section 230 is the law that allows Facebook to
Starting point is 00:21:15 host our speech. It's also the law that allows them to moderate in good faith. And so without that, they either would moderate not at all, because they would just say, okay, our hands are totally off. Everyone can post whatever they want. And it's their problem if they get sued for it. Or they would moderate essentially everything that might possibly get them sued, which would include a tremendous amount of totally legitimate speech that you and I wouldn't want to be removed from the internet.
Starting point is 00:21:39 So what I've seen on the Democratic side, too, is just sort of, again, it feels like they have started to see Section 230 as like the only lever of power that we have to like get tech companies to do stuff that we want them to do, right? So we've seen different proposals tinkering around the edges, the most infamous of which is SESTA-FOSTA, which I'm sure many of your listeners might remember. This is the last law that Congress passed that actually did poke a big hole in Section 230. It basically revoked tech companies' Section 230 protections for anything that the US government deemed to be sex trafficking. But really, the impact of it was that it didn't have any impact on actual traffickers or those who are engaged in that
Starting point is 00:22:21 type of practice. It was devastating for consensual sex workers, particularly marginalized sex workers who suddenly saw entire sections of the internet where they had been able to have much more control over their own work and their own safety and kind of set their own rules just evaporated overnight. Tumblr, you know, instantly removed all adult content, which had a tremendous impact, particularly on the LGBTQ community and LGBTQ creators. Craigslist shut down their entire personal section for fear of lawsuits under SESTA-FOSTA. We've actually now seen lawmakers, you know, Senator Elizabeth Warren and others now introduce legislation to study the effects of SESTA-FOSTA based on the growing mountain of evidence that suggests that it actually harmed the very people that it claimed to protect. We now have another fairly similar bill in
Starting point is 00:23:11 Congress called the EARN IT Act, which is bipartisan. It's co-sponsored by Senator Lindsey Graham and Senator Blumenthal in the Senate, and now has been introduced in the House as well. It's essentially SESTA-FOSTA 2.0, but with an added protect the children vibe, which is always sort of authoritarians' favorite go-to rationale for removing human rights or engaging in censorship or surveillance. The Earn It Act is sort of a double whammy where it attacks Section 230 much in the same way as SESTA-FOSTA. It also would criminalize end-to-end encryption, you know, things like Signal, which is really one of the most important technologies protecting human rights and free expression globally right
Starting point is 00:23:51 now. Criminalizing the distribution of end-to-end encryption and end-to-end encrypted messaging would put billions of people's communications at risk, and it's not going to actually do anything to prevent the types of harms that the bill sponsors are talking about. Because you can't actually outlaw encryption. Encryption is just math. And those who want to do horrible things on the internet will always be able to access it, whether it's available in the US or not. Really, what this will do is it'll deter companies like Facebook from doing one of the only good things that Facebook has ever done, which was they're planning to switch on default
Starting point is 00:24:24 end-to-end encryption in Messenger, which is actually great. That instantly protects huge numbers of people's communication. It makes it so that like your friend who sells weed is much less likely to get busted, or your friends are much less likely to have their abusive ex stalking them or various things like that. That's actually awesome. And this bill would basically criminalize that, deter companies from offering end-to-end encryption while doing nothing to actually protect children or address the very real types of harms that we see that need to be addressed more systemically by having a more just and liberated society. Just to go back on SESTA-FOSTA, I was going to ask you about that anyway, but you brought it up,
Starting point is 00:25:02 so that's perfect. But there are other kinds of carve outs in Section 230 when it comes to hosting copyrighted content, child pornography, things like that, where platforms can still be held liable if they allow those things to be posted on their platforms, I guess. So are there arguments for any kind of carve outs for certain types of content to be removed? You know, I think it's pretty clear that what they've done ostensibly with sex trafficking, but which actually affected sex workers has not been the right way to go. But are there arguments where in some cases, maybe that's the right thing to do? I'm glad that you raised that last point, because it is a common misunderstanding that Section 230 somehow gives tech companies protection from liability for doing
Starting point is 00:25:46 things that are actually illegal. And that is not true. Section 230 does not protect them if they are actually hosting something like child sexual abuse material. That is illegal. It is a crime in and of itself. They can't host it and then say, oh, we're protected by Section 230. That's not how it works. So it's not some carte blanche, get out of jail free card to engage in illegal activity or harmful activity. So it's important that folks understand that. I think, again, we totally could be having an interesting and intelligent conversation about Section 230. And maybe there are ways that it could be looked at. One proposal I saw that I think in the end doesn't really work, but was interesting to me was House Member Anna Eshoo has a bill that basically tries to disincentivize companies from engaging in algorithmic amplification, non-transparent algorithmic amplification, which has kind of been something I've written quite a bit about, I wrote a piece in Wired about how that's really at the root of the harm that we see from Facebook. It's not that Facebook allows speech, allowing speech is good. It's that Facebook artificially amplifies the worst speech on the internet and kind of micro
Starting point is 00:26:56 targets it to the people who are most susceptible to it. So for example, the Wall Street Journal found that more than 60% of people who join hate groups on Facebook are joining them through Facebook's recommendations algorithm. So it's one thing to say, hey, we're a message board. People can say what they want. Sometimes people are going to say terrible crap. It's another thing to be like, hey, you seem like a neo-Nazi. Do you want to meet these other neo-Nazis in this neo-Nazi group? I'm going to shove it in your face until you do, because that makes money for us as a platform.
Starting point is 00:27:24 And Facebook is doing the latter while claiming to be the former. All of that said, I just don't think that Section 230 is the right lever to hold big tech companies accountable. And again, the reason for that is I think any tinkering around the edges of it, even if we were to try to kind of use it as a lever for a like actually rational or good purpose is just going to benefit incumbents and the largest companies while hurting all of their competition, and particularly hurting anything that kind of involves more of a community lens, right? So for example, look, you know, Reddit is a fairly big company, and I'm not gonna like sit here and defend all of their practices, but they rely heavily on these teams of volunteer moderators, right? In a world without
Starting point is 00:28:11 Section 230 or in a world where Section 230 has been carved up or has big loopholes in it, those individual volunteer moderators could be getting sued left and right for their individual moderation decisions. So if you don't like Facebook, you don't like Google, you believe that there should be more community-based alternatives, alternatives that are more open source or that are kind of built in more decentralized ways, none of that is going to be possible in a world where every individual person who participates in some way is opening themselves up to a lawsuit. And so I think for me, the question is not, is there like a good way to change Section 230? I just kind of think the
Starting point is 00:28:49 whole Section 230 conversation is largely a distraction from the very real conversation we should be having about the growing centralized power of these big tech companies, their harmful surveillance capitalist business models, which are basically built on harvesting our data and using it to manipulate us. And we need to be attacking the problem at its root, rather than kind who are kind of trying to do roughly what the Republicans are doing, but from the left, where they're also working the refs, they're trying to get Facebook to moderate more. And I guess where I'm sitting, I just see this as working the refs in a game that we always lose. The far right is always going to be willing to be more disingenuous, more outrageous, more ridiculous in their claims. And so we just sort of keep moving the goalposts and we just keep losing ground. And while maybe we'll have some
Starting point is 00:29:50 gains here and there, they can really easily come back to bite us, which we've seen again and again, where policies that seem logical on their face get weaponized and used to silence the speech of marginalized people. We just saw that with Zoom shutting down a series of academic events, one of which was originally going to host Leila Khaled. But later on, there were just events that were literally about Zoom censoring things, and Zoom shut them down. I think when we encourage companies to moderate more aggressively, we're tacitly encouraging that type of behavior. And I think we need to take a couple steps back and recognize that this is an economic problem. It's a political problem. It's a kind of systemic problem. And I think we've fallen a little bit into kind of the same trap that our parents did in the 90s of like, oh, let's blame video games for everything. Let's blame, you know, punk and
Starting point is 00:30:41 rap music for all the stuff that we don't like. The reality is there are systemic injustices in our society, and we can't fix them just by beating up on tech companies. We need to address the harms at their root. And in the end, rather than kind of tinkering around the edges or kind of slapping these companies on the hand by taking away the law that we claim they like, we need to be dismantling them and creating alternatives. The point that you made about alternatives and how removing Section 230 would potentially make it more difficult for these kind of community-oriented platforms or moderation techniques or whatnot is actually really important, you know, for my audience in
Starting point is 00:31:19 particular, you know, if we're thinking about what an alternative internet might look like and how we want to incentivize the creation of that, you know, it seems like it's probably not revoking Section 230, but challenging the power of these massive companies and creating a framework that promotes those alternatives. And we can't let the, you know, kind of far right and or the more institutionalists in the Democratic Party frame this debate, because the reality is big tech companies are bad. They are doing bad things. And we urgently need to do something about it. And it reminds me again of like the EARN IT Act or SESTA-FOSTA, where it's like we're taking what is actually a real problem. And it's almost insulting to be going in the direction that politicians on both sides of the aisle have been going on this because they're taking something that is a real problem. And instead of presenting real and meaningful policies to do something about it, they're
Starting point is 00:32:18 just kind of whipping up their base into a frenzy over a law that actually in many ways has nothing to do with things that they're talking about. And, you know, that is maybe endlessly frustrating for those of us who are kind of close observers to this. But to the general public, it's just a huge disservice. And I think we have a lot of work to do to educate folks and kind of try to undo some of the damage that's already been done by lawmakers and pundits kind of misinforming the public about Section 230 and how it works. But really, in the end, you know, that's sort of a defensive part of the strategy. But what we also need to do is present a real and meaningful strategy for what does need to happen. And I think part of what's hard about that, and part of the reason that I think politicians have
Starting point is 00:33:00 gravitated toward this, like Section 230 as a scapegoat thing, is that there isn't one silver bullet thing that's going to like fix the internet and make it all awesome again. And, you know, nobody really wants to admit that it's more of kind of a host of policies, I think, legislation that bans surveillance capitalist business models by heavily limiting the types of data that companies can collect, legislation that bans micro-targeted advertising. I think finding ways to legislate, heavily disincentivizing the types of non-transparent algorithmic amplification that we were just talking about, like antitrust action is totally on the table.
Starting point is 00:33:37 But in the end, we can't depend on the US government to protect people's freedom or protect our human rights or hold these types of companies accountable. I think we also need to be building our own tools, building our own platforms, and trying to find technical alternatives as well. It's the same in the surveillance fight. We always try to strike down bad legislation or get good legislation passed to rein in government's sprawling surveillance apparatuses. But I also always tell people to like download Signal and have a good password on your phone. There's sort of always this two prong approach, right? And I think the same is true. When we're talking about the harms of big tech companies, we need to be fighting for policies to rein in
Starting point is 00:34:17 their power to start breaking them up and breaking them down and eventually dismantling them. But we also need to be building alternatives because the reality is like social media is good. The fact that more people than ever before in history now have a platform and a voice and marginalized people's voices and experiences can be amplified and shared and heard rather than only us being able to hear from Fox News and CNN, that is good. And I think this brings me back to like, you know, the film, The Social Dilemma that, you know, folks were talking about. And I did a long thread on it, because I feel like there's something in that sector of organizing right now that sort of wants to treat the internet as if it's cigarettes, right? Like, it's just
Starting point is 00:34:58 bad, and it's addictive. And, you know, there's a ways in which that's true in the sense that these products have been built to harm us and to profit, not to amplify marginalized voices. And so they're not built for us. But I think if you look at it, I think it's hard to argue that the world is a less democratic place than it was in the times when, again, Fox News and CNN essentially got to define what reality was. I think part of what's happening now is we're just sort of grappling with the ripple effects of that. And the fact that there isn't quite a centralized controlled narrative has both really good effects and really bad effects, right? So we've been talking a lot about the bad effects, the ways
Starting point is 00:35:41 that harmful ideologies have become more mainstream. I don't think we talk enough about the fact that abolish ICE and defund the police as fairly mainstream calls to action would not have been possible and would not have happened without social media and people being able to have a platform online. And so I think we need to grapple with that complexity to be able to figure out what types of policies we want and what type of internet and world we want to fight for. I think it's a really good point. And I would just extend it and say, you know, without social media, would we have seen the same kind of Bernie Sanders campaign or Alexandria Ocasio-Cortez or someone like that being able to really grab onto the public mood and public attention if they just had to rely on cable media, right? And I think the answer is no. Now, so I want to shift back to kind of the international perspective just for a minute. When we look at the effects of these platforms, they're located in the United States and often regulated in the United States, but the regulations
Starting point is 00:36:38 that are put on them in the United States have global implications, right? And if we look in other countries, the definition of free speech might be a bit different and might not be as, I guess, absolute as you would see in the United States. I think it's fair to say that you would see more restrictions on free speech in, say, Canada or parts of Europe or whatnot. And so when we look at how these laws are interpreted and enforced, you know, as we talked about before, when we look at international speech and how these companies respond to activism and whatnot in other parts of the world, we can look at how they are suppressing activists in Palestine or Egypt. But then we can also look at, say, Facebook's contribution to the genocide in Myanmar, right,
Starting point is 00:37:21 as kind of a negative way that they're not moderating properly. And in some countries, we've seen that different laws have been passed to put greater regulations on social media companies, like in Germany, where they have the Network Enforcement Act or NetzDG, which is kind of associated with making Germany this place where you can switch your Twitter account to Germany and you won't see any Nazi content, right? Because it all has to be removed in Germany. And so I know I touched on a whole ton of different points there and that it's a big question, but do you think when drafters of these laws or activists hoping to, you know, retain them or tweak them slightly, you know, do they have a responsibility to think about
Starting point is 00:37:59 the global implications of this legislation? And do you think there are instances where restrictions on things that get posted are justified because of, you know, the impacts that that can have on an international scale? Super great question. And, you know, for the second part, I'll be the first to admit, like, I just don't know. Jillian York, who's a good friend and from the Electronic Frontier Foundation has a new book out. And I know she's been kind of deep in this mire of content moderation as it relates to both human rights and racial justice and free expression. And I think that's sort of her take too. It's just like, these discussions are so deeply complicated. And in some ways, there is no single right answer. There's going to be trade-offs in kind of how we
Starting point is 00:38:42 do this. And again, to me, that's why I think it comes back to recognizing that monopoly power is kind of the root of the problem. If we accept that we're just not sure, and maybe no one human or even group of humans could kind of get it right every single time, that's why we shouldn't centralize so much and have so much power in the hands of so few companies, because they're just always going to get it wrong in some ways, right? But I think to your broader point, I think that also just underscores how dangerous it is when politicians in the US turn these complex tech policy discussions into campaign talking points and kind of bring them into their top level partisan pissing match.
Starting point is 00:39:24 Because as you said, policies that get set in the US have this tremendously outsized impact on the rest of the world. We're kind of as imperialist with our internet policies as we are with our military. So, you know, I think the vast majority of certainly politicians in the US are not thinking about that, or at least not helpfully thinking about that as they get into these debates. But I think, frankly, a lot of organizations in the US, even progressive organizations, are often not thinking about the potential ramifications of the things that they're calling for, or the types of narratives that they're amplifying, again, particularly around moderation.
Starting point is 00:40:00 And I think we tend to just be like, our view on this must be right. And why won't Facebook just take down the stuff that I don't like? And folks just are not thinking about what that looks like when you try to do it for dozens of countries with dozens of different laws around speech, when you're trying to moderate memes and jokes and sarcasm, or reclaimed terms by marginalized groups in dozens of different languages, which again, is not to let the companies off the hook or say like, well, it's just a hard job and like they're doing their best. They're not doing their best. They're doing what they think is best to make
Starting point is 00:40:34 them money and keep their PR as okay as it can be. But we have to contend with that reality because otherwise we end up calling for things that could actually really harm people in places outside the US. And I think we have a responsibility. You know, here we are in the belly of the beast, our government has this tremendous global influence. And I think we need to be responsible and accountable as we pressure them to make sure that we don't push for policies that as you said, end up silencing or harming Palestinians or folks around the world who are fighting back against oppressive governments or oppression in their communities. Because in the end, this isn't just some fight for Democrats and Republicans in the US election. The internet as a whole and this networked infrastructure is one of the most profoundly
Starting point is 00:41:20 transformative tools that we as a human society have created. And the battle over the direction that it goes is a battle to decide whether technology is largely used to empower and liberate or to oppress. And I think that's the kind of crossroads that we're at. And we need to be thoughtful about how we engage, because we might end up pushing ourselves down the path toward technology being largely used for oppression under the name of kind of opposing it. What you described about how, say, these social media platforms look at what is happening internationally reminds me of that internal letter that was released a month or two ago by an employee who quit and was kind of handling those
Starting point is 00:42:00 things and said, like, Facebook is just not concerned with what happens in countries that are not going to get picked up by the US media, you know, like, they don't really care. And so I think that's a really big issue. And now I want to ask one final thing. Obviously, the United States is about to have a really historic election at the beginning of November. Whatever result comes out of that election, do you think that politicians are going to continue to focus on Section 230 as a problem? Or do you think that possibly the result of the election will cause the focus of tech policy to shift to more important issues than this law that I think is really in our sights because the Republicans really chose to focus on it? I mean, if there's one thing 2020 has taught me, it's that I should never make
Starting point is 00:42:45 predictions about anything, because I'm just going to end up looking like a fool, like everyone else that makes predictions. But that said, I do think, you know, this is actually one thing that I think we can reasonably predict, which is that this debate over Section 230 is not going away, regardless of what happens, you know, next week in the election. You know, Joe Biden has called to revoke it. Trump has called to repeal it. I don't know what the difference between those two words are in an actual policy lens. And I think, again, just lawmakers on both sides of the aisle are increasingly turning to this as kind of a proxy for big tech, basically. Their anti-230 bills are really anti-big
Starting point is 00:43:21 tech bills. And part of me feels optimistic about that in a way that they are tapping into or recognizing this very real and growing energy to quote unquote, do something about the growing power of big tech companies. And that's actually good. And so I think what happens is up to us. The future is unwritten. And so it's sort of our job to try to channel this ambient anger and anxiety that we all have about big tech companies and social media and channel it productively into pushing for real policies, real change, structural change, systemic change to address the harms that we see while preserving the transformative and potentially empowering power of the internet. I completely agree. And now, before I let you go, obviously, you are the deputy director with Fight for the Future. Are there any important campaigns that
Starting point is 00:44:09 you're working on right now that you think the listeners should know about? Yeah, for sure. I mean, we've always got our hands in a bunch of different stuff, but I'll shout out a couple really quick. So if you're interested in everything I mentioned about Section 230 and preventing lawmakers from either side of the aisle from blowing up this foundational law that has essentially allowed the best parts of the internet to exist. We have a site, saveonlinefreespeech.org, where you can easily submit a comment to the FCC and to your federal lawmakers with just a couple of clicks. And then I'll quickly shout out, you know, again, we have sort of this dual mission of protecting the positive transformative aspects of technology while fighting against uses of technology to take away people's basic human rights. So on the other side of that, we've been one of the leading organizations calling for an
Starting point is 00:44:53 outright ban on facial recognition surveillance, which is a uniquely dangerous form of biometric surveillance. And we have a campaign going with dozens of other organizations at BanFacialRecognition.com. So those would be the two campaigns I'd shout out today, SaveOnlineFreeSpeech.org and BanFacialRecognition.com. And of course, you can always find us at FightForTheFuture.org to just learn more about the organization and the work that we do. Awesome. And I'll put those links in the show notes so everyone can go check it out. You know, Evan, I follow your work a lot and I really respect the work that you do. Thank you so
Starting point is 00:45:24 much for taking the time to chat with me today. Yeah, thanks so much for having me. Evan Greer is the deputy director of Fight for the Future and you can find information on a few of their campaigns in the show notes. You can also follow Evan on Twitter at Evan underscore Greer. You can follow me, Paris Marx, at Paris Marx and you can follow the show at Tech Won't Save Us. Tech Won't Save Us is part of the Harbinger Media Network. And if you want to find out more information about that, you can find it in the show notes. And if you like the show and you want to support the work that I put into making it every week, you can go to patreon.com slash tech won't save us and become a supporter.
Starting point is 00:45:59 Thanks so much for listening.
