a16z Podcast - All about Section 230: What It Does and Doesn't Say

Episode Date: June 9, 2020

We cover the tricky but important topic of Section 230 of the Communications Decency Act. The 1996 law has been in the headlines a lot recently, in the context of Twitter, the president’s tweets... and an executive order put out by the White House on “preventing online censorship”. All of this is playing out against the broader, more profound cultural context and events around the death of George Floyd in Minnesota and beyond, and ongoing old-new debates around content moderation on social media. [Please note this episode was first published May 31.] To make sense of only the technology and policy aspects of Section 230 specifically — and where the First Amendment, content moderation, and more come in — a16z host Sonal Chokshi brings on our first-ever outside guest for 16 Minutes, Mike Masnick, founder of the digital-native policy think tank Copia Institute and editor of the longtime news & analysis site Techdirt.com (which also features an online symposium for experts discussing difficult policy topics). Masnick has written extensively about these topics — not just recently but for years — along with others in media recently attempting to explain what’s going on and dissect what the executive order purports to do (some are even tracking different versions as well). So what’s hype/ what’s real — given this show’s throughline! — around what CDA 230 precisely does and doesn’t do, the role of agencies like the FCC, and more? What are the nuances and exceptions, and how do we tease apart the most common (yet incorrect) rhetorical arguments such as “platform vs. publisher”, “like a utility/ phone company”, “public forum/square” and so on? Finally: how does and doesn’t Section 230 connect to the First Amendment when it comes to companies vs. governments; what does “good faith” really mean and what are possible paths and ways forward among the divisive debates around content moderation? All this and more in this extra-long explainer episode of 16 Minutes, shared here for longtime listeners of the a16z Podcast. image: presidential tweet activity/ Wikimedia Commons

Transcript
Starting point is 00:00:00 Hi, everyone. Welcome to this week's special episode of 16 Minutes on the News, our show where we cover the headlines and tech trends from our vantage point in tech, offering analysis, frameworks, explainers, and more. I'm Sonal, your host, and today's episode is special, not just because it's three times 16 minutes long, but because this is our very first time on the show, now 32 episodes in, bringing on a special outside guest. Our topic is Section 230 of the Communications Decency Act, which makes sure that interactive websites are not liable for their users' content, as they are distributors, not producers, of content. And so we also cover the recent news around Twitter and the president's tweets and the subsequent executive order, quote, preventing online censorship, unquote, all from this week, which is also all playing out against the broader, more profound cultural context of George Floyd, Minnesota, and way beyond. In this episode, we do a deeper dive on the technology and policy aspects of platforms and content moderation, including an explainer on the evolution of Section 230 and all the key nuances of the debates around it and content moderation
Starting point is 00:01:12 to understand. My special guest is Mike Masnick, founder and editor-in-chief of Techdirt, which is a leading tech news and analysis website and my longtime go-to source for these topics. He also, by the way, recently launched a new section dedicated to tech and COVID coverage, as well as a new forum with different voices called the Greenhouse, which focuses on tricky tech topics and lots of the tradeoffs involved, like privacy and so on. You can find both of those at techdirt.com. So that's the intro and why this is a special 3X episode of 16 Minutes. Let me now kick things off by asking Mike, since the premise of this show is to tease apart what's hype, what's real, and to break down and explain what Section 230 does and doesn't say, especially since the broader
Starting point is 00:01:57 Communications Decency Act, Mike, has been around since 1996? Yeah, the law is actually very short and very simple and very straightforward. And I should note that the Communications Decency Act itself did have many more things that it did, but all of that was determined to be unconstitutional. So the only thing that survives is Section 230. There was a big lawsuit, ACLU versus Reno, in the late 90s, and that threw out most of the Communications Decency Act is unconstitutional. The thing that remained was 230. So what Section 230 does, it really does two things, and they're somewhat related, and they're both incredibly important to the functioning of the modern Internet.
Starting point is 00:02:35 The first thing that it does is it puts the liability on the person actually violating the law. So if someone goes onto a website and says something that is defamatory or otherwise violates the law, the liability for that action belongs on the person who is speaking. and not the platform or site that is hosting that content. The second thing that it does is that it says if a website chooses to moderate its content or anything that is put on the site, then it is not liable for those moderation choices.
Starting point is 00:03:15 I'm so glad you're bringing that up because this is the number one thing I wanted to start with, which is the flip side of it, not just the protection, but the fact that they can moderate whatever they want. So can you actually break that down, Mike, What does that mean? So where it came from, which I think is important, is to give sort of the history very quickly,
Starting point is 00:03:30 is that there were a series of lawsuits in the early 90s that tried to hold internet services that had moderated some content and there were defamation cases effectively brought up. The most famous one is Stratton-Okmont versus Prodigy. And as a little fun aside, Stratton-Okmont was a financial firm that was immortalized in the movie The Wolf of Wall Street. That's a fun fact. Yes. And Stratton Oakmont got upset because people in Prodigy's message boards were accusing the company of being up to no good. And so they sued Prodigy.
Starting point is 00:04:04 And a court said that Prodigy was liable for the libelous statements because Prodigy positioned itself as a family-friendly service that would moderate content. And because it moderated some content to try and take down cursing or porn, anything that it felt was in. inappropriate, that anything it left up, according to the judge, it was now liable for as if it had written that content itself. And that freaked out people in Congress, namely at the time two members of the House, Chris Cox, a Republican, and Ron Wyden, the Democrat. And they put together Section 230 to say, wait, that's crazy. If a website wants to moderate content to create, for example, a family-friendly environment, it shouldn't get sued for the content that it chose not to take down.
Starting point is 00:04:57 And so that section of CDA 230 is designed to make sure that any website can moderate content how it sees fit in good faith to present the content in a way that meets with the goals of the service. Right. And to be clear, these are not just, quote, content moderation things. Like it could be spammy posts. and the kind of thing that would actually turn you off from using a service or will want to be a family-friendly site and getting rid of porn. The companies can use whatever discretion they wanted as long as it complied with their terms of services, basically, which that itself could change. But it was interesting about this backstory, it's a very small thing that was preserved, but it had huge consequences for where we are today in terms of the Internet we have today.
Starting point is 00:05:39 Whether it's going on a recipe swap site, whether it's sharing photos of family and friends, whether it's posting a car for sale, there's so many layers to this. It has allowed the modern internet to thrive. One of the best lines I heard, I think this is actually in Verge, is that in many ways this act was a gift not to big companies, but a gift to the internet. I think the point is not that it is the biggest gift to big internet companies or that it's the biggest gift to the internet.
Starting point is 00:06:09 I think it's really the biggest gift to free speech for everybody, right? Because if you don't have 230 set up the way it is set up, there would be much more limited ability for users to actually post content online. And it's a little bit crazy to me that people think that changing or getting rid of 230 will enable more free speech when the balances that are set up within 230 are very much designed as a gift to free speech. Okay. So now my question for you is, given that we did enter this world where user-generated content,
Starting point is 00:06:40 whether on sites like YouTube with videos or educational or non-educational or political or non-political. We now live in a world where a lot of these sites, people often use a framing of platform versus publisher, which I think is kind of meaningless and arbitrary. And then they also sometimes use the ridiculous phrase, platisher, as a hybrid of the two. I'd love to get your take on that framing and how that doesn't or does apply here. So one of the things that comes up over and over again, you see people say, well, if they moderate or if they change content, they are no longer a publisher. they are now a platform, and therefore they lose Section 230 protection. The law makes no distinction between platform and publisher.
Starting point is 00:07:22 The law is not designed to protect one or the other or say that there is a difference between it. There's no classification. It's not a safe harbor where you have to meet, you know, A, B, and C criteria in order to get the protections. You just need to be an interactive computer service that hosts third-party content. So the debate over are they a publisher or are they a platform, is completely meaningless under the law.
Starting point is 00:07:47 Let's actually talk about some recent events because I think it's a useful case in sort of understanding 230 and then we can break down some of the recent news as well around that. So one recent event is at Twitter. It added a feature earlier this week where one of the president's tweets,
Starting point is 00:08:05 they added a link to other sites as a sort of quote fact check mechanism. And this could be contentious because a lot of people do not actually believe that everything the media writes is correct. That said, it linked to other third-party news sites and it kind of labeled it as a fact-check feature. Then they added another thing where they kept a tweet from the president up in the context of the Minnesota George Floyd protests, but put like a limit on it where people could retweet with
Starting point is 00:08:39 comment, but they couldn't retweet, like, or reply to it because it violated their site's terms of services around speech that incites violence. And so in case one, they were adding what they quote called a fact check layer. And in case two, they were adhering to their own terms of service around spreading violent speech, which they said they kept up in the public interest. So that's a super, super high level summary of what happened so far. And so my question for you now, Mike, is how does and doesn't section 230 apply here? Because in this case, the fact check could be construed commentary content, not just third-party content. It's a really complex topic, and each layer of it adds new complexities, and each of those
Starting point is 00:09:24 complexities is in some way important. Let's do the two tweets separately. The first tweet, they added something, and just as a minor correction, and this has been going around a lot, people said this is the first time that Twitter had used this. Twitter has been using that feature over the last couple months, but this is the first time they had done it on a politician's tweet. And so what's amusing to me is the time I saw it used before, which was about two weeks earlier, it was used to debunk a Jimmy Kimmel video that was making fun of Mike Pence. And Twitter put on a thing that said, this is manipulated media. It is not accurate. It was a tweet that had gone viral. It was making fun of something that Pence had done. And Twitter stepped in and said,
Starting point is 00:10:07 no, this is incorrect or manipulated media. And it had a link to third-party content saying, why it was manipulated. And so that is allowed under 230. What it is doing is adding more speech. It is linking to other sources. It is providing more context. The part that is not protected by 230 was never protected by 230 and no change to 230 is going to change that is any speech that comes directly from Twitter itself. So in this case, that was the very narrow line that was put under Trump's tweet that said something like get more facts about mail-in ballots or something to that effect.
Starting point is 00:10:48 That particular line is from Twitter itself and therefore is not protected by 230 but is completely protected by the First Amendment. The third party content that they link to, that is then protected by 230. And then the second tweet, Twitter did something new, which this one I had not seen before, in which they put up a note that said
Starting point is 00:11:07 that this tweet violates the terms of service. However, they want to keep it up because they feel that it is relevant and important for people to be able to see the content, but to understand that it violated the terms of service. So they're adding more context and they limited the ability for people to retweet it or reply to it. And again, this is 100% allowed by 230. It did not remove the content. It didn't take it down. Even if Twitter chose to take it down or take down his speech or take down that tweet, that wouldn't violate his free speech rights. the First Amendment protects people from the government acting, not from a company.
Starting point is 00:11:42 Now, I have also since seen Twitter now use that this tweet violates our terms of service, but we are leaving it up because it is newsworthy message on at least one other tweet this morning from somebody who was defending the president. Let me ask you another question, and then we can break down the executive order, which is, what do you make, since we already kind of debunk this platform publisher distinction, that these companies that provide these interactive web services are like phone companies. Like, they always use this line that, oh, but imagine if the phone company decided to take down that conversation you had and interrupted you in the middle.
Starting point is 00:12:17 What do you make of that analogy? Yeah. So that is popular in a wide variety of circles across the political spectrum. That doesn't fall into any sort of partisan viewpoint and sort of the public utility argument. And that, by the way, of course, reminds me of net neutrality, which we've both covered quite a bit. Right. There are some funny parallels between this situation and net neutrality and that a lot of people's positions are reversed from one to the other.
Starting point is 00:12:38 We don't have the time to talk about net neutrality, but I covered it extensively at Wired, as you know, from all different perspectives, from carriers to FCC to internet companies, you name it. That is exactly what's fascinating to me is that the positions and the sides are inverted in this case. So anyway, what do you make of the phone and common carrier type of argument? Yeah. So it's an important one to understand, but I don't think it applies. And I think that most people who are deeply familiar with public utilities and what is required
Starting point is 00:13:06 to be declared a public utility would recognize that. that internet services, what's sometimes called edge providers, which are the services that you and I use every day that we interact with, that they do not qualify and they do not meet the requirements of a typical traditional utility service. And to clarify what that means, usually a utility service is something that is offered to everybody, but also is something that is commodified. The telephone service, if you use AT&T or Sprint or Verizon, you are getting the exact same service.
Starting point is 00:13:38 no real differentiation in terms of the service that you're getting, it is a commodity. One provider to the other, same thing. That is not the case with various internet edge providers, you know, Google, YouTube, Twitter, Facebook, each of them have all of these different features and all these different things. They are not one-to-one replaceable. It is not a commodity that you can switch out. And therefore, the public utility argument does not really apply. You can argue that there should be some other kind of classification, and some people argue that. But comparing them directly to a telephone service is different because it's not core infrastructure. It's things that are at the edge, things that you use as a service provided
Starting point is 00:14:17 beyond that. What do you then make of this public square slash public forum argument? People say that Twitter or Facebook shouldn't be allowed to do any moderation. They shouldn't be able to take down any content because it's the new public square and therefore it violates their rights. They will often point to two different lawsuits in making this argument. One is Prune Yard and the other is Packingham. And these two cases come up and they've been brought up in a whole bunch of lawsuits. And I'll just say that every time they've been brought up in a lawsuit to argue that a social media site is the public square they have failed. And I have not seen a single judge anywhere agree that these things make any sense in this context.
Starting point is 00:14:53 But just to give the quick background on the two cases and they go deep, but I'm going to give as high level as I can and as quick as I can, Prunyard was a case about a mall that was trying to kick people out effectively. and it was argued that the mall was a gathering place and became the sort of de facto public square and that took away some of the rights of the private property owner that owned the mall to kick people out. And the court said that it was a defective public square and they could not kick people out.
Starting point is 00:15:19 Now it is an extraordinarily limited ruling and extraordinarily focused on the facts of that case which was that this was effectively the only place in town that anyone could gather that the mall owner sort of acted as a local government and was therefore replacing government functions, functions that normally were done exclusively by the government. Every other case after that that references prune yard has effectively limited it.
Starting point is 00:15:44 It only applies in a very, very narrow situation, which is basically prune yard and prune yard alone. You can't just say that something is a public square. The Packingham case is a more recent case. It was a Supreme Court case that kicked out a state law that was basically saying if criminals had done some sort of criminal activity online, part of their punishment could be. that they are barred from using the internet. And the Supreme Court said, you cannot pass a law that kicks people off of the internet
Starting point is 00:16:11 because the internet is so essential to people's lives and ability to work and all that kind of stuff. So people have taken that to mean like, oh, then the services themselves cannot kick people off. But that is not what the case has said. It has just said that the government cannot pass a law that forces people offline. There is a third case that people never mentioned,
Starting point is 00:16:31 but is the most important case. It was just decided last summer. and that is the Manhattan News Network case. And this was a case, and we'll get into the details, but what the Supreme Court ruling just last year, and it was written by Brett Kavanaugh, who was the most recent appointment, and his ruling said that you can't just declare
Starting point is 00:16:51 any place where people can speak, even if a lot of people speak there, a public square, and it doesn't become a state action, it doesn't take on government control. The idea that something is a public square, or that there is state action involved from a private company only applies in a very limited set of circumstances where that service or operation is, again, replacing activities that were exclusively
Starting point is 00:17:18 traditionally done by the government. And that ruling makes it very clear that things that Twitter and Facebook and YouTube and every other website out there do does not qualify. They are not replacing government. They are not offering services that were traditionally only given by the government. Right. This basically means that if the sites that do perform services that are exclusively, a service provided by the government, it would be an example like if the government decided that all tax reimbursement would be done entirely online and no longer through the U.S. Postal Service. And therefore, that would then have to comply with being treated
Starting point is 00:17:53 as a thing that would have these provisions on it. Right. There could be an example. Something that was traditionally and exclusively handled by the government. And so I could see an argument where, like, someone could not be kicked off or blocked because that would imply state action issues. So now let's talk about the news, again, as a way to explain what CDA 230 is and isn't. We've explained and debunk some of the myths and framings around arguments of platforms and publishers and analogies to phone systems. So now let's actually just talk very briefly about the recent executive order that was issued this week because this shows purpose is to tease apart what's hype, what's real. And this is very rich in that very domain. Let's talk about what the executive order
Starting point is 00:18:35 can and can't do here, or what it purports to do and doesn't do. Right. So there were drafts of this kind of executive order that made the rounds over the last two years. This is something that the White House has kind of been thinking about. I reported on it and a number of other news sites reported on it and different drafts were leaked out to the press all about these earlier versions of this executive order. And the story is that in the past, they've passed this around to different agencies like the FCC and the FTC. And the message that the White House got back was that this was unconstitutional and they couldn't do any of this. But it seemed that they took it out of the drawer and dusted it off and put a fresh coat of paint on it. And it says a lot of very angry stuff about the internet
Starting point is 00:19:16 services and platforms and the way that they handle moderation. There's like seven different sections and the two sort of scaryish parts of the executive order that are concerning is that to one extent it effectively tasks the FCC with coming up with a new interpretation of 230 where it hints very strongly what the FCC's interpretation should be and that interpretation is totally at odds with both what is written in the law and what 20 years of case law have said. And that's worrisome only to the extent that anyone would ever actually pay attention to that FCC interpretation. The FCC in ACLU versus Reno, which is the lawsuit that rejected and made most of the Communications Decency Act
Starting point is 00:20:01 unconstitutional, made it extremely clear that the FCC has no authority whatsoever to regulate websites. None. Zero. Zilch. It's not even an open question. They cannot. And just to be very clear here, the FCC Federal Communications Commission, it's an independent agency. It has a five-member commission. I believe there's currently three Republicans, two Democrats. And what it can do and can't do, because it comes up a lot, FCC can do this, can't do this. Like it cannot make laws, but it does have the ability to sort of interpret existing laws and put out certain rulemaking things. Like they do, these like requests for comments, which create public records of people's commentary and whatnot. And they also have
Starting point is 00:20:42 the power to ask for documents and they can do distracting things, but they may not have legal making authorities. So I think it'd be very helpful for you to break down a bit more specifically hear what they can get away with and also can't. Yeah, so they can do rulemaking, and that is a long-involved process. And interestingly, because it is an independent agency, the president cannot instruct them to do something. So the executive order instructs the NTIA, which is part of the Commerce Department, to ask the FCC to do this.
Starting point is 00:21:10 And technically, the FCC does not need to do this, but the FCC will certainly feel the pressure to probably do something. The FCC could certainly create a lot of nuisance. And yes, there will be comment periods and people have to testify and put in comments. And, you know, as we saw with the net neutrality hearing, the comment system was filled up with bots and nonsense. So the commenting and the rulemaking process is a bit fraught with distraction. And so, yes, it can make rulemaking and then it can do something to enforce that rulemaking. If the rulemaking covers things that it is authorized, the FCC is authorized to have,
Starting point is 00:21:50 have regulatory power over by Congress. That are in its jurisdiction, so to speak. That are in its jurisdiction. And websites are not, are clearly not. Congress has never said that websites are within the FCC's jurisdiction. And the main court case that tested the theory that websites were in the FCC's jurisdiction has said no. And one other thing that I do want to note about the executive order and the request to the FCC
Starting point is 00:22:12 is that it is couched in a term that totally misinterprets CDA 230. Which is? So earlier, I talked about the two different. parts of the CDA, that one is about liability on third-party content, and one is about the platform's protection in moderation. And there are a few very narrow conditions on that moderation ability. It says it has to be in good faith, and there's a list of different kinds of content that you can moderate that includes otherwise objectional content. That otherwise objectional content is very, very broad. It can cover basically whatever the platform thinks is otherwise objectionable.
Starting point is 00:22:46 And good faith, in order to argue good faith, would open up a whole other First Amendment can of worms. But what the instructions to the FCC indicate is that those limitations, the good faith, otherwise objectionable stuff, that somehow applies to the first part of CDA 230, which is the part about not being responsible for a third-party content. That has never been the case. Nobody's ever suggested it is the case. It has never shown up in any lawsuit. It has never been argued in a legitimate way. And yet the executive order suggests that the FCC should look into whether or not that interpretation makes sense. So you're basically saying that the two provisions of CDA 230
Starting point is 00:23:22 that people are not liable for libelous content that their users might put on their side or any other content their users might put on their sites is being conflated in this case with the good faith aspect of being able to discretionarily moderate in good faith. Exactly. They're sort of mixing those two things up and I would argue that is done in bad faith
Starting point is 00:23:42 to make use of the good faith limitation on this. So what other aspects of the things? the executive order, again, without going into breaking down every little detail, because this is really more about the underlying principles, would you say have impact for understanding and really interpreting and explaining what CDA 230 is and isn't? Yeah. So one important part, and this was added at the last minute, perhaps literally, because the draft that was leaked the night before did not have this, but the final executive order did have it, is that it instructs the attorney general to draft a law, oddly not a federal law, but to draft like a reference state law to
Starting point is 00:24:19 effectively reinterpret CDA 230 in a way that diminishes its power. And that could be problematic. Here's an aside that I probably should have brought up earlier, which is that 230 is not a universal immunity. It is not as universal as people make it out to be. And one thing that it does not cover is federal criminal liability. So if you break a federal law, drug trafficking, human trafficking, child pornography, child pornography, all of that stuff, the sites are still liable. 230 specifically exempts that. So the Justice Department and the FBI, if they felt that any of these platforms were violating federal law, they have always, always under 230 been able to go after those sites. And that includes third-party content. There's a whole bunch of conditions on that.
Starting point is 00:25:11 So if there is drug dealing, human trafficking, those things going on on those sites, those sites potentially could be criminally liable. So the Attorney General and the Justice Department, the FBI, have always had the leeway to make use of the law to go after these sites. And yet, for the last few months, the Attorney General has been attacking 230 and acting as if it limited his power in some way when it simply does not. But now he can draft a law. And he's sort of already been doing that.
Starting point is 00:25:40 What's so amazing about what you just said, though, Mike, the part about the federal part actually immediately reminded me of the encryption debate, which we actually have discussed on this very show, 16 minutes, and listeners can listen to our reframing of that debate. Another place where policy makers on both sides have very conflicted views on.
Starting point is 00:25:56 Yeah, and there's already a bill that's in Congress that was put together with the help of the Attorney General and it sort of ties the 230 debate to the encryption debate. And it's very convoluted. Oh, this is the Earnit. The Earnit Act. And what it has the potential to do is to say that if you are offering end-to-end encryption on your service, you no longer get 230 protections.
Starting point is 00:26:20 It's a little more complicated than that. But his ability to do that in a manner that would remain constitutional is a pretty big question. But, again, it could create a huge nuisance. And part of this is also he's going to establish a working group. And so there will be discussions and roundtables and panels and hearings and subpoenas and all sorts of things that are going to happen in the meantime that are designed to be an intimidation tactic. To try in the phrase that everyone uses is work the refs, right? It means basically, hey, Twitter, Facebook, YouTube.
Starting point is 00:26:53 If you don't want us to keep causing trouble for you, maybe don't be mean to us. You know, don't fact check us. Don't limit our tweets. Don't limit our content. Don't put extra notices on it or other limitations on it. Because the more you do that, the more of a pain we're going to be to you. So summarize at a super high level, the FCC has extremely limited jurisdiction over websites, specifically. The attorney general does have some ability.
Starting point is 00:27:21 We haven't talked about what's not in the executive order, but this is where there's a little bit of the dust storm is very distracting, which is that Congress could choose to rewrite policy. if they wanted, using this as an incitement for that. Yeah, there are people in both the House and Senate who have said that they will introduce legislation based on this and try and do more than the executive order can do. Whether or not that legislation can actually go anywhere, any such legislation would almost certainly be subject immediately to a First Amendment challenge and would likely fail, but that would be many years into the future. Right. So we forgot one bit of the executive order, which is probably the only legit thing in there seemingly. which is that part of this had the threat of limiting any government dollars of advertising going to these sites. And I, by the way, did a little quick check.
Starting point is 00:28:09 And based on federal procurement records, this is according to the Verge, apparently only $200,000 of advertising have been provided to Twitter specifically since 2008, which sounds a little crazy to me. It can't be getting everything that seems way too low. But even still, it does suggest that the government advertising is actually a very, tiny piece of the bottom line revenues of these companies. But I'm curious for your take on that. Yeah. So that is one thing that an executive order actually can do, right, which is instruct certain federal agencies in terms of how they're spending their money in some form or another.
Starting point is 00:28:43 Oh, by the way, to be clear, when you say their money, we're actually still talking about taxpayers here. Yes, yes. Mostly taxpayer money. There are a few exceptions, but mostly taxpayer money is what we're talking about here. And what's funny is the executive order sort of implies that it is telling agencies to stop spending on these websites. But it doesn't actually say that. It says they have to account for what they are spending and they have to submit it to the Office of Management and Budget. And then something may happen in the future based on that. And the implication is that they should not be spending. So there could be a tiny, tiny, tiny, minuscule drop in spending. And what's silly, of course, is that if you look, I would bet that the
Starting point is 00:29:24 various political campaigns of everyone who is cheering this on are still spending. much more money themselves as campaigns on these social media platforms in order to advertise. No question on all sides. So the one concern from a societal perspective is that the few federal agencies that do advertise on social media actually probably have pretty good reason for that. And the one big example is the Census Bureau. And it's 2020 and we're in the midst of supposedly collecting the census. I forgot about that.
Starting point is 00:29:56 Because every 10 years, we have to do a census. And one of the best ways that the government has found to get out the word and to get people to actually fill out their census forms is through advertising on social media. And therefore, pulling that budget and telling the Census Bureau that they cannot advertise actually could limit the ability of the Census Bureau to collect the data that they are required under the Constitution to collect. So, Mike, this is a wonderful summary so far of what Section 230, the Communications Decency Act does and doesn't allow. of the recent news, what's hype, what's real, and sort of really using that to explain sort of these laws that have allowed our modern internet. I will be linking just in the show notes
Starting point is 00:30:39 that people know to a lot of the articles that did good explainers, a lot of your wonderful pieces in particular, as well as the actual executive order and the analysis of the differences that Eric Goldman, our mutual friend, put up and did. One question I do have for you, this is very much playing out against a broader backdrop of debates around big tech.
Starting point is 00:30:59 debates around content moderation. And so one question I have is, given that the recent example did not necessarily remove or necessarily even fully restrict, except maybe in spread and engagement and scale, there's been a lot of complaints about things like shadow banning. There's also a lot of conflation between content and behaviors, like what sites can do versus what they say. And for me, it seems like when it comes to this content moderation debate, you're damned if you do and you're damned if you don't. I'm curious for your thoughts on, A, where this fits in that longer, broader escape of that debate? And then, B, is there a way forward in your mind? So I put a joke on Techord a few months ago, and I keep referring to it over and over again.
Starting point is 00:31:37 There's a famous economist Kenneth Arrow. He had this thing called the Arrow Impossibility theorem, which is he looked at all different kinds of voting systems and argued that none of them can accurately reflect the will of the populace. And so I did a play on that, which I called humbly, the Maznik impossibility theorem. You are very humble guy. We go way back. I think it's been quite a number of years I've known you. I don't even remember how long ago that was, but it was way back because... It might be like 15, no, not 50, maybe 15, almost like 12 years now. I don't know.
Starting point is 00:32:08 I love that you named it after yourself. I want to hear about the Maznik impossibility theorem. It is that it is impossible to do content moderation well. And there are a variety of reasons for that. One being that any kind of content moderation is going to piss off someone, and that is generally the person whose content was moderated. The second element of it, too, is that... that so much of this is subjective decision-making.
Starting point is 00:32:31 And everybody has a different view on these things, and everyone has a different determination on this. And we ran a sort of conference event a few years ago where we made everyone in the audience have to be content moderators for a number of different case studies, effectively. And we had 100 content moderator experts in the audience, and none of them agreed.
Starting point is 00:32:51 On every case that we did, people had strong disagreements over what should have been done about this particular content. And then on top of that, you just have the law of large numbers. And if you're making decisions on 500 million pieces of content a day and you get at 99.999% correct, you're still going to have a huge number of mistakes, however you define mistakes. You know, there are things that are going to be missed. There are things that are going to be taken down that probably should not have been taken down.
Starting point is 00:33:18 That is going to happen. There's no way to avoid that. And in absolute numbers, because the overall set is so large, it's going to be taken. to appear like these companies are incompetent in how they moderate content. That is just the reality of the process of moderating content, and nothing is going to fix that. Hiring more human moderates is not going to fix that. Building better AI is not going to fix that. You can improve on it, but one of the nice things about Section 230, and the way it is structured in that there is no liability for the moderation, is that it allows for different experimentation to happen. So you have
Starting point is 00:33:53 very different approaches. And everybody focuses on Twitter and Facebook. and YouTube, but then you have to take into account tons of other sites, including Wikipedia. Wikipedia is allowed to have all these individuals editing their platform because of 230. Or you look at another site like Reddit, right? Reddit has set up all these different subreddits when each of them have their own moderators that allow them to set up their own rules. That's allowed. That is possible because of Section 230.
Starting point is 00:34:19 And any of these changes could make those kinds of things impossible. It's funny because in the examples you listed, you made sites that are. very often used by students like Wikipedia for research. But also, I just want to make a point on this that it applies to vaccine sites and anti-vaxer sites. It applies to all kinds of sites. And that variety is partly the point here as well. And I think that's really important to underscore. And let me underscore it even further. CDA 230 protects every website online. People say that, oh, it's a gift to big tech and newspapers don't get this. No, newspapers get it too for their website. Every website gets this. And that means your personal blog.
Starting point is 00:34:56 It means when you retweet someone, you get that protection as well. All of these things and all of these other sites and all of these other services and everything that everyone is building. I mean, lots of people listening to this are building different internet services. All of those services are protected by 230. And this matters way beyond just the big three or four companies out there. I'm so glad you brought that up, Mike, because the most and really only alarming line in the executive order to me, was this, quote, for purposes of this order, the term online platform means any website or application that allows users to create and share content or engage in social networking or any
Starting point is 00:35:36 general search engine. And that is quite literally every site. That is every site. Every site of every size. And it makes me think of the other law. It's not Maznick's law of impossibilities. It is the law of unintended consequences. And this seems true for every regulation. And I I think of GDPR and all these other regulations that all they really did, in fact, was help bigger companies, a very group they were trying not to. And then all the smaller players who don't have huge compliance arms and legal officers and many more people they can hire to moderate and process queries and take down requests get punished, which then further entrenched it. So it's a vicious loop, essentially. And that should be very scary because part of the executive order itself
Starting point is 00:36:18 starts out by claiming that the reason they have to do this executive order is because there are limited number of social media sites out there. And yet the definition that they have and the setup of what they're trying to do would effectively limit that even further by making it impossible for new competition to show up and for smaller sites to exist. And the more you put in place these kinds of rules and regulations, the more difficult you make it for there to be any new startups in the space, any new websites, because it becomes a costly mess for any smaller website to comply. Right. And while I completely agree with you that people, alone or technology alone is not the answer. One thing I do want to point out about the way forward
Starting point is 00:36:57 part of it is that this conflates the ownership of who decides versus also the size of the company that decides. So for instance, instead of having like a single CEO decide, this is my vision for this big company. Crypto is an oftenly cited case. My partner, Chris Dixon, has written an op-ed and wired about this a couple years ago as a way forward for thinking about the governance of some of these sites and thinking of a crypto decentralized native way so that it's a community owned and operated service, which is his way of thinking about it. And you and I have talked about crypto many, many times over the course of our friendship in years.
Starting point is 00:37:32 And I think at the inaugural Copia Policy Institute, I think you had a whole section on crypto, if I remember. And I'm curious for your thoughts on that as well. Yeah. So last year, I wrote a paper for the Knight First Amendment Center at Columbia University, which is called Protocols, not Platforms. Ah, I remember this. I teased you about it where I was like, Mike.
Starting point is 00:37:49 Protocols, not platforms. Yeah, I was like, Hallows, not Horrocks, right? And I myself do not love when people use
Starting point is 00:37:57 Harry Potter analogies, but my God, that was so perfect for that. I'm sorry. It's very much Hallows, not Horroxxes, which is great. Protocols, not platforms.
Starting point is 00:38:05 Yeah, yeah. Yes, you know, that paper discusses what the content moderation world looks like in a distributed decentralized system potentially based on
Starting point is 00:38:15 crypto. The paper touches on not just crypto, but just more decentralized interoperable protocol-based systems, and that changes a number of the content moderation questions. It doesn't make them go away.
Starting point is 00:38:29 And I do think that is one mistake that some people make, which is they think, like, well, if we just set it up on a crypto-based distributed system, then we just wipe our hands of it, and it's everybody's individual decision, and however it's implemented, let that happen. It also doesn't leave room for the variety of governance approaches that are inevitable in that as well,
Starting point is 00:38:47 because for the record, just as you're arguing for a variety of victims, experiments, whether it's a privately owned, public-owned company, centralized, decentralized, decentralized, whichever, even in the crypto world, there's a variety of governance approaches that can be applied, which is great. And there's been a lot of experiments already playing out on that front when it comes to protocols. And I think that's good. It is that experimentation that we need. And that experimentation is not designed just to like find the best result, but to recognize that there are different best results for different communities and different purposes and
Starting point is 00:39:13 different services. And there are certain cases where you want a Wikipedia approach. And there are certain cases where you want a Reddit approach and there's certain cases where you want a Twitter approach and whatever other approaches there are as well. And you can have all these different things and some of them work in some cases. And the only way we're allowed to figure that out is if we have the freedom to make those choices and see what happens. That's a wonderful note to end on. So in this show, we ask our guests, our experts, to bottom line it for me. And while this has been longer than 16 minutes, it's a special long episode. Bottom line it for me, Mike. What's a big takeaway. So the rules of how the internet works are under attack. This executive order by
Starting point is 00:39:50 itself is not going to effectively change anything directly. It's going to cause a lot of heat and light, but very little actual fire. But what we are seeing, and this goes beyond just this executive order, is that people are really trying to change the way moderation works online. And we've already seen some laws, both in the U.S. and certainly elsewhere outside the U.S., there have been a bunch of laws that are directed at content moderation. And that is going to continue. And I worry very strongly about what that does and whether that locks everyone into a specific type of content moderation and what that means over the long term for freedom of speech on the internet. Thank you so much for joining this segment, Mike.
Starting point is 00:40:31 Thank you for having me.
