On with Kara Swisher - Will Killing Section 230 Kill the Internet?
Episode Date: February 23, 2023
As Justice Kagan has asked, “Every other industry has to internalize the costs of its conduct. Why is it that the tech industry gets a pass?” Yet she and the other 8 Supreme Court Justices seemed wary this week as they heard oral arguments in two cases that could upend the Section 230 immunity that social media companies enjoy, Gonzalez v. Google and Twitter v. Taamneh. Today, we hear from three experts: Stanford Law professor Evelyn Douek, National Constitution Center President and CEO Jeffrey Rosen and UC Berkeley computer science professor Hany Farid. Up for discussion — what’s at stake in these two cases, which way the wind seems to be blowing and, of course, will killing Section 230 kill the internet? Questions? Comments? Email us at on@voxmedia.com or find us on Twitter @karaswisher and @nayeema. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Do you feel like your leads never lead anywhere?
And you're making content that no one sees,
and it takes forever to build a campaign?
Well, that's why we built HubSpot.
It's an AI-powered customer platform that builds campaigns for you,
tells you which leads are worth knowing,
and makes writing blogs, creating videos, and posting on social a breeze.
So now, it's easier than ever to be a marketer.
Get started at HubSpot.com slash marketers.
It's on! Just kidding. This is On with Kara Swisher, and I'm Kara Swisher.
And I'm Nayeema Raza. Trump Jr. is still backing James O'Keefe, by the way. He loves that Project Veritas.
Ugh, whatever. He's such a loser.
I'm told there are a lot of Project Veritas people crawling around South by Southwest,
where we're going to be in a couple weeks.
I don't care if they catch me saying they suck. I don't care. I don't care. I don't talk to people I don't know, so fine.
Do you not?
Not really.
You literally have a career where you talk to everybody you don't know.
I understand, but I don't like, if I say it publicly, I say it publicly.
Yeah, you wear your opinions on your sleeve.
That is correct.
You sound very sick still.
I've been sick for days.
Again, that was before the last sickness.
I had rotavirus, which was repulsive. And now I
have this chest cold that I got from my kids. And it's really, you know, having toddlers is a
great way to lose weight and feel terrible all the time. It's just been one big petri dish of
crap. I don't know what else to say. It's just, I have not gotten COVID yet, though. That is the
one thing I have not gotten. So everything else, though, come and talk to Kara.
You can have it anyway.
My immune system.
But anyway, Kara, even on your deathbed, you're here taping the podcast.
And today we're going to talk about two cases that the Supreme Court has heard this week.
But you've been covering Section 230 from the beginning.
The beginning.
Before the beginning.
It was 1996 is the beginning, right?
I think it's Stratton Oakmont with Prodigy when Prodigy got into trouble for this.
Explain what Prodigy is for the young people, Kara.
Prodigy was a thing that Sears and IBM put up and I called it everything Sears knew about
computing and everything IBM knew about retail.
It's a squat.
It was terrible. AOL came in behind it and
kicked their ass, but there was also CompuServe,
which was slightly better.
Were these internet providers?
Yes, they were very big. They were
the beginning of everything. But anyway, they got into trouble for what was on their services,
like what people were saying. And it's crazy because you can't control if you're creating
this worldwide information network. People say the craziest stuff. And so you can't be responsible
for anything that's published on it. You're not really a publisher. You're a platform. And so this was designed in Congress to bring this nascent industry to the fore. It was very important, or else they would have been sued out of existence just for a comment on a board and stuff like that.
Yeah, so in 1996, Congress basically said, if you're a platform, not a publisher,
you effectively have immunity.
And over time, that immunity has been narrowed
very, very slightly, right?
Slightly.
I'm curious because obviously this was a pre-algorithm law
for a post-algorithm world.
Has your point of view on it changed over time?
Like, do you think that these platforms
should have more responsibility?
I mean, that's a big part of your...
There should be a way to make these companies liable for some things, right?
How they behave, how they manage.
And at the same time, not mire them in lawsuits.
There's almost no fixing this.
You know, when I think about it and think about it, there's got to be some way.
I think around...
My feeling is you do very stringent privacy laws and things like that, and that takes care of it.
And then you watch where the money goes, how they advertise and things like that.
This law cannot be removed without really removing the main infrastructure of the Internet.
It would collapse.
It would collapse.
Because all of a sudden they would have no incentive to allow anyone to say anything.
Or they'd have an incentive to say everything, right? It's weird. It's weird. They have to make
good faith efforts to clean up their platforms and we've got to let them. Right. Right. And so
that's the problem is that it's impossible to do it. But without this, it's impossible to operate.
That's, I think, the interesting thing about hearing these arguments in the Supreme Court, because it's such an intellectual exercise. They're like, well, if you look for rice pilaf, it's kind of a constant philosophical argument about where do you draw that line?
Well, it's just not going to, this is a nonsense case. Both cases are nonsense.
I'm sorry. That's a way to pitch the episode. I'm just saying, I just don't see the Supreme
Court as the one that should be weighing in on this.
If Congress wants to make changes, they need to work with consumers and the industry to figure out the best way to do this.
This is way too, this is the third rail, and you can't touch it.
You just can't.
It's been enlightening to watch this, I believe, octogenarian lawyer for the plaintiff, Eric Schnapper, who's having to argue algorithms and more.
Yeah, he's not doing very well.
He keeps being like, I don't want to answer that question because that question takes
me down a rabbit hole.
I don't want to go down because that doesn't end well for me.
This is not going to end well for him, just so you know.
Like he was telling me, apparently a lot of the lawyers have been conflicted out by
the tech companies.
So the plaintiffs were kind of left with Schnapper anyways.
Well, I'm sure the rich tech companies
conflicted everyone out.
That's called the Rupert Murdoch move.
It's not the only Rupert Murdoch move, sadly.
Kagan had a good point on this, where she said,
every other industry has to internalize
the cost of its conduct.
So why is it that the tech industry gets a pass?
It's a little bit unclear, but on the other hand, I mean, we're a court. We really don't know about these things.
Yes, that's correct. That's correct. That's a good question.
Elena Kagan's been very funny, but they all seem to be in agreement.
So the court just heard oral arguments in these two cases, Gonzalez v. Google and Twitter v. Taamneh. Both involve terrorism and families claiming that
these companies, Google and Twitter respectively, are liable for the loss of family members as
a result of aiding and abetting terrorists effectively.
And we have a panel of experts here to discuss it with us.
Evelyn Douek is a professor at Stanford Law School who's focused on regulation of online
speech.
Hany Farid from UC Berkeley School of Information is a computer scientist, and he's an expert
in misinformation.
And then Jeffrey Rosen, who teaches at GW Law School and is the president and CEO of the National Constitution Center.
Yeah, I'm very excited to talk to them.
There's so many smart people on this topic, but this is a great trio of guests, and they should illuminate us.
I'll enjoy talking about the nonsense cases, Kara.
It's nonsense.
That's my legal opinion.
That's my...
I'm a very good lawyer.
Nonsense!
I was going to be a lawyer.
That's more like a judge.
I don't think you're a lawyer.
You're a judge, Kara.
That's true.
All right, let's take a quick break,
and we'll be back with Evelyn Douek,
Hany Farid, and Jeffrey Rosen.
Fox Creative. This is advertiser content from Zelle. When you picture an online scammer,
what do you see? For the longest time, we have these images of somebody sitting crouched over their computer with a hoodie on, just kind of typing away in the middle of the night.
And honestly, that's not what it is anymore.
That's Ian Mitchell, a banker turned fraud fighter.
These days, online scams look more like crime syndicates than individual con artists.
And they're making bank.
Last year, scammers made off with more than $10 billion.
It's mind-blowing to see the kind of infrastructure that's been
built to facilitate scamming at scale. There are hundreds, if not thousands, of scam centers all
around the world. These are very savvy business people. These are organized criminal rings. And
so once we understand the magnitude of this problem, we can protect people better.
One challenge that fraud fighters like Ian face is that scam victims sometimes feel too ashamed to discuss what happened to them.
But Ian says one of our best defenses is simple. We need to talk to each other. We need to have
those awkward conversations around what do you do if you have text messages you don't recognize?
What do you do if you start getting asked to send information that's more sensitive? Even my own father fell victim to a,
thank goodness, a smaller dollar scam, but he fell victim and we have these conversations all the
time. So we are all at risk and we all need to work together to protect each other.
Learn more about how to protect yourself at vox.com slash zelle.
And when using digital payment platforms, remember to only send money to people you know and trust.
When you're running a small business, while you're so focused on the day-to-day, the personnel, and the finances,
marketing is the last thing on your mind. But if customers don't know about you,
the rest of it doesn't really matter. Luckily, there's Constant Contact. Constant Contact's award-winning marketing platform can help your businesses stand out, stay top of mind,
and see big results. Sell more, raise more, and build more genuine relationships
with your audience through a suite of digital marketing tools made to fast track your growth.
With Constant Contact, you can get email marketing that helps you create and send the perfect email
to every customer, and create, promote, and manage your events with ease, all in one place.
Get all the automation, integration, and reporting tools that get your marketing running seamlessly.
All backed by Constant Contact's expert live customer support.
Ready, set, grow.
Go to ConstantContact.ca and start your free trial today.
Go to ConstantContact.ca for your free trial. ConstantContact.ca.
Evelyn, Hany, and Jeffrey, thank you so much for being here to talk Section 230 with me. It's been exciting for me for a long time, but for nobody else. It's everybody's favorite subject now.
Evelyn, why don't you start very quickly about what the law is and why it matters?
Sure. So the law everyone's talking about, Section 230, is famous for shielding platforms
from liability for content that other people post on their websites. So, Kara, if I go on
Twitter and I defame you, you can sue me, but you can't sue Twitter. And that's really well
established. That's the central, you know, that's the heartland of the law. But what these cases are about, especially the case
yesterday, Gonzalez v. Google, is about what if YouTube starts amplifying that, recommending it
through things like the UpNext algorithm, does it then lose its Section 230 immunity for that content?
So being an editorial, essentially, right? Making editorial decisions.
I think it's further than that, because I think it's well accepted that sort of some sort
of editing is part of the publishing function, right?
Like publishers, the New York Times, it decides where to put certain things on its front page,
its homepage.
That's a fairly core part of editorial functions.
I think it's something more than that, that the plaintiffs are arguing that case, that
they're really sort of pushing it at you, recommending it to you saying, hey, this is
content that you like, you know, here's a terrorist that you may know and want
to talk to, that kind of thing. Right. And when it was initiated, I'd like you to go back very
quickly, it was sort of to make them good Samaritans to clean up stuff, right? To keep
the platforms clean without being subject to lawsuits that would sink them. Right. Yeah. I
guess there's two parts of the law. So the first is to make sure that they, you know, don't over-censor content, over-take things down because of risk of liability,
because, you know, they may not care about a specific, an individual post. And so, you know,
it's in their interest if there's even a specter of liability to sort of take that down. But there
was also this court holding that said, well, but if you start taking things down, you can be assumed
to be engaging in this kind of editing function and that you have knowledge and take responsibility for what's on
your site. And so this reversed that holding in Stratton Oakmont to say, no, even if you engage
in content moderation, if you're a good Samaritan doing these kinds of things, you're not going to
become liable. Now, Jeffrey, let's dive into the cases. Can you describe each case that is before
the Supreme Court and how they're similar and where they differ?
This is the first time a 230 case has come before the Supreme Court.
That's right.
And both cases raise the issue about whether or not when the platforms host content that might lead to illegal acts but don't know that it's going to lead to particular illegal acts, they're liable or not.
So the Google case involves the question of whether-
Gonzalez v. Google is one of them.
Let's name them so people know.
Exactly.
Gonzalez v. Google was filed by the family of a 23-year-old American woman.
She was Nohemi Gonzalez.
She was killed in Paris in an ISIS attack.
And the lawsuit claims that Google, which owns YouTube,
violated the Anti-Terrorism
Act's ban on aiding and abetting terrorism by recommending ISIS videos. And the Gonzalez case
raises particularly the question of whether Section 230 of the Communications Decency Act
protects Google and other internet platforms when its algorithms target users and recommend
other people's content.
And the central question is the one you were just discussing with Evelyn. Is an algorithm
a form of recommending? Is it an active act that removes Google's immunization? Or, as many of the justices suggested, is a neutral algorithm that recommends cat videos or cooking
recipes in the same ways that it recommends ISIS
videos, not a form of active recommendation. And the other one?
And the Twitter case, in particular, involves the question of a terrorism act that says that you're
liable for, for example, aiding and abetting a bank fraud, when you take certain active steps
to keep the books and you know that your actions are going to help the robbery.
And the big question here is whether the algorithms are that kind of active knowing or not.
And all the questions of the argument focused on, do you have to know that the algorithm
is going to promote a video that will lead to a particular act or help a particular terrorist?
Or if you're simply hosting ISIS videos but don't know that there's going to be any
connection to the Paris attacks, for example, does that not create liability under the act?
So the two cases are very closely connected because as Justice Barrett said in the oral
argument, if the court finds that there's no liability in the Twitter case and that the
promotion of the videos didn't aid and abet terrorism, then the Section 230 case goes away.
Hany, you're a computer scientist, not a lawyer. A lot of the conversation here is,
as Justice Kagan puts it, about pre-algorithm law in a post-algorithm world. So should companies be
more liable for, say, the creation of a thumbnail or for a recommendation list? Should there be a
narrower view of 230? Yeah, I think the issues here got a little muddled up in the Gonzalez v. Google case.
So what Google was trying to argue is that it is absolutely necessary for YouTube to recommend videos.
It's fundamental to their operation.
But that's not true.
What Google was saying is, look, if you do a Google search,
we must somehow use algorithms to organize data. 100% correct. That's what a search does.
But what's at issue here in the YouTube case is not doing a search. It's a recommendation
algorithm, which says, you've watched this video, here's another video. You've watched a video and
there's a panel along the right-hand side of your browser that says, here are some other things you
might like to watch. That's in order to get you to consume more. It's a business decision, correct?
That's exactly right. It's a design decision. YouTube could have eliminated those features
entirely, and then these issues don't come up. So the issue to me is not recommendation algorithm or not. It's a design
decision, which then has an algorithm that promotes ISIS videos and horrific content.
And that, it seems to me, gets us closer to the Lemmon v. Snap case from a few years back,
where Snap was found liable for designing a product that the courts said did not get 230 protection
because it was a product that was designed, as the court said, to encourage people to
drive fast because it showed your speed over your video.
That was not a 230 protection.
So I think I was frustrated that I thought the issues were getting muddled up.
This is not about thumbnails.
It's not even really about recommendation algorithms.
It's about the design of a service.
Designed to make you do this. An algorithm that's been demonstrated to lead you to more extreme content. I've seen it with my own kids.
My issue here is that YouTube chose to make those design decisions. And I think that doesn't give
them 230 protection because it's not about user-generated content at that point.
So Evelyn, is this a case about 230, really?
The Justice Department has taken a diametrically opposed position on each case. In Gonzalez,
the government is supporting plaintiffs, saying that Google should be liable in some way.
The algorithm funnels content to users. In Taamneh, the Justice Department sided with the social media company, saying Twitter should not be liable. Explain the difference.
So in Taamneh, they're saying that this couldn't constitute aiding and abetting liability under
the act, under the Anti-Terrorism Act.
But they are saying in Gonzalez that the platform's role in recommending content might at some point pierce Section 230 immunity. Now, the question in Gonzalez could be potentially broader than the Anti-Terrorism Act situation. So this might be, you know, the question is, if you lose immunity, then you go back down to court and you sue around the merits of an
individual case. And so it's sort of, it's not diametrically opposed in the sense that
all that Gonzalez says is, can we get in through
the door to get to court to sue these platforms? And then Taamneh asks, well, can we sue these
platforms in this particular case? Do we have enough to show liability under the Anti-Terrorism
Act in this particular case? And the Justice Department's answer in that case is no.
No, not in that case.
Okay. So the case today, Twitter v. Taamneh, right, was specifically about whether there is underlying liability. So can you say that Twitter has aided and abetted terrorism by not
doing enough to take ISIS content down off its platform? If I could just sort of stop on this
for a second, because I think this is really important. One thing that's really remarkable
about these cases is how attenuated the causal link is. So in both cases, there's no evidence in particular that
Google or Twitter had any particular role in encouraging or committing these specific two
attacks in these cases, or that any of the attackers saw a particular piece of content
or were radicalized or recruited through the platforms. It's just some sort of very generalized
idea that, oh, well, come on, we all know platforms have ISIS content on them and they didn't get absolutely
all of the needles in the haystack. And so therefore they have aided and abetted terrorism.
So that was the question today. Now, even if the plaintiffs win on that and say, yes,
that is aiding and abetting under the Anti-Terrorism Act, they can still lose if Google wins its
230 case.
Because the whole point of Section 230 is to say, look, you can have liability.
There can be a defamation cause of action.
There can be an anti-terrorism civil liability cause of action.
But 230 says it doesn't matter.
The platform is immune.
One thing I would quibble with, with what Hany said, though, is this distinction between, you know, content that was saying, you're going really fast, go faster,
go faster, that sort of encouraged a specific kind of dangerous behavior. This is sort of,
and I think there was sort of interest from the court in saying, okay, you're right,
there has to be a point at which a platform has done so much that they are really shoving this
down your throat. But the justices were literally
asking the lawyers, give us a line, give us a line between a newsfeed algorithm and some sort of-
How different would that be? Every single news organization does that now.
Read this, go this, go here next. It's just more intense on a Google or a Twitter.
So I think that's the point. I mean, I think they're saying, look, this is what is inherent
to publishing. The whole point of Section 230 is that you are immune for doing what every single news organization does, which is saying,
here's some content that you might like. And that the point of 230 was to immunize platforms from
that. Now, the justices are saying, well, surely there's a point where platforms are doing it so
intensely, so targetedly, so persistently. Yeah, they know who you are persistently.
But they were asking the lawyers to say, hey, give us a line. Show us where the platforms cross the line. And the lawyers just couldn't give them a line. And I think it really scared the justices.
Yeah. Right. There's no line. Hany, can you respond to that?
Yeah. I don't think it's fair to equate what the New York
Times does by highlighting articles with what YouTube does. YouTube is scraping up,
vacuuming up every morsel of my personal data,
looking at my viewing habits,
other people's viewing habits,
and then making very targeted recommendations
regardless of whether those recommendations
are good, bad, ugly, or illegal.
And I don't think you can say that about the New York Times.
When I go to the New York Times
and, Kara, you go to the New York Times,
we see the same information, and you can't say that about YouTube.
It's not targeted specifically to you.
Exactly.
In that regard. That's a fair point. Jeffrey, there's an obvious ideological split between
conservatives and liberals on the court. But with Section 230, it's not there. They all seem to be
saying the same thing. I was sort of shocked. And we'll go into the individual
justices. But
what is happening here on the court right now? You're absolutely right. There is a fascinating
broad split, but it was not present in this case. The split arose in a series of opinions by Justice
Thomas, where he expressed...
Yes, nonsensical, but go ahead.
Well, his concern was that 230 might allow platforms to discriminate on the basis of
content and ideology. And he's sympathetic to laws passed by Texas and Florida, for example,
that require the platforms to obey First Amendment standards and not to discriminate on the basis of
content. And essentially, he suggested that the platform should be treated like common carriers with an obligation, even though they're not governmental bodies, to
open their platforms to all comers without speech discrimination. That split was not present in this
case where there was broad consensus among the justices that converged around the traditional
First Amendment standard that said that to be illegal, content has to be intended to
and likely to cause imminent violence or lawless action. Evelyn rightly focused on this question
of attenuation as the central issue in both cases. And generally, the First Amendment in a standard
accepted by liberals and conservatives, first introduced by Justice Brandeis in the Whitney
case and then embraced by the Supreme Court in the Brandenburg case,
says that unless you both intend lawless action and your speech is likely to cause it, then you can't be liable.
And although Section 230 and the Terrorism Act pose that question differently, there was discomfort on the part of liberal and conservative justices here for holding platforms accountable for speech they neither intended nor could have possibly anticipated would lead to lawless action.
And justices on both sides, from Justice Kagan to Justice Kavanaugh, said this could kill the internet.
It would just result in business chaos.
It would also completely obviate Congress's purposes in passing Section 230,
which is why it was really interesting that both Justices Kavanaugh and Kagan said this is for Congress to decide. Justice Kagan said, you know, we're not nine internet experts
here. We're really not very good at this. Yeah, well, let's listen to that. Justice
Kagan made that great point. I mean, we're a court. We really don't know about these things.
You know, these are not like the nine greatest experts on the internet.
So she should have a career in stand-up. She was very funny yesterday. But
she was making, I think, the salient point: why are we deciding this? And so, Hany,
should this be decided by the court? They seem to be backing away from this as fast as possible,
which makes me question as to why they took it in the first place.
I agree with your interpretation. I think both sides scared them. Not doing anything scared them and doing something scared them. Obviously,
I do think this is better handled by Congress. But the problem is when you go to Capitol Hill,
you hear very different things from either side of the aisle. The right hates the technology sector
because they have bought into this false narrative that content moderation is anti-conservative.
It is not.
Conservative voices dominate social media.
The data is overwhelming to suggest that.
But that is the narrative they are pushing.
The left is saying that tech is not doing enough to mitigate the harms from technology.
And if you disagree on the nature of the problem, you fundamentally disagree on the solution.
One says less moderation. One says more moderation. So I'm not particularly hopeful that Congress is
going to move in the right direction. Having said that, we are seeing some good movement from the
EU and from the UK and from Australia. And this is another thing for us to think about. We are a
country of 350 million people, about 5% of the world's population. We need to think very
carefully how we start to litigate and regulate the internet that is going to affect 95% of the
people who don't live in this country. And I don't know that I have confidence in Congress to do that.
Yeah, I just want to jump in on this about the politics and, you know, finding agreement. And,
you know, one of the areas where there might be a chance of
finding agreement is terrorism. And of course, that's sort of looming large in the background
of these cases. And perhaps one of the reasons why these cases were the ones that got picked up,
because, you know, there's sort of a question of how much are these, you know, specifically
terrorism's exceptional kinds of cases. And I have to say, one of the things that I'm really worried about is the court and potentially Congress having a big blind spot
about sort of the First Amendment and the free speech issues at stake here with respect to
terrorism, where they have in the past as well, creating a rule that really incentivizes platforms
to sort of over-moderate and be extremely risk-averse in this context that has disparate
impact on marginalized communities. We've seen this a lot, for example, in Gaza and Palestine,
where a lot of Arabic content just goes missing or evidence of war crimes from Syria because
platforms are so scared about potential liability. And I want to be really clear,
there might be a point at which platforms are so negligent and so willfully blind
to the problems on their platforms that they should absolutely be liable. But we're talking
about a case where platforms, they have programs and there might be questions about whether these
programs are sufficient. But holding them liable for trying to get, as I said before, every single
needle in the haystack might create a draconian regime with really problematic, disparate impacts.
The opposite being that they're too loose.
Right.
And then they're too loose. It sort of drags them both ways.
Support for this show comes from Grammarly. Time for a change? It's time for Grammarly. Grammarly's AI ensures your team gets their points across the first time, eliminating
misunderstandings and streamlining collaboration.
It goes beyond basic grammar to help tailor writing to specific audiences, whether that
means adding an executive summary, fine-tuning tone, or cutting out jargon in just one click.
Plus it surfaces relevant information
as employees type so they don't waste time digging through documents. Four out of five
professionals say Grammarly's AI boosts buy-in and moves work forward. It integrates seamlessly
with over 500,000 apps and websites. It's implemented in just days and it's IT approved. Join the 70,000 teams and
30 million people who trust Grammarly to elevate their communication. Visit grammarly.com
slash enterprise to learn more. Grammarly. Enterprise ready AI.
Support for the show comes from Indeed. If you need to hire, you may need Indeed.
Indeed is a matching and hiring platform with over 350 million global monthly visitors, according to Indeed data, and a matching engine that helps you find quality candidates fast.
Listeners of this show can get a $75 sponsored job credit to get your jobs more visibility at Indeed.com slash podcast.
Just go to Indeed.com slash podcast right now and say you heard about Indeed on this podcast.
Indeed.com slash podcast. Terms and conditions apply. Need to hire? You need Indeed.
I want to talk about how the justices are reacting in each case. And again, I've been surprised by how cooperative they've been.
They're actually acting like justices.
It's kind of cool to remember when that was.
Let's start with Gonzalez.
What are the themes emerging from each justice?
It really was striking, the level of agreement.
Justice Thomas kicked things off by saying that if an algorithm recommends ISIS videos based on a user's history
and also recommends cooking videos, then how can it be held responsible? And several justices
echoed that concern. Justice Gorsuch was focused, as always, on the text of the statute. And in both
cases, or in Gonzalez in particular, he's skeptical about the Ninth Circuit's test, which is based on
neutral tools. And he said that wasn't in the language of the statute. But by going back to
the language, he felt that it would be a firmer basis for avoiding liability. Justice Jackson
maybe is most sympathetic to the Gonzalez family. She said that maybe Congress is trying to
obviously protect internet platforms that are
screening offensive materials. And here she's saying the argument is that Section 230 protects
the promotion of offensive materials. How is that conceptually consistent with what Congress
intended? Although in other parts of the argument, she was concerned about the attenuation question. Justice Kagan, as we heard, said the court might not be well suited to solve the question and expressed concern about the business model for the internet. Kagan was echoed by Kavanaugh, who specified that Congress had this broad language in Section 230, and all courts had interpreted that to provide protection for conduct like YouTube's,
and why should the law be totally changed here? That was a really significant fact for
all the justices that all the precedent basically supported this. Justice Barrett zeroed in
on the question of how the whole case might go away if the Twitter case ended up not finding
liability. And that was the gist of the nine justices. As you say, it was really
heartening to hear the justices act like a court. And it may be because there really is a broad
consensus about the core of the First Amendment tradition really requiring intent and likelihood
to cause imminent lawless action. And also because the precedents of the lower courts had been relatively in the same place.
Always the same. Hany?
Yeah, I was expecting to see more partisanship around this anti-conservative false narrative,
and I was heartened not to see that. I was disappointed that I think some of the technical
issues, as my role as the computer scientist in this conversation, were getting muddled up. And I think Google did a very good job of muddying the waters
in terms of equating a Google search with a YouTube recommendation saying, we have no choice
but to organize the world's information. And I don't think the judges pushed back on that or
really understood that in any real way. And I would have liked to have seen more clarity on what exactly the underlying issues are here. And I think Kagan got it right: these are not the nine world's experts on the internet.
Right. But you know, Google, that's their whole job because they're God.
So Evelyn, anything that struck you?
Yeah. I mean, Hany's right that the anti-conservative bias narrative didn't come up.
You know, I wouldn't get too excited or hopeful about that.
Sorry, Hany.
That's partly because those cases are coming.
They're coming, likely in the next term, with the NetChoice cases arising out of Texas and Florida.
And I wonder if, you know, I was surprised that Justice Thomas and Justice Alito sort
of seemed to be more sympathetic to the platform's point of view in these cases than I expected.
And I wonder if that's because they know that they're going to have a second bite at the apple next
term. The other surprise, I think, was Justice Jackson in Gonzalez really seemed to be interested
in trying to find ways to narrow Section 230. Obviously, we had no idea what Justice Jackson's
views were before this argument. And she was really going back to where you started, Kara,
about the good Samaritan nature of this provision. She's saying, isn't this intended to make sure that platforms
take content down? And if they are not acting as good Samaritans, shouldn't they lose 230
immunity? So she sort of seemed really interested in that reading. And both Jackson and Sotomayor
today as well in the Taamneh case seemed really concerned about the idea that the platforms might
be leaving terrorist content up and at what point they should get liability for that as well. So
that was interesting. But I mean, the politics around these issues in general right now are so
weird. It is one of the few areas in sort of constitutional law right now, the First Amendment and then the statutory issues where, you know, it's super high stakes, but it doesn't fall neatly
in a left-right divide because it cuts both ways.
Absolutely.
So let's talk about the lawyers too, because I got to say that was not impressive in the
case for the plaintiffs.
Jeffrey, you get Eric Schnapper, the ever-sighing Eric Schnapper.
How do you assess
his performance here? The justices seemed impatient with him. He would repeatedly apologize for not-
He's doing both cases. Is that correct? That's right. Yes, he is. A distinguished
legal scholar and advocate, very great background. And he apologized several times for not squarely
answering the question, but in particular, because he favored multi-part tests and refused to give a categorical answer for what standard, in both cases, the court could embrace for recognizing liability.
Both textualist justices like Gorsuch and more pragmatic justices, including Kagan, expressed impatience.
So, you know, it's a complicated position that he's arguing for, not a simple one.
And at one point, interestingly, I think it was Justice Kagan said, why isn't this a jury question?
I mean, if you can't identify a clear standard, why not send this back to juries case by case? So he was well defending his position in the sense of arguing for
a different approach to 230 that wasn't categorical, but the justices just didn't
seem to be buying it.
Yeah, "I don't know" doesn't seem to be a very good answer to these people.
That's my legal take. But Evelyn, what about the defendants? First, Lisa Blatt,
who's representing Google, and Seth Waxman, who was representing Twitter?
Yeah. Can I just say briefly on the plaintiff's lawyer, it was actually pretty disheartening
because it was shockingly bad, stunningly bad. Not, this is a hard case and he struggled with
some tricky questions. These were totally, totally predictable questions, like how do you differentiate your case from a search engine?
And he literally went silent and sighed.
And so I don't know.
Stunningly bad.
Yeah.
Stunningly bad.
Stunningly.
And it's kind of depressing, you know, how many hours of, yeah,
academic articles have been written about this,
podcasts by the, you know, thousands of hours
that have been preparing,
running up to these arguments, and it sort of all came down to these stunned silences,
which is a little depressing to see the sausage get made in that way.
On the platform lawyer's side, I mean, these are experienced, established, well-known,
very competent professional advocates. They were very strong in making many of their arguments.
Obviously, sort of nothing as surprising there. I was a little
surprised at some of the concessions that Lisa Blatt made on Google's behalf yesterday,
in terms of saying, like, if you had a featured video, would that make you lose Section 230
immunity? That was sort of a bigger concession that I'm not sure how her clients would have felt about that.
And then today on the platform side, Seth Waxman, you know, I think I was a little surprised that he didn't talk more about the speech implications here. You know,
there was a lot of discussion about, is this like a bank? If a terrorist has an account at
a bank account, are they aiding and abetting? And I think it's really important that we talk
about sort of the disparate impact that how speech intermediaries are different from other kinds of intermediaries. You know, speech is somehow, I think we should recognize that it is special and it has First Amendment protection for a reason. And so we should be very cognizant of the potential sort of harms that come from being overly censorious.
Hany, anything to add? How do you view the court discussing technology? Often, these are spectacles, like Zuckerberg in the House testimony in 2018.
They're trying to be dignified here. They're trying their hardest.
Yeah, but they weren't. I did find myself on many times screaming at my computer screen
at Schnapper's responses. I found the whole thumbnail thing, which he seemed to put a big stake in the ground
for, completely baffling, that he was arguing that because Google, or rather YouTube, generates a thumbnail of an uploaded video, they have now created the content. The judges were confused. I was confused. The defense's lawyers were confused. And it was a bizarre argument. I also was puzzled
as how he couldn't explain the most
basic fundamental differences between a Google search and a YouTube recommendation. And I don't
think he did his client justice.
Okay. Now I want a prediction from each of you on the outcome of each case. Evelyn, Twitter v. Taamneh, and then Gonzalez v. Google.
So today's argument was really confusing for me. I went into it thinking there
is no way that they could find that this was sufficient to constitute aiding and abetting
because the knowledge was just too generalized. But they were pushing on Twitter's lawyer extremely
hard that I sort of, the first half of the argument, I was thinking, oh no, the platforms
are really going to lose this case. And then it sort of flipped back around on the other side when they were questioning
Schnapper. So I mean, I'm not a betting person. I think I'm definitely more worried than I was
that they will find that this is sufficient to be aiding and abetting liability under the
Anti-Terrorism Act. And what about Gonzalez? Yeah, I think, so I can't see them just finding liability in this case.
There's a couple of different ways that it comes out.
If they decide with Twitter in the Twitter v. Taamneh case, they actually, you know, my
speculation is they will dismiss the Gonzalez case as improvidently granted because they
don't need to decide that if there's no underlying liability.
And one of the reasons why I think they might do that is I just got the sense that they
were pretty scared about drawing a line that's going to mess everything up.
I think that they were really desperately asking the plaintiff's lawyer, you know,
give us a line, give us a rule that we can use in our judgments to sort of make sure that platforms
aren't liable for absolutely everything that they recommend, but also are liable for some of the
stuff that they recommend. And no one came up with a really good rule that they could sort of copy paste into their judgments.
And so I think that they are either going to leave the status quo and say,
this is not for us, it's for Congress,
or say, you know, we're going to decide this another day when we have a better set of facts.
Okay.
Jeffrey?
I understood about the pushback in Taamneh, but in the end, I think they will not find
liability.
So many of the justices of different perspectives thought that there had to be some connection to knowledge
of a particular person or action before you could find liability. And since neither was here, and
because they're concerned about the consequences of liability, I don't think they'll find it.
And then I agree there won't be liability in the Google case. It could be dismissed if the Taamneh
case is clear or on
other grounds, but I think it'll be a rare example of bipartisan consensus on the US Supreme Court.
Hany?
I, of course, don't have the legal credentials that-
What do you think should happen? You're smart.
Yeah. So I was surprised to hear the arguments in Twitter v. Taamneh. I actually think there's a chance that they rule for the plaintiff
here and Twitter is going to lose that one. So I'm with Evelyn on that. I'm with Evelyn and
Jeffrey that I don't think Gonzalez is going to win. And I think everybody knew this was a weak
case going in, but I do think the justices are hungry to do something. And I think they may open
the door in their ruling for a future case to welcome maybe a case that is better on the facts that would have more guardrails and allow them to rule in a more narrow way to rein in some of the over-interpretation of 230 that some of us think has happened over the last few decades.
That's a good point.
Jeffrey or Evelyn, which one will it be?
Is there one coming?
So, I mean, I think that there are real legitimate questions about the breadth of 230 as the way the lower courts have interpreted it. I think,
you know, Hany talked about the Snapchat case earlier, which is a good example of where 230
immunity was pierced. And I think, you know, there are other really good questions around
really bad actor platforms that know all of this stuff is going on and not taking action.
Teen mental health, for example.
Yeah, I mean, I think, you know, there's going to be causal chain problems on some of those cases.
But, you know, I do think that Hany's absolutely right. The court took these cases because there's
sort of hunger. Everyone's talking about Section 230. We should be talking about Section 230.
But I think that these weren't the fact sets that they thought. And so it'll be interesting to see if they come back and have another bite at it soon.
Jeffrey, is there another case?
The Texas and Florida cases are the ones the court is likely to divide over. And it's possible that could have implications for how liability is applied in other cases too. But that's going to be absolutely fascinating and
so squarely poses the conflict about whether or not the platform should be treated as common
carriers and obey First Amendment standards. And in some ways, those will even be more
constitutionally significant than these cases. All right. Is there any other industry that gets
blanket immunity protections the way social media companies do? Everybody gets sued except them. Is there any sort of parallel here that any of you can think of?
No, there isn't. I mean, I'm not the legal scholar here, but we've heard this. And I think even one of the justices said this during the Gonzalez hearing: why does the tech industry get so much protection? Every other industry has to internalize these risks and deal with it.
And I don't know of any other industry that has this type of almost blanket immunity.
I mean, the tech industry obviously gets sued all the time, but I do think that there,
I mean, this is a somewhat exceptional statute provided for what sort of Congress recognized
at the time as an exceptional situation, which is,
you know, these platforms have become the custodians of all of our speech. And I think,
you know, the important thing to remember about Section 230 is, yes, it provides platforms
immunity, but it also provides users immunity. And the point of that platform immunity is to
protect the speech of users. I'm sounding much more libertarian on this podcast than I intended to.
I have to say, you know, I really do think.
You've lived in Silicon Valley too long.
Yes, six months.
That's all it took. It's something in the water.
You can be libertarian light, which is most of them, honestly.
I think content moderation is extremely important.
I just get nervous about government rules that incentivize over moderation
and that platforms that don't care about sort of marginalized communities or disparate impacts end
up, you know, we have seen this before with sort of the FOSTA amendments as well, taking down
speech of people who, you know, don't have the same resources. Can I follow up on that, Kara? So
Evelyn raises an absolutely valid point that we do have to be careful about overmoderation. I will point out, however, that when we passed the DMCA, the Digital Millennium Copyright Act, these same claims were being made by the tech companies: that you are going to force us to overmoderate to avoid liability. And it wasn't true. And look, the DMCA is not perfect, but it has largely been fairly effective.
And it created a healthy online ecosystem that has allowed us now for both creators and producers to monetize music and movies and art.
And so when you have rules of the road, they can actually be very, very good at creating a healthier online ecosystem.
And since the companies are incentivized to keep content up, that's the financial side,
I think that on balance, this might actually work out even if there is more liability with
reduction of 230 protection.
I would just say that industries that are immunized from suits include lawyers.
They're the ones who are most protected, with all the privileges that the courts have protected. And this immunity, as Evelyn says, for the platforms is designed to achieve a First
Amendment value, which is deliberation and not over-moderating. And it's heartening, despite
the really tough questions that are on the horizon involving the scope of the First Amendment,
to see a consensus that 230 did achieve its purpose. And there's a reason that the US has
a freer free speech platform than Europe, for example, which lacks this immunity, and the
consequences of abandoning it might be severe. So let's just pause during this brief moment of
agreement, not to sing Kumbaya, but to say it's great that, thinking about this hard, the justices may be inclined to think that 230 isn't so bad after all.
So my last question, because you led me perfectly to it. There are two ways to go here, and you know this world and how powerful these social media companies are. There's one way where Google, Twitter, Meta, etc. get their ships in order without legislative or judicial action, because they should be in charge of all this stuff because they were duly elected by nobody. Or, as Kagan specifically called for, Congress acts, our elected officials, as damaged as they may be. Two things. One, who should be running the show here?
And let's imagine a world with rational internet regulations. What would those be,
and what would the internet look like? Hany, you start with the first one,
and then Jeffrey and Evelyn, you can answer the second one.
There is no evidence that the technology companies can self-regulate. The last 25 years have taught us this. And not only that, the business model that has led to the Googles and the Facebooks and the TikToks of the world continues to be the dominant business model of the internet, which is engagement-driven, ad-driven, outrage-driven. And that business
model is the underlying root poison, I would argue.
I don't think we can sit around and wait for the companies to do better. I don't think they will.
There is no evidence of it. I think, despite the fact that I don't want the regulators putting
rules of the road, I think there is no other choice here. Ideally, by the way, the consumers
would have made the choice. We would have said, okay, we don't like the way you're doing business. We're going to go elsewhere. But in addition to phenomenal wealth, they have
virtual monopolies. And so we as the consumer don't even have choices. And that means capitalism won't work here. And so we need the regulators to step in.
All right, Jeffrey, should Congress act? My feeling is Congress should have done privacy and
antitrust legislation and taken care of this in a whole different way.
But what do you think about that part?
I guess the question first is, will it act and what should it do?
And will it?
Probably not because there's not consensus as we've been discussing with conservatives more concerned about content discrimination for better or for worse, and liberals more concerned about hate speech and harmful conduct. I find it hard to imagine what a
national free speech regulation would look like. And in fact, I can't imagine one that's consistent
with First Amendment values short of imposing them, which there's an argument for not doing
at the federal level because companies need some play in the joints to take down somewhat more speech. That may be better than the alternatives of either really sweepingly
imposing a First Amendment standard on the federal level or allowing a great deal more
moderation than would be consistent with First Amendment values.
Evelyn, you get the last word. 230 looks like it's going to live to fight another day.
Yeah. There's no rational world where the best way to make tech policy is by nine older justices weighing in on
a case every 20-something years to sort of catch up on what's been going on. That is not how this
should happen. Absolutely, Congress, if it could get its act together or it could pass some
legislation enabling a digital agency that could be even more nimble and sort of,
you know, gather facts and understanding in which to make policy that's more sort of
finely attuned to the problem. And, you know, then we could talk, absolutely, Kara, you know,
you mentioned privacy and antitrust, that would be 100%, you know, the sort of place where I would
start. I would also really start on transparency legislation and data access, you know, what are
these platforms doing?
And are they doing what they say they're doing?
Let's get researchers in.
Exactly.
And that's where I'd start because you can't solve problems that you don't understand.
And I think that that's step one.
And the only other thing, you know, before we close,
as this has been a very sort of parochial conversation,
but there are other legislatures and Europe is taking action.
The Digital Services Act is coming. And so these platforms are going to have to change and adjust anyway,
because they're going to be regulated, you know, no matter what the Supreme Court does.
You know, that's a thing I say to every US legislator. I'm like, you're letting Margrethe Vestager, who likes to knit, run the internet, just FYI. And if it's not her, it's someone else. And like, I don't know, Australia.
Exactly. Those foreigners.
Oh God, the Australians. That's a terrible outcome. I was just doing that on you. Well, you know what, they do a lot better than we do. I really appreciate all three of you. It's been a very thoughtful discussion. Jeffrey, Hany, Evelyn, thank you so much, and we'll see where it goes.
Thank you. Thank you so much. Thanks, Kara.
If only we were with Margrethe Vestager.
Yes,
she's great.
By the way,
I don't mind her running the internet.
I'm perfectly happy.
Exactly.
She's a badass and she can run anything she wants.
That's my feeling on her.
I really appreciated how Evelyn was taking this libertarian stance, but she's right. She's not generally like that if you read her writing.
Yeah. And her papers. Well, it's a complex issue. I mean, I think that was the point. And so I think what's really nice, what I've seen here is the justices really coalesce in some way
that they understand the importance of this because they should not be meddling here. And
they knew it and they said it, thank God.
It's unclear, like they shouldn't be meddling here,
but they should have a philosophy around it.
And it might not be with these cases,
but there is something that's happening
with this argument around free speech
and internet infrastructure.
Maybe that is true.
But this is so fundamentally a congressional thing. That's crazy. You know, having Biden putting out statements about 230,
Trump just did it off the top of his head. It needs to be considered by our elected congressional officials how we want to do this.
I know, but so little happens there, Kara.
It's not true. They passed 230. Guess what? Yeah, 1996, very different Congress.
They did a great job. I know they can do it again.
I'm pulling for them.
Kara Swisher, Congress's main cheerleader.
They did a good job then.
And I haven't always supported 230.
There's gotta be some ways of them being liable
for some things.
That's through privacy, data, antitrust.
It'll work.
Go right to the business plan.
I thought Hany's point at the beginning was very interesting about Google kind of hiding behind this "we have to do X." You don't have to recommend; you do have to rank in search, is what he was saying, kind of.
That, to me, raised two really interesting questions.
One is, how should we think about where that line gets drawn?
And it is an interesting philosophical argument to say, okay, if there were no Section 230,
how would these companies behave?
They wouldn't exist.
They wouldn't exist or they would exist in a very narrow way or they would have approvals, right?
Like everything would need 15 minutes to appear.
Everything would be slower.
It would be too expensive.
It would be too expensive.
The whole thing would fall apart.
You would not have your TikTok or whatever.
But yeah, when you tell these companies to fix something, you tell these companies to fix child pornography, for example, they can figure it out.
So there is a little bit like chicken and egg.
Let me just tell you, hit it where it hurts.
Business plan, privacy, data, surveillance.
We can do those things.
Nothing to do with free speech.
They have nothing to do with 230 here.
And everybody chit-chatting away on the internet.
You can hit them where it hurts.
They need to be liable in those ways. So antitrust, privacy, et cetera.
Yeah. Transparency.
Transparency. The other thing that I think is remarkable here, and this was the kind of Mark
Zuckerberg 2018 Congress thing, is how I actually thought the court did a good job on the technical
elements, maybe better than Schnapper did, right? They did a good job because they didn't say they
knew what they don't know. Exactly. That's a very powerful thing to say, I don't know.
We need more young people to be in government and to be kind of grokking these issues and
thinking about this. Have we looked at the Supreme Court lately? I know, I know. Yeah, they're meeting at 5:30 for dinner. Anyway, let's go on. They did a good job here.
They did.
And as Jeffrey Rosen said, it's heartening to see the court doing, acting like a court.
I agree.
I was very upset by these cases because I think they're so stupid.
Thank you.
All right, Judge Swisher, read us out.
Today's show was produced by Nayeema Raza, Blakeney Schick, Cristian Castro Rossel, and Rafaela Siewert.
Rick Kwan engineered this episode.
Our theme music is by Trackademics.
If you're already following the show, you may proceed.
If not, you're out of order.
Go wherever you listen to podcasts,
search for On with Kara Swisher and hit follow. I'd be such a good judge.
Thanks for listening to On with Kara Swisher
from New York Magazine, the Vox Media Podcast Network, and us. We'll be back on Monday with more.
Support for this show comes from Stripe. Stripe is a payments and billing platform supporting millions of businesses around the world, including companies like Uber, BMW, and DoorDash. Stripe has helped countless startups and established companies alike
reach their growth targets, make progress on their missions, and reach more customers globally.
The platform offers a suite of specialized features and tools to fast-track growth,
like Stripe Billing, which makes it easy to handle subscription-based charges,
invoicing, and all recurring revenue management needs. You can learn how Stripe helps companies of all sizes make progress
at Stripe.com. That's Stripe.com to learn more. Stripe. Make progress.
Thanks to Huntress for their support. Keeping your data safe is important. However, if you're
a small business owner, then protecting the information of yourself, your company, and your workers is vital. In comes Huntress. Huntress is where fully
managed cybersecurity meets human expertise. They offer a revolutionary approach to managed security
that isn't all about tech. It's about real people providing real defense. When threats arise or issues occur,
their team of seasoned cyber experts is ready 24 hours a day,
365 days a year for support.
Visit huntress.com slash Vox to start a free trial or learn more.