Offline with Jon Favreau - Big Tech's Big Tobacco Moment
Episode Date: April 4, 2026
Mark Zuckerberg is finally being held accountable, not by government regulators, board members or shareholders, but by two lawsuits. Tech journalist Casey Newton, editor of Platformer, joins Offline to explain how a young woman in California beat Meta and Google on the grounds that Instagram and YouTube had destroyed her mental health. Jon and Casey discuss the strengths and weaknesses of the case, whether losing end-to-end encryption could lead to a surveillance state, and what happens if social platforms’ defensive shield, Section 230, is overturned. Then Jon speaks to New Mexico Attorney General Raúl Torrez about his successful lawsuit against Meta, how the social media company plans to appeal it, and whether the case he’s made could ultimately lead the Supreme Court to regulate this 21st century addiction.
Transcript
Offline is brought to you by Zbiotics Pre-Alcohol.
Let's face it, after a night with drinks,
we're not bouncing back like we used to because we're old.
We're in our 40s.
And honestly, if you're 30 or over and you're not using Zbiotics,
I need to pull you aside and have a personal talk with you about your life choices.
And also, don't think you're so great if you're in your 20s.
You could use Zbiotics in your 20s too.
And I think you just feel better.
Probably should be more varsity.
Zbiotics Pre-Alcohol Probiotic Drink is the world's first genetically engineered probiotic,
invented by PhD scientists to tackle rough mornings after drinking.
Here's how it works. When you drink, alcohol gets converted into a toxic byproduct in the gut.
It's a buildup of this byproduct, not dehydration, that's to blame for rough days after drinking.
Pre-alcohol produces an enzyme to break this byproduct down.
Just remember to make pre-alcohol your first drink of the night.
Drink responsibly, and you'll feel your best tomorrow.
I have zbiotics everywhere, all over the place.
My house and in my bag.
I kept running out, so I bought like a giant stockpile.
Yeah.
It's like, it's my strategic oil reserve.
That's my SPR.
I got to release the Zbiotics.
It's indispensable.
From the fairways in Augusta
to the first pitch of baseball season
and the start of festival circuits.
April is a sprint of outdoor celebrations.
Don't let a rough next day keep you on the sidelines.
Drink pre-alcohol to stay ahead of the game
and make the most of every sunny Saturday.
Go to zbiotics.com slash offline to learn more
and get 15% off your first order
when you use code offline at checkout.
Zbiotics is backed with a 100% money-back guarantee.
So if you're unsatisfied for any reason, they'll refund your money, no questions asked.
Remember to head to Zbiotics.com slash offline and use the code offline at checkout for 15% off.
In moments like these, it's easy to feel overwhelmed and even easier to feel powerless.
But we are neither.
I'm Stacey Abrams, and on my podcast, Assembly Required, I take on each executive action, legislative battle,
and breaking news moment by asking three questions.
What's really happening?
What can we do about it?
And how do we keep going together?
This is a space for clarity, strategy, and hope rooted in action, not denial.
New episodes of Assembly Required drop Tuesdays.
Tune in wherever you get your podcasts and on YouTube.
Hey, sweetie, your mother showed me this Carvana thing for selling the car.
I'm going to give it a try.
Wish me luck.
Me again, I put in the license plate.
It gave me an offer.
Unbelievable.
Okay, I accepted the offer.
They're picking it up Tuesday from the driveway.
I haven't even left my chair.
It's done.
The car is gone.
I'm holding a check.
Anyway, Carvana.
Give it a whirl.
Love you.
So good, you'll want to leave a voicemail about it.
Sell your car today on...
Carvana.
Pick up fees may apply.
Once a juror understands that a company has been researching this
and that the more they looked into it,
sort of the worse the stuff they found.
And then also that research kind of gets canceled
or the researchers get moved to other projects.
It kind of does start to feel like a big tobacco moment, right?
Yeah.
I'm Jon Favreau, and you just heard from tech journalist Casey Newton,
who is our guest this week, along with New Mexico Attorney General Raúl Torrez.
I think there was a huge development last week in the fight to free kids
from having their lives controlled by what's on their screens.
Something that worried me before I had kids and keeps me up at night now that I do.
It has to do with Mark Zuckerberg, one of the world's richest men who runs one of the
world's richest companies.
someone who spent most of his charmed life using money and power to remove whatever obstacles get in the way of what he wants.
And what he wants always seems to be more.
More users, more money, more market share, growth at any cost.
Even if that's meant violating people's privacy.
Even if it's meant stealing data or lying to investors.
Even if it's meant trying to bury mountains of Meta's own internal research about the harms that Facebook and Instagram have unleashed, about how addictive their products are, especially to children,
something the employees knew and talked to each other about.
Quote, oh good, we're going after 13-year-olds now, one wrote.
Targeting 11-year-olds feels like tobacco companies a couple decades ago.
Here's another.
No one wakes up thinking they want to maximize the number of times they open Instagram that day,
but that's exactly what our product teams are trying to do.
Then this exchange between two meta researchers.
Oh my gosh, y'all.
Instagram is a drug.
We're basically pushers.
We are causing reward deficit disorder
because people are binging on Instagram so much
they can't feel reward anymore.
Mark Zuckerberg has basically escaped
any kind of meaningful accountability for this
or anything else.
Huge regulatory fines haven't fazed him.
They're basically a rounding error.
Congress hauls him in to testify from time to time so they can yell at him, but they haven't really touched him.
Whistleblowers from inside his company who've come forward have been smeared and threatened with lawsuits.
I know I was supposed to interview one until she got hit with a gag order.
Zuckerberg is so used to getting his way that when local Hawaiians objected to his $300 million,
1,400-acre compound and underground bunker, because it was their land, where
their ancestors are buried, Mark actually tried to sue them, because he thinks the money and the
power allow him to get away with anything. And until last week, he was basically right. But now he
isn't. When he walked out of a courtroom here in Los Angeles last week, after taking the stand
in front of a jury for the first time, Mark Zuckerberg was finally held accountable. Not by
government regulators or board members or shareholders, but by a young woman named Kaylee from
his own backyard in Northern California. Kaylee was on YouTube when she was six, and Instagram by nine.
She said that she initially got a rush from all the likes and notifications, which during class,
she would run to the bathroom to check because she was panicked that she might be missing out on something.
Pretty soon she was spending all her time on the platform. She stopped hanging out with her family,
She stopped making friends.
She hit 16 hours a day on Instagram.
She tried setting time limits.
It didn't work.
Her mom tried parental controls.
That didn't work either.
She was bullied and sexually extorted,
and she still couldn't keep herself off the platform.
She bought likes and she added filters,
but all the other filtered photos
made her more insecure about how she looked.
She couldn't sleep.
She became depressed.
She started cutting herself.
She contemplated suicide. But eventually, she got help. She also got a lawyer. And when she was 17 years old,
she sued Mark Zuckerberg. For the first time in history, a jury held that both Meta and YouTube,
also a defendant in this case, were negligent in the design of their platforms. Jurors found that
the company's negligence was a substantial factor in causing Kaylee harm, and that they had failed
to warn users about dangers that the companies themselves had long been aware of.
But there was more.
The day before the LA verdict, a jury in New Mexico found that Meta violated their state's
consumer protection laws by designing a product that fails to protect children from predators.
The result of a lawsuit brought by New Mexico Attorney General Raúl Torrez,
whose office set up an undercover investigation where they created a fake Instagram profile of a 13-year-old girl
that was almost immediately flooded with messages from child predators, three of whom were then arrested.
The combined damages in the LA and New Mexico cases amount to a few hundred million dollars,
which is, again, a rounding error for Meta.
But the money isn't really what matters here.
What matters is that Meta and the rest of the social media giants have now lost the legal shield
that has protected them for 30 years.
Because Kaylee didn't sue them over the content on their platforms.
She sued them because their platforms are defective.
Because the product's design isn't safe for all users, especially children.
Meta knew that and didn't tell us that.
None of them did.
And so for the first time, these verdicts might finally force tech giants to do what no one else has been able to make them do.
Fix the design.
Make it safer.
Get rid of social media's most addictive, harmful features, infinite scroll, auto play, push notifications, beauty filters, even algorithmic recommendations.
This is all on the table now for these juries and judges.
And there will be many more.
2,000 similar pending lawsuits will now move forward,
including a massive federal case with 1,600 plaintiffs that starts this summer.
Meta is not happy.
They will appeal.
They will keep making the same argument they made with Mark in the L.A. trial.
That Kaylee's problem wasn't Instagram, it was Kaylee, or her mother,
or anything else in her life that wasn't Instagram.
They'll keep arguing that their right to free expression protects them from being forced to change
their platforms. And to be honest, I totally get why so many people are concerned that these
verdicts could also end up forcing social media companies to have more censorship and surveillance
on their platforms. Ideally, you would pass a law that deals with social media's most
harmful features while still protecting speech and privacy, especially for adults. But that would
require a functioning Congress and a president who wasn't the most powerful living example
of social media brain rot. So here we are.
And I think that whatever reservations people might have, most Americans understand what those jurors understood.
That freedom of expression does not include the freedom to design an addictive product that you know to be harmful, especially to children.
This isn't some abstract legal debate.
It isn't some moral panic.
It's what the people who've built and sold these products have said themselves, even though their bosses tried to bury the truth.
And most of us are sick of it.
All kinds of people.
People with different politics, different backgrounds, people without kids, people with kids,
and the kids themselves.
They don't want to spend their childhood stuck in their feeds.
Most of us don't want these tech companies to keep stealing more and more of our attention
just so they can make another billion.
And we certainly don't have much confidence that the next set of tech gods
creating super-intelligent robots will do a better job than the geniuses who blessed us with the algorithm, probably because they're run by some of the very same people, like Mark Zuckerberg.
The anger and disgust that most Americans feel towards big tech is real. It's become a potent
political force with an organized, growing movement behind it. What's needed now are political
leaders willing to listen, take up this fight, and rally the country around a future where we
control the technology that shapes our lives, not the other way around. At the end of the day,
that's all the families who filed these lawsuits and cheered these verdicts really want.
As the trial ended here in L.A., some of those families were standing outside the courthouse,
holding up photos of their children, their sons and daughters who struggled with depression
and eating disorders, kids who had taken their own lives.
These parents have been showing up to courthouses and congressional hearings and school board meetings
for years now, holding up those photos, begging someone to listen.
Thank God that last week, a jury of 12 people in Los Angeles finally did.
Up next, my conversation with New Mexico Attorney General Raúl Torrez.
Attorney General Torrez, welcome to Offline.
Thanks for having me. I appreciate it.
So you just won a landmark verdict against Meta based on a lawsuit you filed in 2023,
after an undercover operation where your office created a fake profile of a 13-year-old girl.
What happened after you created that profile, and what did it tell you about what Meta already knew about their product?
Well, what we were trying to do is recreate the actual experience of a young person who is new to the platform.
We had been hearing from our law enforcement officers inside the agency that a lot of the predatory behavior that we were most concerned about had migrated to these spaces.
And so we were just trying to test and see what happened.
And she was flooded with sexually explicit material, requests for, you know, some kind of real-world interaction.
And what was most shocking is instead of flagging this explosive growth in this young girl's account, the company actually sent her information about how to monetize her following and how to grow her following.
And that was the moment for me.
I was like, you know, we really got to dig into this and go a whole lot deeper.
So I guess the sort of parental controls that Instagram offers didn't really do anything in
this case. Yeah, no. I mean, what you saw again and again, every time we pulled back another curtain
inside the company, you saw all of these communications, emails, and information that was being
shared about not only the addictive nature of the product, how harmful it was to kids,
but their very clear awareness of all the predators that were there. And to match that
and compare that with what they have been saying publicly, with what Mark Zuckerberg has been
saying publicly, was something I think really prompted the jury. I mean, they heard six weeks
of testimony, came back with a decision in less than a day. And my sense is they were trying
to send a message. And so hopefully everyone who's been paying attention to this case really
starts to understand the sense of urgency that I think people in the community have about it.
So the jury awarded the maximum penalty per violation, $5,000 each, but the total of $375 million
was under the $2 billion you asked for.
One juror said they compromised on the number of violations but maxed out on the penalty per child.
How did you read that?
Well, I mean, to your point, I think they did compromise.
We were looking for, you know, something that captured the full extent of the harm,
all the underage kids that were on the platform.
I think they took a compromise and went with a number that represented the estimate of kids that might have actually been harmed.
The thing to remember, though, is that that $5,000 penalty hasn't been changed since 1970,
when we first enacted this consumer protection law. Had it been adjusted for inflation, it would be about $40,000 per violation, and that would have pushed the result to just under $3 billion. And so one of the things that we're doing in the aftermath of this verdict is pushing to
both expand the definition of what's covered under the act, but really ratchet up those
penalties because I recognize, I think everybody recognizes that that's not a big enough
stick for a company that has this many resources and engaged in this kind of, you know,
commerce all over the world. We need to have stronger deterrence. It's something that I'm working
on. I'm also really trying to push the other AGs around the country to really
re-examine our consumer protection laws because most of them haven't been updated in years.
I read that you're also going back to the table in May to ask the judge for additional financial
penalties and a ruling that would force Meta to make changes in their apps. Can you talk more
about what specifically you'll be asking for? So the judge separated out our public nuisance
claim and so we're going to come back. We're going to really present more evidence about how much
harm the company's products have caused here in New Mexico, and we'll be asking for additional monetary
penalties. But the more important piece of the presentation that's going to happen in May is on
our request for injunctive relief. That means real age verification, changes to the algorithm
where they stop bombarding kids with notifications during the school day and in the middle of the
night, changes to infinite scroll, to autoplay of videos. And we're going to actually be asking the
court to set up an independent monitor, hopefully relying on technologists and experts from
around the country to help us design very clear and specific features to create a safer environment
there. The cool thing about it is that if we can do this here effectively, we can actually
establish a blueprint for what can happen around the rest of the country and around the world.
So I think there's a real opportunity for us to change fundamentally the way this company does
business. So this one case, if it's held up on appeal and if the judge agrees, like, it could lead to
maybe the end of infinite scroll, of some of these push notifications for children,
age verification, just like all across meta and perhaps other social media companies as well?
The jurisdiction of this court is obviously limited to the state of New Mexico. So what we would
effectively be doing is asking them if they're going to continue to do this business in New
Mexico, they're going to have to come up with a different standard of doing that business here.
But once they've established that, like once we've gone through the process of doing it,
if we prevail on appeal and can establish the feasibility of implementing these changes,
we could actually change it across the board for this company and set a new benchmark for the industry.
Now, look, I wish Congress would wake up and put this at the top of their agenda.
I think this is a place where there's a lot of bipartisan opportunity for meaningful change,
but they have been stuck in place.
And so if we have to do this through a court process, through litigation process, I'm going to just push forward.
But I think this is an opportunity to kind of use litigation to prompt some higher level policy engagement in Congress.
And that's what we're really hoping for.
So Meta's argument was that this case is still really about content, not design.
You know, they're saying calling it consumer protection is just a way to get around Section 230, which, as you know, essentially shields social media companies like Meta from being held liable for the content on their platforms.
But it's not just meta making this argument.
Mike Masnick at TechDirt called your verdict, quote, a really problematic result that easily should have been tossed on 230 grounds.
What's your response to people who say this is a speech case dressed up as a consumer protection case?
Well, I think they don't really understand the nature of the evidence that was presented. I don't think they understand the nature of the legal arguments that were made. We weren't focused on specific third-party content, which, as you noted, is what Section 230 is all about. This is about specific design choices and features that have made this an addictive and dangerous product. And it's also about the affirmative misrepresentations that the company has made. And one thing that is clear is that when you build a product, and the design choices that are built into it create known harms, and then you lie to people about those harms, that is outside of the ambit of
Section 230. And so, you know, again, meta and other tech companies have been hiding behind
Section 230 for the last 30 years. And, you know, I'm assuming they're going to be, you know,
centrally focused on that in their appeal.
I don't have a sense that this is going to change,
at least with respect to the judiciary here in New Mexico.
Now, whether or not they can get some of the more conservative justices
on the court to bite or even some others
who are concerned about that aspect of their defense,
it remains to be seen.
I think from the public's perspective,
we ought to be able to create some basic safety standards
around, you know, these types of spaces without infringing on expression, content, things of that nature,
because I'm sensitive to that. But I also don't want to live in a world where we have to live
with exploitation and addiction and all of this harmful activity as a price that we're forced to pay
because, you know, Mark Zuckerberg claims that he's some pamphleteer from the 18th century when he's not.
Yeah, I wanted to get into the tension around balancing sort of protecting users with protecting privacy.
So internal Meta documents showed that encrypting Messenger would impact roughly 7.5 million child sexual abuse reports to law enforcement.
And then mid-trial meta announced that they were going to roll back encryption on Instagram direct messages.
I also talked to tech journalist Casey Newton for this episode,
who noted with some alarm that this is the first time a major platform has ever rolled back encryption protections.
And he said that we shouldn't have to give up our basic right to privacy so cops can make fewer phone calls.
What do you say to that and sort of the general concern about, because I've heard this from a few places, that like, you know, infinite scroll, auto play.
Some of these features people could live without, and they say, okay, those aren't 230, those aren't content. But encryption, you know, once government, especially this government, could break encryption, that's not only going to affect children, that affects people's privacy all over the country.
Yeah, so, you know, I read that same comment. Again, I think
this is probably the view of somebody who doesn't share the perspective of people, you know,
like myself, who worked in child pornography and child solicitation cases for a number of years. And one really
important piece of context is Meta and Mark Zuckerberg decided to go to end-to-end encryption
the day after we filed this lawsuit. So I'll leave it to you to decide whether or not their
motivation was really protecting the privacy interests of their users or whether it had to do
with shielding themselves from liability. My view is that the lawyers got around the table
and said, hey, as long as we can see all of this solicitation between minors who we've lured
onto this platform and predators that we've failed to kick off, we're on the hook.
But if we blind ourselves by implementing end-to-end encryption, you know, we get to hide behind
that.
And by the way, you can tell the marketing department to dress it up as privacy, even though we
literally track every single piece of information that we can track about every single user
that we have.
I don't think people are buying that.
And I also think that, to your point, the fact that they were as a result of that decision
shielding referrals to law enforcement,
I think that got to the ultimate decision
to roll that back because it wasn't something
that was defensible in court.
And to that last piece about cops having to make phone calls,
it's not cops making phone calls.
I don't have access to that information.
This is about a company that can see
whether or not a 40-year-old man
is trying to solicit a 12-year-old girl
on their platform for sex.
and if they have that information, then I would hope that they were going to be sharing that with law enforcement.
But I think it's a distortion to equate the lack of end-to-end encryption with someone in government having immediate access to everyone's private communication,
because that's not what this has been about.
The other piece is when it comes to having kids online, look, if it's adults communicating with other adults and there's end-to-end encryption,
I don't have any problem with that.
When it's a 50-year-old man
communicating with a kid down the street from me,
I have a very serious problem with that.
And I think most Americans can walk and chew gum
at the same time.
We can craft solutions
that both protect basic privacy interests
without putting kids at risk.
Yeah, I mean, the way I was looking at this
is I can see on an app like Instagram
where it seems like if you're going to have
encrypted DMs on an app
that is also algorithmically
connecting you to strangers, then that's a problem, especially for children. I wonder, like,
does this mean that for encrypted apps like WhatsApp, Signal, even iMessage, that, like,
there has to be age verification because you don't want kids on encrypted messaging apps at all?
Yeah, age verification is going to be key. It's going to be part of what we talk about in
the May presentation on public nuisance. And we're going to be asking the judge to really start exploring real age verification, for precisely that reason: that we have to have
different guardrails based on the ages of the users that are in these spaces and the potential
harm to those users. Again, if it's end-to-end encryption between adults in these spaces,
you know, I'm not really interested in talking about that. You could solve part of the
end-to-end problem by just making a blanket rule where no one over a certain age
who is unknown to a minor can connect with that minor.
They can't communicate with them.
There are companies in the space that have taken that step.
With respect to coming up with a more nuanced solution,
there are opportunities to develop actual technology.
It's imperfect, but it can do age estimation based on, sort of, the angles, right?
Every time you look at a camera, it has the ability to estimate age.
Now, it's not perfect, but it sidesteps the problem that other people have correctly
identified of uploading and sharing maybe sensitive personal information on an ID or something
like that. But I think the real way we have to start thinking about it is lawmakers and
policymakers, if they're going to engage in meaningful tech regulation, they have to start
iterating the way technologists do. The problem is we created Section 230 in 1996 and we walked
away and decided not to do anything. And it sat there for 30 years. And it went from a moment when I was
waiting for my dial-up tone on AOL to now a time where there's more computing power in my
pocket than there used to be in my laptop. And we haven't changed the regulatory or legislative
framework to keep pace with technology. I think policymakers have to just get comfortable with
iterating around these spaces, understanding that you're never going to be completely in
alignment, but having some basic priorities. And that should start with making sure kids are safe
in these spaces.
So there are now over 40 state AGs with lawsuits against Meta, thousands of pending cases that will now move forward.
Are you coordinating with the other attorneys general?
Is there a legal strategy here that is analogous to what happened with big tobacco in the 90s?
Yeah, I mean, I've been hearing from my colleagues around the country.
I'm aware of the action that they put together.
Ours was a little different because we focused on exploitation so heavily.
And so there was a different sort of evidentiary basis, but we did have elements where we talked about addictive design, we talked about some of those
other features. We are sharing some of the notes and the feedback that we have from our litigation
team with them to sort of inform how to make those presentations and those arguments. I think more
generally I'm trying to get all of my colleagues to re-examine their underlying consumer protection
laws. I'm in the process of trying to redesign ours, right? I mean, 1970 is a long time to go without
meaningful changes in those spaces. But I think that instead of coming up with all of these
specific sort of bespoke solutions to technology challenges that are, that are, you know,
really pressing in the moment, but change over time, I think we should look more broadly
at the kind of authority that we have to really get into this space.
and try to protect people.
So we're working both on litigation
and potential legislation at the same time.
And hopefully, like I said,
it's a moment where after six weeks of evidence,
this jury came back in less than a day.
That's a pretty powerful signal.
And I hope that the companies heard that signal,
but more importantly, members of Congress did too,
because I think that's where we really need to see
some action taken on these issues.
You mentioned the Supreme Court
where this case, or one of these cases, could end up. Have you thought about this court, with this composition of justices,
what, you know, what kind of arguments you think would be persuasive to, you know, some of the
more conservative members of the court or just members of the court who maybe haven't been
as forward-leaning as you were on this case?
Yeah, I actually think it's something that will be centered probably more at the
middle of the court, because I can see folks on both the left or the right who have a
maximalist interpretation of some of the sort of free speech rulings when it comes to
corporations being more susceptible to an argument advanced by meta. But my sense is that
there is a middle ground where you can start identifying the unique
harms that product design and misrepresentation presents to kids and to young people and the
vulnerable populations and that that will be a way to distinguish this type of action from
those that are obviously, you know, based on content, obviously motivated by a
political or ideological motivation. I think by keeping this centered on child welfare, there's a
there's a real possibility that you can get some combination of moderates or persuadable Republicans
to step up and sign on to a decision that better protects these kids.
Attorney General Raúl Torrez, thank you so much for taking the time and talking about this case
and the strategy going forward.
Really appreciate it.
Thanks for taking the time.
Up next, my conversation with Hard Fork co-host and Platformer author Casey Newton.
But first, if you love Dan's analysis on Pod Save America, take a listen to our subscriber-exclusive pod, Polar Coaster.
It's like having a really smart friend break it all down for you. I love Polar Coaster.
I never miss an episode. It is great to hear Dan, one of the smartest political strategists I know,
break down polls. He's also one of the biggest polling nerds I know, and it's a fantastic show.
So check it out. You can get that show, and a whole bunch of other subscriber-only shows if you
subscribe to Friends of the Pod. You can also get ad-free episodes of Pod Save America, Offline,
Love It or Leave It, Pod Save the World,
all your favorite Crooked pods.
We have an extra episode of Pod Save America called Pod Save America Only Friends that subscribers get access to.
You also get access to our growing list of excellent Substack newsletters.
And you get to feel good about supporting independent pro-democracy media.
So hit pause and subscribe to Friends of the Pod right now at crooked.com slash friends.
This episode is sponsored by BetterHelp. Whether you're dealing with anxiety, depression, relationship conflict,
or simply need an impartial third party to help you deal with daily stress,
BetterHelp is there to connect you with the support you need.
BetterHelp therapists work according to a strict code of conduct and are fully licensed in the U.S.
BetterHelp does the initial matching work for you so you can focus on your therapy goals.
A short questionnaire helps identify your needs and preferences, and their 12-plus years of experience
and industry-leading match fulfillment rate mean they typically get it right the first time.
If you aren't happy with your match, switch to a different therapist at any time from their tailored recs.
With over 30,000 therapists, BetterHelp is the world's largest online therapy platform, having served over 6 million people globally. And it works, with an average rating of 4.9 out of 5 for a live session based on over 1.7 million client reviews. When life feels overwhelming, therapy can help. Sign up and get 10% off at betterhelp.com slash offline. That's betterhelp.com slash offline.
Offline is brought to you by Mint Mobile. I don't know about you, but I like keeping my money where I can see it. Unfortunately, traditional big wireless carriers also seem
to like keeping your money, too.
After years of overpaying for wireless,
if you're fed up with crazy high wireless bills,
bogus fees.
I'm fed up.
And free perks that actually cost more in the long run.
You say free perks?
Oh, different things.
Yeah.
Then switch to Mint Mobile.
You could be saving a lot with Mint Mobile.
Have you checked how much you're paying a month for your mobile phone bill?
Probably not.
I get that.
Unfortunately, my mobile phone bill gets texted to me by my mobile phone company.
and every time I think that is insane.
How is that possible?
That is an outrageous number.
Maybe you should think about Mint Mobile.
Stop overpaying for wireless just because that's how it's always been.
Mint exists purely to fix that.
Mint Mobile is here to rescue you with premium wireless plans starting at $15 a month.
All plans come with high-speed data and unlimited talk and text delivered on the nation's largest 5G network.
Bring your own phone and number.
Activate with ESIM in minutes and start saving immediately.
No long-term contracts, no hassle.
Ditch overpriced wireless and get three months of premium wireless service from Mint Mobile for 15
bucks a month. If you like your money, Mint Mobile is for you. Shop plans at mintmobile.com slash
offline. That's mintmobile.com slash offline. Upfront payment of $45 for three-month,
five-gigabyte plan required, equivalent to $15 a month. New customer offer for first three months
only, then full-price plan options available. Taxes and fees extra, see Mint Mobile for details.
Casey, welcome to Offline. Hey, thanks for having me, Jon. I want to talk to you about
Meta's rough week in court. Juries in two different cases held the company liable for designing a
product that harmed consumers, in these cases, children. I'm also talking to New Mexico Attorney General
Raúl Torrez for this episode. The other big case was here in L.A., where Mark Zuckerberg himself
took the stand, and the jury found that Meta's design features, as well as YouTube's, harmed a young
woman's health. I know you covered that case closely. A common take I've seen is that this is big
tech's big tobacco moment. Do you agree? How big of a deal is this? I agree that it is a big deal.
And I think that over the past couple of years, the world has been coming around more and more to this framing of the issues surrounding social media as a kind of public health crisis, right?
It seems like there is something about these apps that produce really harmful effects for some subset of the population.
And this was the first moment that juries actually were able to find a legal path to hold them accountable.
What was some of the most damning testimony and evidence against META in your view from this trial?
Yeah. So, I mean, in the trial itself, it seems like jurors were really swayed by the internal research that Meta had done, in which their own researchers had found that, again, for some subset of users of Instagram, there were negative mental health effects.
Now, you know, Meta would say, well, you know, those effects were exaggerated and, you know, you're sort of leaving
out a lot of context here. But I think once a juror understands that a company has been researching
this and that the more they looked into it, sort of the worse stuff they found, and then also
that research kind of gets canceled or the researchers get moved to other projects, it kind of does
start to feel like a big tobacco moment, right? Yeah. What was Meta's defense to that in the trial?
Well, they said essentially the effects that you are talking about at trial were cherry-picked,
and we can show you lots of other data that shows that the vast majority of people never experience a problem here.
And also, some of the research that we have done is why we have added various features that are designed to help you mitigate the effects of the thing that we built.
Yeah. And it seems like they also tried to argue that this young woman had preexisting problems and issues and with her family and with other struggles.
and that somehow because of that, they couldn't be held liable.
Yes, although the Surgeon General under President Biden,
when he did a big report on this subject,
one of the things that he found was that it was precisely the teens
who have preexisting mental health conditions
who are more at risk of these terrible outcomes on these platforms.
So simply to say, oh, well, she doesn't count
because she was already having mental health problems,
it's like, the whole problem
is that you're serving millions of people who have mental health problems.
And we just know that Instagram and other social apps can be really bad for those folks.
Yeah.
And it seems like the key is that the jurors didn't have to find that Meta and YouTube
were the sole cause of the mental health problems,
but only that they were, I think it was, like, a significant factor.
Yeah.
And again, like that's, that really is a big deal because for the past 30 years,
platforms have been insulated from these kinds of attacks, right?
they've been able to hold up Section 230 and say, we are not responsible essentially for anything
that happens here. And so what's really been fascinating to me about this case is that it seems
like the plaintiff's lawyers have finally found a way through that shield and juries are responding
to it. Yeah, I want to get into that shield even more. But I did see that jurors said they
were unimpressed by Mark's testimony. Shocking, I know. The judge also didn't seem all that
impressed with his team recording the proceedings via their Meta AI glasses. I guess that was a no-no.
What did you make of Mark's testimony and his general posture throughout the trial?
You know, I think basically since Cambridge Analytica, the sort of, you know, 2017 post-Trump
election backlash, Meta has been in this posture of delay, deny, deflect, right? And Zuckerberg
has been carefully trained to sort of give the least that he can get away with. And this has just
mostly worked for him, right? Like, this is a guy who's gone before Congress a lot, has been asked a lot of
the same questions. He, you know, chokes out a few words. Then he gets interrupted. And I think,
you know, it really wasn't all that different at trial, right? He doesn't really give folks almost
anything. But that wound up costing him, right? Because I think a lot of what the jurors are responding to
is the idea that like, hey, like the people who are getting hurt, like the plaintiff in this trial,
like this is a real person. This is like not some statistical abstraction. And there are a lot more
people like her. And like because the executives of these companies can't really speak to that,
increasingly they're getting in trouble. So you just published a really thoughtful piece about
what these verdicts mean for the wider internet. And you sort of laid out three camps,
three different reactions to the verdict: the plaintiffs, who are euphoric; the defendants,
who plan to appeal; and then
writers and thinkers who worry these verdicts could break the basic compact that holds the
internet together. And this is what you were just getting at with Section 230. Talk to me about
the concerns of that third group. So a good thing about the internet that we have arguably,
I don't know, maybe some people would disagree, is that you can have very wide-ranging
political discussions on there. You can say really edgy things. You can say ideas that are
sort of fringy and even a little bit dangerous. And one of the big reasons that you can do that
is that the platforms are just confident that if they get sued over this, they can get the suit
tossed rather easily. So you can imagine a lot of, like, stuff that people were saying about COVID
in the early days, like turned out to be true, but was like super edgy at the time. The platforms just
sort of mostly let it happen. The fear is that if the Section 230 shield disappears, all of a sudden
platforms are going to start over-moderating content. They're going to
say, hey, this is starting to feel a little bit spicy. Like, maybe it's a red state where we have a lot of
laws targeting LGBT people. Maybe in that state, we don't want to permit quite as much discussion
of LGBT issues, right? And all of a sudden, like the surface area available for us to have
public conversations shrinks. So that's one of the big fears. But depending on how the cases get
adjudicated, there are even worse ones. And there's one in particular about New Mexico that I'd love
to talk about. Yes. And I do want to get to that, but the New Mexico case
is about encrypted communications,
so I think we should put that aside for now.
Because with this case, I think what was novel about it
and innovative in the legal strategy is they did not go after
content moderation.
And they basically said, yeah, of course platforms
can still have legal liability shields
from getting sued for user content.
But this is about the design itself.
And so we should be able to regulate some of these features, infinite scroll,
algorithmic choices that are made.
Some of the, I'm trying to think, what are the other ones?
Autoplay video.
Autoplay, yeah, that was the other big one.
Autoplay.
And those don't have to do with necessarily with free speech and free expression.
Right.
And so like this is the argument that I'm trying to make is that like content and
design sort of exist along a spectrum. There are some things that I think most of us can agree
are mostly just design. Like the decision to send you 12 push notifications after midnight when
you're a teenager trying to sleep. That's really like a design decision, not a content decision, right?
And then, you know, then there's like literally what subjects can you talk about and will we remove
them from the platform? That's like obviously a content decision. My argument has been like,
let's try to find those design things that like we can develop a consensus around. And like,
particularly when they seem to serve no real social purpose,
I would argue that like autoplay video, infinite scroll,
are like probably in that category.
And maybe we can go after those and still have a section 230
that enables the rest of us to have political discussions.
Where I think it gets really tricky is around the algorithms.
Because I think most of us have this sense in our gut
that the reason that I can't stop looking at Instagram
and the reason I keep reinstalling it every time I delete it
is because I just know it's going to show me something good, right?
that casino effect is working, and I just want to pull the lever of that slot machine.
There are real difficult questions there about whether these algorithmic recommendations are
protected speech under either Section 230 or the First Amendment, and that I think is just going
to be a lot harder to untangle.
Yeah, that seems like the feature that's the trickiest to me, because infinite scroll, autoplay,
notifications, I do think it's hard to argue that those are expressions of free speech.
but a recommendation algorithm,
like basically if you're telling a platform
what it can and can't recommend,
does that start to feel like regulating speech?
Because is that like telling a newspaper
or a TV news program
which stories they can air and which they can't?
Absolutely.
And you could just see the way that that could be used
against the media in ways that we wouldn't really like.
I do think there is a potential path forward here, though,
which is just trying to regulate this by age, right?
Like, I think, look, once you're an adult,
your hippocampus is fully formed,
if you want to spend eight hours staring at TikTok every day,
like God bless, go for it.
If you're 14, we might want to give you a little bit more protection.
And so maybe they don't regulate the actual content of the algorithm,
but they say, look, if you're under 18,
we're going to prevent these companies from personalizing it too much, right?
Like maybe we'll allow them to do some very high-level personalization,
but we're not going to like fixate on your absolute exact interest.
So if there's any path forward there, I think it might look something like that.
So you said the most alarming part of these verdicts was how the New Mexico case implicated encryption.
Meta actually, and you wrote about this as well, ended encryption on Instagram DMs mid-trial in the New Mexico case.
You noted that that's the first time a major platform has ever rolled back encryption protections.
You know, A.G. Torrez would say that encryption enabled predators
to go after children in the dark.
You'd say, and I'm quoting your piece here,
that we shouldn't have to give up our basic right to privacy
so cops can make fewer phone calls.
How do you resolve that?
Well, I think A.G. Torrez needs to mind his business.
Like, we know that cops want to spy on us.
They have always wanted to spy on us.
And what we have said is, no, you're not allowed to
because we have privacy rights.
So, like, look, I don't want to be too glib about this. I
understand there are really painful tradeoffs involved when you allow folks to have encrypted
speech. But look, in the world we're living in, I truly do not want the state to be able to spy on
all of my communications. And I think we just have to absorb the cost of that and find other ways
to catch predators. And by the way, there are other ways to catch predators, right? Yeah.
So to me, I thought a lot about this. And it feels like you need spaces, or you need platforms,
like WhatsApp, Signal,
iMessage, I guess,
where, like,
encryption is protected and guaranteed,
and there are places
where you can communicate with people
where you do not have to worry
about the government spying on you,
just like in real life, right?
Just like, pretend we didn't have any of these,
there should be places where you can go
with someone one-on-one and have a conversation with them.
I wonder, like, I was thinking about the Instagram DMs
and encryption there, on
platforms where they also have
these recommendation algorithms, discovery, where they are connecting you with a bunch of strangers,
and then those strangers can have conversations with you that are encrypted. That seems like
less of a, you know, a sure thing in terms of, like, keeping that encrypted. Yeah, I think that's
fair. And I've spoken with employees at Meta who have made the same case to me, like even folks
who are generally pro encryption, they're like, look, on the subject of Instagram, because it is a place
where strangers meet, we might want to make encryption, like, at the very least, not the default.
I talked to some who are sort of happy to see it go away. I can live with encryption on Instagram going
away. In fact, they never even rolled it out to most people. But what I object to is for the
Attorney General of New Mexico to be able to say that because Meta offered encryption, the platform
was inherently unsafe. In fact, I'd be willing to bet that to the extent any of these teenagers
did have encryption on Instagram, it probably did keep some of them safe, but just by allowing them
to have private conversations without the state snooping.
And by the way, I guess if that is the finding,
then that means that WhatsApp is a defective product just by its nature.
And so is signal.
And so are these other places where people are having encrypted communications.
Yeah, that just feels like a true slippery slope.
And it is why, like, you know, I want to be reasonable on most issues of tech policy.
I try to be just kind of a real hardliner about encryption because it's just so easy for the whole thing to unravel once we start going down this road.
Yeah.
Offline is brought to you by Quince.
This time of year might make you rethink what's in your closet.
You want to move away from clutter toward high-quality pieces you can actually live in.
That's why you should check out Quince.
The fabrics feel elevated, the fits are thoughtful, and the pricing actually makes sense, too.
Quince makes high-quality everyday essentials using premium materials.
Their 100% European linen pants and shirts for men are lightweight, breathable, and comfortable.
Basically the perfect layer for spring.
The pants strike the right balance between laid back and refined,
so you look put together without trying too hard.
And their flow-knit activewear: moisture-wicking, anti-odor.
I love anti-odor.
That's important.
And soft enough that you'll actually want to wear it all day.
The best part is their prices are 50 to 60% less than similar brands.
How?
Quince works directly with ethical factories and cuts out the middleman.
So you're paying for quality, not brand markup.
Everything is designed to last and make getting dressed easy.
Love Quince.
I've got to go online and get some more spring stuff, because I go online, like, once a month
to go to Quince and see what they've got.
They always get new stuff, and it's always comfortable and it's always affordable.
It's getting hot out.
Refresh your wardrobe with Quince.
Go to quince.com slash offline for free shipping and 365-day returns.
Now available in Canada too, go to Q-U-I-N-C-E.com for free shipping and 365-day returns.
Quince.com slash offline.
In moments like these, it's easy to feel overwhelmed and even easier to feel powerless.
But we are neither.
I'm Stacey Abrams, and on my podcast, Assembly Required,
I take on each executive action, legislative battle, and breaking news moment by asking three questions.
What's really happening?
What can we do about it?
And how do we keep going together?
This is a space for clarity, strategy, and hope rooted in action, not denial.
New episodes of Assembly Required drop Tuesdays.
Tune in wherever you get your podcast and on YouTube.
Mom, can you tell me a story?
Sure.
Once upon a time, a mom needed a new car.
Was she brave?
She was tired, mostly.
But she went to Carvana.com and found a great car at a great price.
No secret treasure map required.
Did you have to find a dragon?
Nope, she bought it 100% online, from her bed, actually.
Was it scary?
Honey, it was as unscary as car buying could be.
Did the car have a sunroof?
It did, actually.
Okay, good story.
Car buying you'll want to tell stories about.
Buy your car today on...
Carvana.
Delivery fees may apply.
Eric Goldman, the Section 230 scholar, you cite him in your piece, says the social media industry now faces existential legal liability and will need to reconfigure their core offerings if they can't get relief on appeal.
There are about 2,000 pending lawsuits, a massive federal trial this summer with 1,600 plaintiffs, and 40-plus state attorneys general have filed suits against Meta.
Do you agree it's existential?
And, like, what kind of design changes do you think Meta might contemplate making, or be forced
to make, to settle or prevent future litigation?
Yeah, it's a great question.
Is it, like, existential in the sense that maybe Meta will be out of business by the end of the
year?
No, I don't think it's existential in that way.
Are they going to have to rethink some of the features of the platform if these cases
get upheld on appeal?
Yeah, I think they will.
Where it gets tricky, and this is one of the problems with having juries decide this
sort of thing instead of Congress, is there's no legal standard now for what
constitutes a safe platform, right? Like, there's no rule anywhere that says, well, if you just get
rid of autoplay video and infinite scroll and you don't personalize the algorithm too much, we will
consider you non-defective. And so to some extent, the platforms are just going to have to guess.
On the other hand, these platforms also employ behavioral scientists with PhDs who are working
around the clock trying to exploit every feature of your brain that will get you to stare at the
glass rectangle longer. Maybe the platforms could just say, hey, stop that, knock it off. Maybe
roll back the last 15 things we did in that regard, and maybe they would be a little bit less
hypnotic. Yeah, because I thought about this too, and I'm like, okay, what makes this different
from any kind of media company trying to keep its audience, right? Which is you design your programming,
whether it's TV, whether it's film, whether it's a newspaper or magazine, because you want people
coming back, even a book, right? It has cliffhangers, right? You want people coming back for more.
But what's different is, at least, you know, all of those media are produced for, it's the same media produced for everyone.
This is now, like, individualized, burrowed down into your brain, know-what-you-want style stuff that we've just never dealt with before.
And so the psychological effects of that, as we're seeing, and the psychological harms, are just so much different than any other media we've had.
Absolutely.
Like, again, Section 230 was in large part passed because people were defaming each other on platforms and people were
suing the platforms. And lawmakers at the time said, hey, we're just never going to have an internet
if you can sue a platform out of existence because two users were mean to each other.
We did not predict the world of infinite scroll and autoplay video and cognitive scientists who
were measuring the scroll depth on your phone to the exact pixel that you scroll down
and understanding exactly what video you were watching and how that relates to the 80 million
other videos they might show you in the moment. So we just have to kind of account for the growing
technological sophistication of these platforms and how good they've gotten at hacking our brain.
I do want to just zoom out on meta for a second. They have pivoted away from the Metaverse,
despite renaming the entire company after it, to the tune of about $80 billion in losses,
hundreds of layoffs just this month. What is Meta's identity right now? Does Zuckerberg have a coherent
strategy, or is he just trying to survive? I think he really has been in survival mode.
You know, interestingly, the Metaverse was also a survival thing because at the time, he was just
having such huge conflicts with Tim Cook over at Apple, he felt like, unless I own the hardware of
the next generation, like, I'm always going to be subject to this one person's whim. So he wanted to
go out and build it himself. It turned out not too many people wanted to follow him along on that
journey. But, you know, while they were building headsets and glasses, Silicon Valley started to make
huge advances in AI. Meta, in fairness, also made big investments in AI. They just didn't work out
as well. Right. And so now Zuckerberg is in a situation where he's really behind in
AI, and I think just having a very difficult time getting the company anywhere close to the frontier.
So, I mean, look, if you look at most of the numbers that investors care about, meta is still
doing just fine. But I do think you're starting to see some cracks in the armor there.
And the next couple of years, like there are scenarios where it just goes pretty badly for them.
What's their case on AI? Like, what do they think their competitive advantage is in this field
with all these other AI giants?
I mean, it's so grim, Jon. I mean, like, the true vision for, like, an AI version of Meta is that, again, using all the tricks we've just been talking about to understand what are your exact particular interests, they're going to use the models they have to generate synthetic content that keeps you looking at the glass rectangle as long as they can. So, you know, this is a company that, to the extent it had any social mission at all, it was to, like, connect human beings. That has been thrown out the window, because they now want
to connect you with personalized slop. And, like, I'm not even exaggerating. Like, this just is the vision
of the company now. Yeah. Yeah, they're connecting us just with robots. Not even robots now.
I mean, we just spent this whole conversation talking about sort of the harms of the algorithmic
feed internet. Is there any reason to think that the AI internet will be better for people?
Do we think we're just going to have the same conversation in five years about chatbots and
AI agents? Well, you know, there was an interesting study this week that said that large
language models generally do a better job of connecting people to expertise, right?
Like, the big language models, they're less likely to guide you to, I don't know,
you know, Breitbart and Gateway Pundit.
Like they'll tell you something that actually happened.
So that's a good thing.
But on the whole, I'm basically just as worried about the AI era, if not more so,
because we've already seen how hypnotic these chatbots can be for some people.
I get emails every day from people who think that they've woken up Claude or
ChatGPT. And some of these people have really terrible outcomes, right? So my fear is, particularly, again,
for the young folks whose brains haven't fully developed, there are so many reasons, like,
it's very hard to be a teenager. And it's just so easy for me to imagine a generation getting
addicted to these chatbots that never really push back on them, that always tell them
they're doing great, they look good, you know. And I just think it's going to be a big problem.
Yeah, it reminds me of sort of the first wave of concern about social media or at least, you know,
a couple years ago was like the misinformation and it's going to push us into like political
bubbles and all that. And that was, like, the first wave. And there could be some of that with
AI. Like, when I look at Grok, I certainly see, like, you've got Elon Musk, you know, doing his
own thing with a biased LLM. And I guess other companies could do that as well. But I am more concerned
about what the second wave of concern was with the social media companies,
which is people spending all day long just hooked to
AI that is already sycophantic now because it wants to keep you on the platform,
because that's how they make money. Right. You know, there's a company, Character AI,
maybe you're familiar with it, and they came along and they said, we're going to let you create a chatbot
out of any fictional character that you can imagine. It started to get some momentum. And so Zuckerberg
said like, oh, we just need to do that. And so this now exists on meta's platforms. You can connect with any
number of chatbots. There was one they got in trouble for named Nasty Nancy, who I guess was sort of a stepmom who
did things she shouldn't. But yeah, that's kind of the present of Meta. So yeah, you can
imagine what that's going to look like in two years as the models improve. The last thing is,
like, there seems to be this growing gap between people in Silicon Valley in the tech world
who use AI and are saying like, oh, you know, it's here, the future is here, you wouldn't
believe what you can do with this. And then everyone else who either isn't using AI, or who is just,
like, you know, asking an LLM some basic research questions. Can you talk about that gap
and sort of like what the people who use AI all the time and are very proficient with AI are,
like, why they're so excited or why they've been so compelled by this? My view of the gap is that
it's really about the folks outside of the bubble, not wanting to believe what the folks inside
the bubble are saying. And I think they have a lot of really legitimate reasons for that, right?
what are the folks inside the bubble saying? We're creating an existential threat to humanity. It's
probably going to take your job. It requires the largest energy and infrastructure build out in the
history of America. And that might wind up in your backyard and raise your electricity prices.
Of course, Americans do not want to have that vision come true. I think the AI industry is doing
a really bad job selling itself in that way. I think what the technologists would say is like,
look, whether you want to believe it or not, we actually do basically have the recipe figured
out. We know we can just keep pouring more data and compute into these systems and the amount of
intelligence that we have is going to scale. And so that just is going to create huge consequences for all
of us. So to me, it's really not about like who is right and who is wrong. It's about like,
what do you want to believe? Yeah. And that it's like if the future is here and this is happening,
then like what is the best way to adapt in a way so that we don't find ourselves in a really bad
situation. Yeah, and I mean, one of my big critiques of the AI industry is it's just like so
anti-democratic, right? It's like, I mean, you know, a big criticism that it justly gets is like
nobody asked for this, right? Like, people aren't asking for their jobs to be taken away and for
all of the rest. So I wish we would bring more kind of democratic governance to these systems.
Agree, agree. Casey Newton, thanks for jumping on Offline and helping us get smarter on this.
It was my pleasure, John.
All right, bye, Casey.
Bye.
Offline is a Crooked Media production.
It's written and hosted by me, Jon Favreau.
It's produced by Emma Ilich Frank.
Austin Fisher is our senior producer,
and Anisha Banerjee is our associate producer.
Audio support from Charlotte Landis.
Adrian Hill is our head of news and politics.
Matt DeGroate is our VP of production.
Jordan Katz and Kenny Siegel take care of our music.
Thanks to DeLan Villanueva, Eric Schute and our digital team
who film and share our episodes as videos every week.
Our production staff is proudly unionized with the Writers Guild of America East.
