On with Kara Swisher - Social Media’s Original Gatekeepers On Moderation’s Rise And Fall
Episode Date: January 27, 2025. Since the inception of social media, content moderation has been hotly debated by CEOs, politicians, and, of course, among the gatekeepers themselves: the trust and safety officers. And it's been a roller coaster ride — from an early hands-off approach, to bans and oversight boards, to the current rollback and "community notes" we're seeing from big guns like Meta, X, and YouTube. So how do the folks who wrote the early rules of the road look at what's happening now in content moderation? And what impact will it have on the trust and safety of the platforms over the long term? This week, Kara speaks with Del Harvey, former Twitter VP of trust and safety (2014-2021); Dave Willner, former head of content policy at Facebook (2010-2013); and Nicole Wong, a First Amendment lawyer, former VP and deputy general counsel at Google (2004-2011), Twitter's legal director of product (2012-2013), and deputy chief technology officer during the Obama administration (2013-2014). Questions? Comments? Email us at on@voxmedia.com or find us on Instagram and TikTok @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Elon's losing his fucking mind online.
It's like, like really today.
Hi everyone from New York Magazine and the Vox Media Podcast Network.
This is On with Kara Swisher and I'm Kara Swisher.
My guests today are amazing.
Del Harvey, Dave Willner and Nicole Wong, three of the original content policy and trust
and safety people on the internet.
Del was the 25th employee at Twitter.
She started in 2008 and eventually became head of trust and safety before leaving in
2021.
Dave worked at Facebook from 2008 to 2013, eventually becoming head of content policy, and he wrote
the internal content rules that became Facebook's first published community standards.
Nicole is a First Amendment lawyer who worked as VP and Deputy General Counsel at Google,
Twitter's legal director for products, and Deputy Chief Technology Officer during the
Obama administration.
These three were absolutely key in designing safety and content policies at social media
under very difficult circumstances, but it's a hugely influential, mostly invisible job
that affects pretty much everyone who uses the internet and a lot of people who don't.
But their efforts to make the internet safer and give it some guardrails are being unwound
by people like President Trump, Elon Musk, and Mark Zuckerberg.
So this is a perfect time to go back and look at the history of trust and safety and content
moderation.
I'm very excited to talk to these three particular people because despite the idiocy of Elon
Musk and Mark Zuckerberg, there are thoughtful people thinking through these incredibly difficult
issues, not making them partisan, not reducing them and making them seem silly.
They're not yelling censorship.
They're not yelling about the First Amendment, which they don't know anything about. These are hard issues and they treat them like
hard and complex issues like adults do. Others, total toddlers having tantrums. That's the way I
say it. Anyway, our expert question comes from Nina Jankowicz, a disinformation researcher and
the CEO of the American Sunlight Project. She herself has had a lot of experience with disinformation, including being attacked unnecessarily
and unfairly. So stick around.
Thumbtack presents the ins and outs of caring for your home. Out. Procrastination, putting it off, kicking the can down the road.
In.
Plans and guides that make it easy to get home projects done.
Out.
Carpet in the bathroom.
Like why?
In.
Knowing what to do, when to do it, and who to hire.
Start caring for your home with confidence.
Download Thumbtack today.
This episode is brought to you by Samsung Galaxy.
Ever captured a great night video, only for it to be ruined by that one noisy talker?
With audio erase on the new Samsung Galaxy S25 Ultra, you can reduce or remove unwanted noise
and relive your favorite moments without the distractions.
And that's not all.
New Galaxy AI features like NowBrief
will give you personalized insights based on your day schedule
so that you're prepared no matter what.
Pre-order the Samsung Galaxy S25 Ultra now at samsung.com.
With TD Direct Investing, new and existing clients
could get 1% cash back.
Great, that's 1% closer to being part of the 1%.
Maybe, but definitely 100% closer
to getting 1% cash back with TD Direct Investing.
Conditions apply, offer ends January 31st,
2025. Visit td.com slash DI offer to learn more.
Dave, Del, and Nicole, welcome and thanks for being on On. You three are some of my
favorite people to talk about this topic. I've talked to all of you over the years about
it. You helped pioneer trust and safety on social media
and created a field that hadn't existed before.
So I'm excited to have you together for this panel.
Thank you for coming.
Thank you.
Thanks for having us.
I can't actually remember
if we've actually all been on a panel together before.
I know, have you?
No, I don't think so.
I don't think so.
Well, here we go, see?
History is made.
So let's start with a quick rundown where things stand today and then we'll go back
to the beginning and figure out how we got here.
Mark Zuckerberg recently announced that Meta is getting rid of fact checkers, replacing
them with community notes.
I have nothing against community notes, but they always seem to be shifting around in
all their answers.
They also loosened rules around certain kinds of hate speech, including hate speech
aimed at LGBTQ+ people and immigrants.
And they're quietly getting rid of
their process for identifying disinformation.
I'd love to get everyone's reaction to this move,
starting with Dave, since you ran
content policy at Facebook until 2013.
Yeah. There's a few different things.
I share your appreciation for community notes as an approach, and I think in a lot of ways, the fact-checking part of this got front-loaded in how it was all reported.
I honestly think that's a bit of a distraction from a bunch of the other parts that you touched on, which are far more important.
Three that seem particularly notable to me.
One, it came out that they are also turning off
the ranking algorithms around
misinformation or potential misinformation,
which is going to really change how
information flows through the system.
Explain what that is.
They've historically tried to detect
whether content might be
misinformation and added that into the mix of
how content shows up in people's feeds.
You can think of it
as changing the velocity with which certain kinds of information spreads through the network.
They're turning off the dampening on that. That feels to me like a much bigger deal in
terms of the amount of content it affects and the amount of views that it affects than
whether or not fact checks are appended to a relatively small number of stories because of the scalability of the process. So just on the truth question, that feels like the
much more significant change, if a little bit harder to understand from the outside.
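The dampening Dave describes can be sketched in a few lines. To be clear, this is a hypothetical illustration, not Meta's actual ranking code; the function name, the linear dampening formula, and the score scale are all invented for the example.

```python
# Hypothetical sketch of the "dampening" described above: a classifier's
# estimate that a post is misinformation is folded into feed ranking so
# flagged posts spread more slowly. Turning dampening off restores full
# distribution regardless of the classifier's verdict.

def rank_score(base_engagement: float,
               misinfo_probability: float,
               dampening_on: bool = True) -> float:
    """Return a feed-ranking score for one post.

    base_engagement: score from ordinary engagement signals
        (likes, shares, recency).
    misinfo_probability: classifier estimate in [0, 1] that the
        post is misinformation.
    dampening_on: when False (the change being discussed), the
        misinformation signal no longer reduces distribution.
    """
    if not dampening_on:
        return base_engagement
    # Scale distribution down in proportion to the classifier's
    # confidence that the post is misinformation.
    return base_engagement * (1.0 - misinfo_probability)

# Same likely-false, high-engagement post under both regimes:
with_dampening = rank_score(100.0, misinfo_probability=0.8)
without_dampening = rank_score(100.0, misinfo_probability=0.8,
                               dampening_on=False)
```

The point of the toy model is the one Dave makes: the change affects the velocity of every scored post in the feed, not just the small set of stories that ever received a visible fact-check label.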
Even on top of that though, the changes they made around hate speech, and there's two
coupled ones, I think are pretty significant and quite worrisome. So one, they're moving
away from proactive attempts to detect whether or not things might be hate speech just across
the board. They seem to be turning down those classification systems. They're justifying
that by saying it's going to lead to fewer false positives. That is true, right? If you
stop looking, you will make fewer over removals.
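The trade-off Dave concedes here is the classic precision/recall tension: raising a classifier's action threshold (or turning it off) cuts false positives only because it also stops catching true positives. A toy sketch, with made-up classifier scores and threshold values, makes this mechanical:

```python
# Toy illustration of why "stop looking" reduces over-removals while
# missing real violations. Scores and labels are fabricated examples,
# not data from any real moderation system.

# (score the classifier gave, whether the post actually violates policy)
posts = [(0.95, True), (0.80, True), (0.75, False),
         (0.60, True), (0.40, False), (0.10, False)]

def outcomes(threshold: float):
    """Count false positives (over-removals) and missed violations
    when acting on every post scored at or above the threshold."""
    false_pos = sum(1 for score, bad in posts
                    if score >= threshold and not bad)
    missed = sum(1 for score, bad in posts
                 if score < threshold and bad)
    return false_pos, missed

proactive = outcomes(0.5)   # aggressive enforcement: some over-removal
hands_off = outcomes(1.1)   # effectively off: zero false positives,
                            # every real violation slips through
```

With the fabricated numbers above, the aggressive setting wrongly removes one benign post but catches every violation, while the hands-off setting removes nothing and misses all three violations, which is exactly the "fewer false positives" framing Dave is unpacking.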
But it also means that particularly in more private spaces where folks are aligned around
particular sets of values that are maybe not so awesome, there's not going to be really
any reporting happening out of those spaces because it's a group of people who already
agree with that content talking to themselves. You can make arguments, what's the harm there? It's a group of people talking to themselves, but
groups of people on Facebook talking to themselves sometimes storm the Capitol. So there are
real harms that emerge not just from the speech. They also made a number of changes to the actual hate speech policies themselves and very, very surgically, frankly, carved out the ability to attack women, gay people, trans people, and immigrants in ways that you are explicitly not allowed to attack other protected categories, and in ways that allow more speech than the company has really ever allowed at any point where it had a formalized set of rules.
Right, like Christians. You can't attack Christians.
Nicole?
Yeah, there's so much to dig into on this change.
High level, it struck me, the fact-checking thing, I think, is somewhat of a red herring,
because it's such a small part of the ecosystem that it's looking at.
So the thing you take from it is, who is the audience they're talking to?
If you read Joel Kaplan's blog post, he adopts the language of censorship around content
moderation that I have to assume is deliberate.
Front loading all of this, we're not going to fact check you.
We're going to let you say the same types of things you get to say on TV and on the floor of Congress, that has
an audience.
And so to me, a lot of this is about who they've decided to try to appease, right?
That is a huge amount of it.
Absolutely.
The other thing that struck me is the higher-level changes that they are making, which I think are more destructive than the removal of the fact-checking: the basic refusal to throttle certain types of content, and the targeted nature of the content that they're going to let loose.
Last time I was with you, Kara, we talked about like, what are the pillars of design?
We talked about architecture.
Yeah.
Right. What are the pillars of design for social media?
Personalization, engagement, and speed.
They are releasing their hold on that personalization.
They explicitly say, we're going to allow more personalized political content so you
can be safe in your own bubble.
We're going to speed it up.
We're not going to stop the false statements
and other vitriolic statements that we have in the past, and we're going to let you go
on it. They are picking on all three of those pillars for what we know is rocket fuel for
the worst type of content. And you cannot believe that that's not deliberate.
Right. It's a design decision, is what you're saying.
Speaking of that, let's talk about other platforms.
I often say that X has turned into a Nazi porn bar, and today it really is.
I have to say, Elon's really trotting out all the Nazi references today, more of them.
He's doubling down on his behavior.
I think he's trolling us with these fascist salutes and now he's been
tweeting Nazi-related puns over on X. There's obviously not anything
happening there in terms of trust and safety. So what's the prevailing thinking
on content moderation today? Let's start with you, Del, since you were the original
Twitter person here and then Nicole and Dave. I mean, I think that in a lot of ways,
it depends on the platform.
It's worth a little bit of a "not all platforms" caveat here.
Right.
Because yes, you have a couple of
really big ones that are doing some really odd things. And also, you have a lot that aren't doing
the same sort of extremist behavior. I think that there is still very much value being ascribed to trust and safety. I think we are seeing in some part a shift toward recognizing that trust and safety is more than just content moderation.
And I think that's one of the most important learnings that hopefully people can take away
from everything that's happening in this space, which is trust and safety starts when you're
beginning to conceptualize what your product is.
You can design it safely. And I think that what we're seeing right now
with the guardrails sort of being pulled away
from this information ranking,
all this content that we know is extremely explicit, that drives extreme amounts of engagement.
Like that removal of guardrails on a couple of companies is making
a huge like fiery scene over here. And then there's a whole bunch more companies that are
trekking on and I think trying to stay out of the crossfire.
Such as? If you could really quickly.
Well, I think you know the two big companies that are currently spiraling. And then you've got
all the others, right? You've got Reddit, you've got,
you know, Pinterest, you've got all the different federated sites, all these different communities
that are still soldiering on. Right. Nicole?
So, Del said something which I think is so important, which is content moderation comes at the end in a bunch of ways, right, where you have people who are sort of outside what
you want your product to be.
So the first choices you're making about content are really about what is the site you're on.
And so when you think about what are the other platforms that are not getting into the kind
of hot water we see, it's not just about their scale. A lot of it is about their focus as a
site. So LinkedIn is a professional site and you generally see them having fewer controversies
because if you're not talking about professional information, you're not on the right place,
right? That's not your audience. Pinterest to me is kind of the same thing, right?
Like you're here to get inspired about whatever it is
that inspires you, housing goods, shoes, fashion,
whatever it is.
If you're not talking about that,
then you should be somewhere else.
I think that a bunch of what we see for the platforms
that are having trouble with this,
are the platforms that deliberately went out there and said,
I wanna be everything to everyone.
And it's really tough to sort of manage that playground.
Right.
Right.
And so that to me is a bunch about the content moderation.
I think what we've seen certainly since, well, you keep trotting me out here like the historical
artifact that I am.
Yeah, we're going to get to the early days in a second. Right. But like, the tools that they have and the professionalism of the teams that are
doing the work are so much better than a decade or two ago, so I think there's a
lot that's happening with content moderation. What I think is less clear is how a platform positions itself and designs itself to be resistant to
some of the worst parts of the content ecosystem.
I think you're absolutely right.
The everything store tends to be a problem.
There's porn and Twinkies.
You know what I mean?
Exactly.
Like you have a lot going on.
Exactly.
And if you had your aisles clearly separated and you could be aware that you were walking into the Twinkie
aisle or into the porn aisle and you were of age to access those things, that might be one situation,
but that's not the situation. That's correct. The Twinkies are mixed in with the porn. I got it.
They are. And in an unfortunate way. Sometimes it is.
All right, Dave. But again, you need those signs.
Yeah, Dave?
I am maybe a bit of a fatalist, but in a sort of positive way in that I think a lot of the
need and origin for trust and safety arises from people using these services and getting
into conflict and demanding resolutions to the conflicts they have.
And that's not going to stop being a thing
because people are not going to stop getting
into silly fights on the internet
or much more serious ones, depending.
And you are sort of driven, frankly, by customer demand,
famously advertiser demand,
but even just basic user demand
to create some amount of order,
even if it's just on questions of spam
and CSAM and whatever.
And so, to me, trust and safety, it waxes and wanes in the face of all of these things,
but the overall trajectory has been towards more professionalization, more people doing
this work.
It's not even clear to me from Facebook's announcements if there are force reduction
implications.
Some of the stuff they said, I mean, to be clear, I'm not a big fan of these changes.
But they did talk about like adding more folks
to deal with appeals, adding more folks to do cross checks,
which like great, cool.
And don't seem employment negative.
So there's a little bit of a like the meta narrative
of oh, crisis and moderation, trust and safety
is going away.
I think on some level is maybe what they want.
Which they want the Trump administration to think they're really killing all these people.
Absolutely.
Right.
All right.
So let's go back to the aughts because you guys are early days people and so am I.
We all go way back to the 90s when AOL had volunteer community leaders that moderated
chat rooms and message boards.
We'll save that for another episode.
But Nicole, you were a VP and Deputy General Counsel at Google when they bought YouTube in 2006.
Saddam Hussein was executed that year and then two related videos were published on
the platform, one of his hanging and another of his corpse, and you had to decide what
to do with them.
This is not something you ever thought you'd have to do.
And I remember being in meetings at YouTube and different places about this, like, what
do we do now?
We thought it was all cats and it's not kind of thing.
So walk us through your deliberative process and talk about the principles you used at
the beginning as a First Amendment lawyer working for a private company.
Wow, you're taking me way back.
Way back, yeah.
I want people to understand that this was, from the get-go, a problem.
Yeah, yeah.
No, it absolutely was.
And luckily, we had a little bit more time, right?
A, we had the grace of being sort of the cool new kids, and B, because it was so new, there
was a little bit of buffer to make hard decisions.
So what I recall from that was at the time of Saddam Hussein's execution, remember he had
been captured, pulled out of a hole, and then executed by hanging.
And there were two videos, one of which was of the execution, the other was of his corpse.
And the question was, do these violate our policies around graphic violent content? Or is it news?
Yeah. Well, was it news or was it historically important?
And so my recollection of it is we had exceptions for content that might be
violative as in violent, but had some historical significance.
There were others like educational significance, artistic significance, that
sort of thing.
And as I recall it, the call that I made was the actual execution was a historical moment
and would have implications for the study of the period later on.
But the video of his body seemed to be gratuitous. And so once you know
he's been executed by, in a certain manner, by certain people in a certain context, like
what does the body do in terms of historical significance? And so we took down the one
of the corpse, we kept the one of the execution. I was so much less professional than either Del or Dave's organizations at the time.
That was Nicole's, like, here's my thought, and here's how we're going to make it stand.
But that was the decision at the time.
The decision at the time.
Really difficult.
That's something you'd never thought you'd have to do.
Now, totally difficult, presumably.
I wouldn't know what to do.
Del, Twitter initially took a very public ideological stance in favor of free speech.
It did pay off, though, with a lot of press, and the press dubbed the 2011 Arab Spring the
Twitter revolution.
In the middle of the Arab Spring, the general manager of Twitter in the UK famously called it
the free speech wing of the free speech party.
It should be noted that at the time, free speech was generally considered to be more of a left-wing ideology, in a weird way.
The platform obviously underwent multiple transformations, and we'll get to that later.
But how has your philosophy about trust and safety changed over that time?
And talk about what you were thinking then.
I mean, the very first thing that we started with in terms of policies, because there really
was nothing when I showed up.
The first thing I was assigned was like, can you come up with
a policy for spam?
Because every now and
then people are encountering spam.
And we don't ever think it'll be
a big problem because you can
choose who you follow.
But we still think we should have
a policy around it.
And I was like, it'll be a problem.
Yes, I will make you a policy.
And then after that it was copyright and trademark
and making sure that we had a relationship with the National Center for Missing and Exploited
Children and all of the sorts of like you get your initial ducks in a row of these are
these core functions that you need to make sure that you have in place to give people
a tolerable user experience.
Like all of those are things where people have expressed needs
and strong sentiments in those areas.
So you start with those.
And then we started expanding from there.
A huge challenge for years was that we only had two options.
We could leave it alone or we could suspend the whole account, which is a terrible series
of options.
You couldn't take it down.
You couldn't just remove it.
Right.
You couldn't take down just the content.
It had to be the whole account.
Once we added on the ability to just do it to a single piece of content, that was such
an exciting day for us.
And the advances since then, there are so many possible things
you can do now in trust and safety that just weren't even things we could imagine 10 years
ago, I would say even.
Dave, in 2009, five years after Facebook was founded, the same year the company first published
its community standards, a controversy erupted over Holocaust denial groups on the platform.
At the time, you defended Facebook's position to allow these groups and said it was a moral
stance that drew on the same principles of free speech found in the Constitution.
Years later, Mark said in an interview with me that Holocaust deniers don't mean to lie, though he eventually reversed course. He had a little different take than you had, from what I could glean.
I don't know what he was saying, I'll be honest. I thought it was muddy and ridiculous and ill-informed.
But what's your stance today?
And how has your thinking evolved? Talk a little bit about that, because you can see making an argument for it. It's like the Hyde Park
example: let them sit on the corner and yell. Yeah.
So the initial stance on Holocaust denial, when we took it, was downstream of an intuition that we frankly weren't capable of figuring
out how to reliably police what was true.
I think in some ways that has borne out.
I don't know that the attempts at that have gone super well.
That is true.
So, I think the sort of intuition that gave rise to the stance was right, but there are
multiple ways of getting at what is problematic about that speech that don't rest solely on the fact
that it is false and sort of commit you to being the everything
fact-checking machine, which we were just, like, deeply aware.
Well, I mean, we were like, there were like 250 people and most of us just graduated from college.
We were smart enough to know that we just couldn't.
So we had to adopt a we can't or we won't because we couldn't.
Very good point.
I will say over the course of my time there, and particularly since I've left, my thinking
on this has been influenced a lot by a woman named Susan Benesch, who runs something
called the Dangerous Speech Project, which studies the rhetorical precursors to genocides and
intercommunity violence. And has done a really good job of providing really clear explicit
criteria for the kinds of rhetorical moves that precede that violence in a way that to me was like usable and specific such that you could
turn it into an actual set of standards you could hope to scale, which it's I guess been a little
implicit in some of what I've said, but my obsession from very early on and in some ways
still is, is this question of, okay, we've got a stance, but can we actually do the stance? Because if we can't do it, it's in some sense misleading to take it, right?
I would also chime in and say that
while I am in strong agreement
that that is not the way to do it,
that if you can't enforce something,
you shouldn't have it as a policy.
There has also been, for any number of years, any number of attempts to solve product problems
by saying, we're going to write a policy to fix that, which is, quite frankly, I'm impressed
that you managed to get them to not say, well, we're going to do it anyway.
Yeah.
Yeah. Go ahead, Dave.
Yeah, no, that's fair. No, that's totally fair.
And there was, I mean, I think Zuck's public sort of stance on founding the company because
of the Iraq War seems a little bit revisionist to me.
I wasn't there, but that wasn't what I heard.
But it is true that it was founded in the shadow of the Iraq War.
And to your point about sort of freedom of expression being a liberal value, there was definitely a sort of punk rock,
American idiot vibe around being asked to take things down. But also like, I don't know,
I was a child and we didn't know what we were doing. And I have learned several things over
the course of the last 20 years. Which Zuckerberg would never admit. He would never admit he didn't know what he was doing.
That is something he would never come out of his mouth.
But, I would just say, you know,
the company was founded on rating girls' hotness in college.
But okay, fine, Iraq War, we'll go with Iraq War.
Masculine energy, I think that's what they call that.
That's right, we're back to the same place.
Everyone's like, are you surprised?
I was like, no, this is what he did.
He's a deeply insecure young man who became
a deeply insecure 40-year-old.
But when you talk about the moral stance,
it was the idea that we should be
able to tolerate negative speech, which
is a long-held American thing.
Yes.
The Nazis and Skokie, you know.
But it turns into something where people game the system
and allow what you were talking
about, which is the precursors to actual violence, where speech is the precursor.
Yes.
Yes.
That's absolutely right.
Some of that was literally a question of the academic work happening or us becoming aware
of it, some combination of it happening and us becoming aware of it to have a framework
where we could really figure out, okay, if you're comparing. This is all going to sound
kind of obvious now, because it's one of those ideas that when you hear
it, it's obviously correct. But there are these sort of rhetorical moves you can make
that dehumanize people and serve to authorize violence against them, not by directly inciting
violence or calling for violence or threatening violence, but by implying that they are less than human, that they are other,
that they are filthy, that they are a threat, that they are liars about their own persecution
that serve to make violent action okay.
I think we're seeing some of that now.
And to circle back to your first question, part of the reason I found the recent changes so disturbing
is they are designed to carve out things like claims that people are mentally ill, which is right down the middle of
dehumanizing speech that obviously fits into this category, or using the word "it" for trans people. Yeah.
We'll be back in a minute
Support for this show comes from NerdWallet. Listeners, a new year is finally here and if you're anything like me you've got a lot on your plate. New habits
to build, travel plans to make, recipes to perfect. Good thing our sponsor NerdWallet
is here to take one thing off your plate. Finding the best financial products.
Introducing NerdWallet's Best of awards list, your shortcut to the best credit cards, savings
accounts and more.
The nerds at NerdWallet have done the work for you, researching and reviewing over 1,100
financial products to bring you only the best of the best.
Looking for a balance transfer card with 0% APR?
They've got a winner for that.
How about a bank account with a top rate to hit your savings goals?
They've got a winner for that too.
Now you can know you're getting the best financial products for your specific needs
without having to do all that research by yourself.
So let NerdWallet do the heavy lifting for your finances this year and head over to their 2025 Best of Awards at nerdwallet.com slash awards
to find the best financial products today.
What's up, Spotify?
This is Javi.
I remember this one time we're on tour.
We didn't have any guitar picks
and we didn't have time to go to the store.
So we placed an order on Prime
and it got there the next day, ready for the show.
Whatever you're into, it's on Prime.
Breaking news happens anywhere, anytime. This situation has changed very quickly. Helping make sense of the world when it matters most.
Stay in the know.
Download the free CBC News app or visit cbcnews.ca.
So let's go on to our favorite time, the Gamergate controversy.
It was a harassment campaign.
I know, it's just been one panoply of horror.
Aimed at women in the video game industry that included doxxing and death threats.
It happened in 2014, I recall it extremely well.
And in some ways, it was the birth of a loose movement of very angry and very online young
men that morphed into the alt-right.
Del, Twitter got a lot of negative press from Gamergate, because Twitter, along
with Reddit and 4chan, had relatively little content moderation and there was harassment
on the site.
Walk us through the controversy and how the aftermath led to changes in how you approach
trust and safety.
You sort of went from being that very free speech heavy company to eventually focusing
on brand safety, which is important to product, as you just noted.
I would say that what you saw was reflective of, in many ways, also the company's investment
in trust and safety and whether or not there were the tools and actual functionalities
to do certain jobs because the same way that certain policies may have existed at Facebook
because there was no way to operationalize them, similar ones certainly existed at Twitter
in terms of,
it sure would be nice if we could do X,
but there's no feasible way to do that,
and so we can't.
If we try to, we're going to set
ourselves and people up for failure on it.
I think that what you've seen is trust and safety,
and this goes back to what Nicole mentioned earlier,
like content moderation,
you're late in the process
when it's gotten to content moderation.
Ideally, once you're at content moderation,
someone has generally experienced a harm,
and you're trying to mitigate that harm.
Whereas if we look at things like proactive monitoring, or designing your product not to have some of these vectors for abuse, or even educational inputs for people about what to expect, you know, hey, it's likely a scam if this, all of those things come before content moderation and have a much higher likelihood of impact and ripple effect.
I think what you have seen is a slowly growing awareness that the earlier we can intervene, the earlier we can build in these components.
There's this slow growth of, oh, we should do more of that.
And I think that's the biggest shift since sort of the beginning of Gamergate is more
of a panoply of options for actioning along with more cognizance around needing to figure
out the point of origin.
Right, consequences, anticipating consequences.
Let's talk about that.
In 2017, the Myanmar army committed genocide against the Muslim Rohingya. They raped and killed thousands and drove 700,000 ethnic Rohingya into neighboring Bangladesh. In the run-up to the genocide, and I was right in the middle of this, Facebook had amplified dehumanizing hate speech against the Rohingya. They did it despite the fact that Facebook had been implicated in anti-Muslim violence in Myanmar a few years before. This is when it really started to get noticed.
How much responsibility would you assign, and I'm going to start with you, Nicole, to a company like Facebook?
And what should have been done differently?
And Dave, if you want to jump in, you can too.
But when this happens, the direct responsibility was pretty clear.
But you could say, I'm a phone, and you can't blame a phone for people organizing, for example.
Yeah.
I mean, I want to go back a little bit to Gamergate, but I'll connect it up to the Rohingya part.
Because what I recall about Gamergate, and which I think changed the trajectory of some of the content policies that we ended
up doing is that academics like Danielle Citron connected the dots between harassment as a
suppression of speech, harassment not just being you're being mean to me, which has
always existed on the internet, but that it is an incursion on someone else's rights, and particularly those who
are least able to bear it, who have the weakest voices.
That to me was where Gamergate was like, oh, we actually should not just allow all the
speech because that speech is suppressing speech.
That connects for me into how we handle things like minority voices, like the Rohingya who
may not even be on the service, right, but are being harassed.
And so their rights in some ways are being taken away.
And Dave will be able to speak to this better about like how Facebook decided to
handle it.
Yeah.
I think that like a bunch of it has to do with the design of your ability to detect
and your policies about when you intervene.
And those are hard.
Those are always hard.
So because I wasn't at Facebook or WhatsApp, I don't actually know specifically the kind
of conversations they were having about how to balance out when you see the harm, whether you have the right detections for it, and
what is the correct position of the company to intervene in what may start as small private
conversations.
There was clearly a moment where it became broadcast, right? Where it became about propaganda and pushing people into a certain direction that was very,
very toxic and harmful and had terrible consequences on the ground.
And then the question is, what is a company sitting in the US?
What is their obligation to investigate, to get involved, to send in help?
Right.
So, Dave, you left Facebook in 2013, and then it moved to the 2016 election where Facebook
was pilloried for Cambridge Analytica, spreading fake news, the Russian propaganda, creating
news bubbles and media silos.
Talk a little bit about the difficulty of being the world's social media police, I guess,
is kind of what I'm thinking about.
And then of course, it got blamed for the election of Donald Trump.
That's where it led to.
What do you imagine is where you need to be then?
Yeah.
So, I mean, I think there's a lot of question in that question and in everything
we've said so far.
I think, I do think that platforms have a responsibility to intervene that arises out
of the fact that they have the ability to intervene.
And this is where the phone analogy falls down for me, right?
The technology that we're monitoring, yes, it is a communications technology. And also it does not work identically to all prior communications
technologies. And those design choices change in an almost like existential way. It's like,
well, too bad you're here now, figure out the meaning of your life. Like too bad you
have this product now. It creates responsibilities through its
design. And you sort of don't get to accept in my view, the upside of those design choices
from a sort of growth and monetization possibility point of view without inheriting some of the
downside, right? Like I think they're linked personally. I don't have like a formula or
that could become a regulation about how that responsibility should work,
but that is where I have netted out on this entire thing.
I do though think that returning to the point earlier
about general purpose communities,
that leaves you in a very difficult position
for sites that are aspiring to be an undifferentiated place
for everybody in the whole world to talk to each other.
We don't all agree as humans, and we don't agree to the level of real violence, right?
All over the world. And so the notion that, like, it is possibly the case that the building
of that kind of space is a little bit of a, at best you've
accepted the ring of power and now have to like go find a volcano to throw it into while
the ringwraiths try to kill you.
Like that might actually be the best you can end up with in that kind of a design choice.
Whereas if you are a Reddit or a Discord, everything has a context attached to it, which
narrows the problem to something that feels...
Manageable.
Well, more possible to not definitely end up hated by everyone.
Right.
Which I think is sort of what you're doomed to otherwise.
I also think, like, a bunch of the companies, the ones that I was at, right, were sort of like, it's the internet, anyone can access us, and we forgave ourselves for not having people on the ground who understood where we were because we're not offering advertising there. We have no people on the ground. Just because they pick it up doesn't make it our responsibility to serve them. There
was a bunch of that, which I think that the Rohingya moment changed that and said, like, actually,
the very fact that you are being accessed and you can see the
numbers, right, imposes the obligation on you.
Right. The algorithmic amplification is what turbocharges.
Absolutely.
Right. Okay. So, Del, one of the things in the aftermath of this, including the election and then the COVID pandemic, when Biden said Facebook was killing people by allowing vaccine misinformation, though he later walked it back.
Trump himself, obviously, was a fountain of disinformation.
There was a period I think we can call peak content moderation, right?
And some of Trump's tweets got flagged, the New York Post reporting on Hunter Biden was
suppressed.
And after January 6, Trump got kicked off of social media platforms.
Del, am I correct, you were the one who actually
kicked him off, is that you or all of you as a group?
Yeah, is it?
Well, it was a group decision.
I didn't go out there and just yolo my way into the day.
But it was something where we looked at,
there were these violations on the sixth,
where we said if there are any additional violations of any kind, we're going to treat that as suspension worthy.
And on the 8th, a couple days later, there were a series of tweets that ended in what was taken as a dog whistle by a number of Trump's followers at the time: I will not be attending the inauguration.
And that sort of, here's a target that I won't be at, was how it was interpreted by any number of people responding to him. And I think we actually published the underlying thinking. That was the bridge too far.
Bridge too far. You know, I had been saying he keeps doing it. When are you going to,
and I said it to Facebook, if he keeps violating it and you don't do anything, why do you have
a law in the first place, essentially?
And one of the things I wrote in October of 2019, I wrote this, which was something interesting,
and I'm going to read it to you.
It so happens that in recent weeks, including at a fancy-pants Washington dinner party this past week, I have been testing my companions with a hypothetical scenario.
My premise has been to ask what Twitter management should do if Mr. Trump
loses the 2020 election and tweets inaccurately the next day that there had been widespread fraud
and moreover that people should rise up in armed insurrection to keep him in office.
Most people I have posed this question to have the same response, throw Mr. Trump off Twitter
for inciting violence. A few said that he should only be temporarily suspended to quell any unrest.
Very few said he should be allowed to continue to use the service without repercussions if he was no longer president.
One high-level government official asked me what I would do.
My answer, I never would have let it get this bad to begin with.
Now, I wrote that in 2019 and I got a call from your bosses, Del, not you, saying I was
irresponsible for even imagining that and how dare I, essentially.
But talk about how difficult it is to anticipate, even though I clearly did.
Well, I would note again, you didn't get the call from me.
You didn't get, you didn't call me.
You didn't.
It was one of, you know who it was.
Anyway.
I do know who it was. Anyway. And my point is, you know, I think you're looking at, by this point, we're already seeing
some ideological shifts in people's outlooks on how they wanted to handle content.
We're seeing pushback.
We saw pushback on labeling content as misinformation.
And in fact, part of the pushback we got at one point was when we were talking about how there's some misinformation that is actually so egregious that it merits removal, as opposed to simply labeling it as misinformation.
And that's because there's some types of misinformation that even if you label it,
this is misinformation, people are like, that proves it's true.
And it was really difficult to frame that in such a way, because there was this, well, why wouldn't they just believe the misinformation label?
And there were all these conversations where we're like,
but people aren't, people, like you might work that way,
but other people don't work that way.
Elon bought Twitter in October of 2022, and he quickly started reinstating people who had been banned: obviously Trump, also Andrew Tate, Laura Loomer, Nick Fuentes, white supremacists whose names we may not know.
One of the top priorities was reinstating the Babylon Bee, which started this whole
thing, a satirical Christian site.
It was taken off of Twitter when it misgendered Rachel Levine, who was then Assistant Secretary for Health, and called her man of the year.
The right wing obviously is obsessed with trans people, and they've done a very effective job of dehumanizing and scapegoating them.
But satire does have its place.
And I said before, I thought Twitter was heavy handed in this case.
Now, looking back, how do you think about Twitter's policies around something like that?
And did you expect there to be so much resistance, I guess, in that regard, given the topics
and Elon's obsession with trolling people?
I am perhaps not surprised by the degree of response.
And also, our policies existed and were clear, and the responses that tweet was getting were all further dehumanizing. Like, at one point the idea was, the best answer to bad speech is more good speech. The best answer to bad speech is not lots more bad speech agreeing with it.
Mm-hmm. That's a good point.
So when you have something that violates our policies, pretty clear cut, is doing so on
the heels of a lot of other people making the same joke and targeting this individual,
it turns into like, yeah, this is pretty clearly a policy violation.
We're going to take action on it.
Yes, that upsets some people.
And you know what?
I'm sorry that upsets you.
I think it probably upsets people who are trans more that you feel like they don't deserve
to exist.
Right, right.
But nonetheless, it led to Elon buying Twitter.
I think it's one of the biggest reasons.
He called me obsessively about it, I can tell you that.
A number of things he called me obsessively about.
This one really bothered him.
Looking back, it was the tip of the spear in the conservative fight against content moderation.
The GOP took back the House shortly after, as Jim Jordan began using the House Judiciary Committee to investigate the Biden administration's supposed censorship regime, the supposed anti-conservative censorship and bias in tech.
And Stephen Miller's
legal organization began suing disinformation researchers.
From a conservative point of view, content moderation was an attempt to impose progressive
values on Americans.
They think they're just undoing the damage.
Nicole, you worked in government. Putting aside the obviously bad-faith arguments, of which there are plenty,
Is there any point to be made here about these companies
which are private going too far?
Oh, there's so many points.
Let's start.
As you were sort of recounting that history,
it strikes me the acquisition of these platforms
by people like Elon Musk, this very sort of top-down drive of what is that platform for.
It strikes me that there has been a transition of believing when we started it these were
communication platforms that are intended to democratize the way that we communicate
with each other, to let small voices that were blocked out by mainstream media rise
so that we would hear from a wider
panoply of people and allow them to communicate with each other.
That's not what these policies are for right now.
These policies are about creating a bullhorn.
Who they are trying to attract to their services is very specific, and it is not about cross-communication
and global understanding.
It is about a propaganda machine.
And so to me, like, that is a really different goal, right?
And the policies just follow from that.
If we want the other internet that we started with, we have to change the goal.
That is a change of ownership, apparently.
So that leads to a question.
Each episode we get an expert to send us a question.
Let's hear this one.
Hi, I'm Nina Jankowicz, a disinformation researcher and the CEO of the American Sunlight Project, a nonprofit committed to increasing the cost of lies that undermine democracy. The big question I would ask is, with a consolidating
bro-ligarchy between tech execs and Trump in the US,
and online safety regulation on the rise in places like Europe, the UK, and Australia,
how are tech platforms going to reconcile the wildly different regulatory environments
around the world?
Dave, why don't you start with this one?
Nina obviously underwent a great deal of attacks and propaganda, largely unfair. But there's this idea of a consolidating bro-ligarchy, with owners who aren't going to give up these platforms by any means, and then you have, you know, online safety regulation elsewhere.
Yeah, I think we're in a very interesting situation where it seems to me, looking at them, that they don't know the answer to the question either, right? That question sort of presumed they had a plan. I'm not sure they do. You know, I don't know them at all, but it doesn't seem to me like Elon necessarily makes plans, and whatever Facebook's gambit is here seems to basically be a bet that maybe Trump will be mean to Europe for them, and hopefully then somehow they won't have to do this. Which feels, I don't know, I'm not convinced that the EU is going to think that's cool and totally go with it. But who knows? And
so it does feel a little bit like a bet on sort of actually splintering this further
and trying to use American economic power to put pressure on people to back off them.
At least that seems to be my view of Facebook's theory embedded in what they've done. I'm
not at all convinced that that's going to work
because this becomes a pretty core sovereignty
and power issue and linking it to government pressure
that way makes that actually more true.
And so, I don't know, maybe we see a splinter net,
maybe we see things increasingly blocked,
maybe we see the use of AI technologies
which I do think are going to change moderation
in ways that are going
to be somewhat helpful to the level of flexibility we have, end up with very different versions
of Facebook functionally being available or Twitter functionally being available to people
in different parts of the world.
I don't know.
I think it'll be some combination of those things.
That would just be a profoundly reckless way of understanding how they exist in the world
though, right?
These are companies that have people on the ground in these countries who are subject
to the laws of those countries, who have users on the ground.
It strikes me as enormously short-sighted about their ability to continue as a business if
they think they're going to blow off the rest of the world.
This is why, from the get-go, this set of announcements has felt weirdly panicky and irrational to me. And that's sort of why. I don't understand what the plan is here beyond, like, 2026.
We'll be back in a minute.
So, a couple more questions.
I recently interviewed Yann LeCun, Meta's chief AI scientist.
He says AI has made content moderation more effective, as you just said, Dave.
Del, do you agree? You've spoken about how trust and safety teams are perpetually under-resourced.
I know they are.
Do you think that AI gives them the tools to do their job better, assuming people running the platforms
want to effectively moderate content in the first place? And I know Mark went on to me 10 years ago about AI fixing everything, but go ahead.
Assuming that you are using AI to help with scale and you still have humans involved in the circuit to make sure that
it hasn't gone wildly awry.
Like yes, please.
Absolutely.
We have been begging for tools for years and AI is a tool like any other.
It depends on how you use it.
If you deploy it carelessly, then it's going to cause problems.
But a lot of what Dave has actually been working on is in this space.
Well, let me just say, Dave's been doing some really excellent work in this space, so I just want to shout him out.
Okay. So Dave,
generative AI is the new frontier when it comes to issues we've been talking about.
We want all the trust and safety in AI,
but it's hard to trust the technology that sometimes hallucinates.
And then there are other issues, like Character.AI, that have shown AI has the potential to be very unsafe.
I just recently interviewed the mother who alleges her teenage boy took his own life
after having started a secret relationship with an AI chatbot.
It's a very compelling story.
Dave, you worked on safety at OpenAI and Anthropic, and you're doing your own thing now.
What does safety look like for AI?
Can you go into it more?
Do you think it'll end up being safer or more dangerous and corrosive?
I mean, could it be more duration?
I don't know.
I was going to say, unfortunately, challenge accepted.
No.
Some parts of it are very similar and other parts of it are very different.
So the set of interventions you have around your AI chatbot are a superset of the ones you have for content
moderation. So you have monitoring of inputs, what people are writing to the chatbot or
people are trying to post. You have monitoring of outputs, like what the chatbot says back.
And there you have all the different ways of going about that, whether that's flagging
algorithmically or human intervention or a combination of those things. But you also in the context of the AI chat bots do have the ability to try to train the
models themselves to behave in more pro-social ways or more the way you want them to.
That's that woke AI Elon keeps talking about.
I'm teasing.
Well, it's any AI, right?
If you just-
From mean AI or racist AI or-
Sure.
I'm here to ruin all the fun. This is what I do professionally.
But you do have that level of intervention.
And if you think of the AI as a participant in a conversation,
like in a chatbot product, your alternative
is actually it's two users having that conversation,
and you don't have any say in what either of them
wants to try to do.
And so in some sense, at least in my view,
single person interactive chatbot services,
in theory, once everybody gets good at this
and there's a problem here of deploying the technology
before we've gotten good at it,
should be something that we can actually make more safe
because you have all the same points of intervention
plus other ones that are not perfect,
but add another sort of-
Add another layer of safety.
And add another layer of cheese to the Swiss cheese.
So I have two more very quick questions, one for Nicole and then one for all of you.
We talk most about YouTube, X (formerly Twitter), and Meta, since that's where the three of you worked, but TikTok is the elephant in the room.
It may or may not be banned.
I have some thoughts on that.
I'm not going to go into them, but it may or may not be used for Chinese propaganda.
Elon may or may not end up owning it. There's all kinds of ways. But I have thought that, and Trump just said it today, I said he's going to give it to Elon, and Trump just said, I'm thinking of giving it to Elon. Let's just say Elon does end up controlling TikTok. Nicole, game out any consequences for us if it happens. He obviously has links in China that are problematic, including his factories and his car sales, all kinds of relationships there, questions about his conflicts of interest.
Thoughts on where that's going?
TikTok is such a dumpster fire of an issue, both at a policy and a technical level.
I think there's nothing about his ownership of X that indicates it's going to
be a healthy environment. So to the extent we wanted to ban TikTok because we thought it would
be unhealthy for Americans to be on it, that doesn't strike me as it's going to get better
just because Elon has taken it over in the US. I'm probably going to get myself in so much trouble.
That's okay. I'm worse. I say worse.
I mean, the ban itself was so poorly executed and handled. If we want to solve foreign-owned apps on our phones as a security issue, let's have that conversation, but have it broadly, not just about TikTok. If we want to have a conversation about propaganda and misinformation spreading on social media, let's have that conversation, but not just about TikTok, right?
Like there's a whole bunch of ways we could try to tackle
the surveillance and collection of US persons information.
Let's pass federal comprehensive privacy law
and stop having this stupid conversation about TikTok.
So, to me, the TikTok thing, I don't know where it's going to end up, but we're not going to avoid the societal conversation we actually need to have, the one that keeps us safe.
All right.
Nonetheless, we're having it, unfortunately.
Elon Musk, Sundar Pichai, Mark Zuckerberg, Sam Altman, and TikTok CEO Shou Zi Chew were all honored guests at the inauguration, as was Tim Cook. TikTok explicitly thanked Trump for helping restore it, even though it's
not restored because Apple and Google are declining to let it be downloaded because they understand
there's a law and they need to follow it. And also the Supreme Court said so. Some users have
reported that TikTok is hiding anti-Trump content. We'll see if that's actually the case. Either way,
it raises the possibility that some of the most influential communications platforms that drive our culture are in the hands of oligarchs. They don't like that word; it hurts Mark's feelings.
I'm sorry, Mark, but that's what you are.
And they have aligned themselves with Trump.
What are the implications of this new power dynamic between a president like Trump and the social media platforms?
And what do you expect to see flip in the next few months and years?
So Del, you go first, then Dave, and then Nicole.
I think that we are most likely going to see
some period of time where everybody goes,
no, look, everything's fine still.
Everything's totally fine.
And then things are going to crash and burn.
How so?
It's going to start with all of a sudden
these marginalized groups don't have protections anymore,
and they start getting targeted more.
Maybe they try to do the right thing, counter with good speech or defending themselves or what have you.
But eventually, when the content that's attacking them keeps getting, heck, up-ranked even, surfaced algorithmically, they're going to stop pushing back.
They're going to leave.
They're going to go elsewhere.
Then they're going to essentially have had their speech chilled.
There are only so many people who they can appeal to in terms of this sort of pro-fascism, anti-woke, United States number
one opinion of things. And the EU is just not going to be chill with this at all. So
there's like multiple different ways that this could end up in a giant fireball,
but it feels like at least one of them is pretty inevitable. And then we will come in
and we will clean it up like we do, and we will go back to trying to make things right
again because that's what we do.
All right. Dave?
Yeah, I'd agree with that. And again, this gets to the sort of like, not acceptance, but fatalism about the sort
of journey of things.
I used to say to my teams at Airbnb that the question is not where we end up on this,
it's how stupid the journey has to be.
And I think we're a little bit doing that.
And that's not to dismiss it, because a lot of people are going to get hurt by the stupidity
of the journey, which is a tragedy.
But you noted in your wonderful thread on that.
But it is to say, don't despair,
because these pressures simply exist.
I think if you run these sorts of platforms,
there are a lot of decisions you don't really
get to make.
It just seems like you get to make them.
And then you encounter the forces that sort of press you towards particular directions, and you are either worn away or, like, reach a state of acceptance and understand the business you're actually in.
And sometimes, you know, we have to go on a finding out journey around this stuff.
I don't have a prediction about exactly how it falls apart.
I do think, it occurred to me earlier when Nicole was talking, that in some ways we're
seeing social media really become media.
One way this potentially develops is these are all cable networks now because they're
more broadcasty.
Very good point.
And you have the segregation into your MSNBC Bluesky, or CNN Threads for normies, and then X wants to be Fox News because they're the cool one that everybody loves using a lot. And you may see segregation in that regard, which I don't love,
but is like a way of resolving the social context problem in some ways, but it also makes these
things much more propagandistic. I do agree though with Dell that like the more extreme
edges of this, there is a conflation between the fact that Elon is more wealthy now because
buying Twitter and setting it on fire was strategically advantageous to his broader portfolio.
That's correct.
Which is different than like, did this work out well for Twitter as a product where the
answer is like very obviously no.
Yeah, no, he didn't care.
Right.
And it wasn't the goal.
So his broader strategy is successful.
But insofar as you view the platforms themselves, they created a vacuum, which created Threads and powered the rise of Bluesky. There was a homeostatic reaction, and it seems to be continuing, now starting to roll up some of Meta's products. So it's not the case there hasn't been backlash; it just hasn't resulted in a cathartic outcome in the bigger picture. As yet. That's the yet. Yeah, you're absolutely correct about why he bought it. Actually, Mark Cuban pointed this out to me the day he bought it. He
said, it's nothing to do with this platform.
It's everything to do with influence, which was interesting.
Nicole, finish up.
You're coming to me in a tough week, the first days of EOs. As for what is going to happen on social media, I think we've all been seeing it.
I sit on Bluesky, right? And so every time there's an X thing, there's a surge in the Bluesky numbers that comes.
So I think that it's likely we see people sort of dispersing to find what is the healthiest
place for them to be, where they can find their people and the conversations they want
to have. I worry that the platforms, none of them rise to the moment.
And what we end up doing is we sit in small text groups on Signal, which is candidly where I've been for the last six months. We make our world very small.
Setting aside the role of social media, I think there's a bigger problem with Trump and those closest to social media, and the so far not very distinguished work of the mainstream media to hold them accountable.
Right? So if we believe that social media is one place that people get their information, but
that amplification really happens when it hits mainstream media, we are not getting
what we need in terms of having a trustworthy source of information.
People are going to seek those trustworthy sources of information.
It may end up being that it has to be in our small text groups, because I'm not sure what the trajectory is of where we're going to find it.
But people are going to look for it.
And so the question is who's going to rise to the occasion for that.
Right. And it's also, you know, with all this data sort of sloshing everywhere,
I do think people are going to get smaller.
You're right. I think the dissipation is really important to think about.
And architecture is one. You don't realize the impact you had when you talked to me about architecture around these things.
How you make something is how people find it.
And I'm going to read you, actually, it's odd that you said that.
I just wrote an afterword to my book, Burn Book, and I'm quoting Paul Virilio, a French philosopher who talks about these things.
He was being interviewed, and I'll read this to you. I'd just like a very quick reaction on whether you think it's a good idea or a bad one.
Virilio once talked about technology embedded into our lives in a science fiction short story in which a camera has been invented which can be carried by flakes of snow. Cameras are inseminated into artificial snow which is dropped by planes, and when the snow falls, there are eyes everywhere. There is no blind spot left. The interviewer then asked the single best question I've ever heard, and I wish I had the talent to ask it of the many tech leaders I have known over three decades: But what shall we dream of when everything becomes visible? And from Virilio, the best answer: we'll dream of being blind.
It's not the worst idea.
Do you think it is?
What should we dream of when everything becomes visible in the way it has?
Each of you.
Last question.
Del?
I'll take a stab at it and say I would wish, once everything has become visible, to be able to identify those things that are meaningful.
Great answer.
Nicole?
I think that's such a terrific answer. I had a similar thought: sometimes seeing everything is overwhelming, right?
So you need to sift out the hate and the misinformation, find what makes things worthwhile and meaningful and permits progress, and distill that part of it.
Dave, last answer.
I mean, my flippant reaction is, we'll dream of going outside and touching real grass.
You're a grass toucher.
I knew it.
No, but in the sense that I don't think we are fitted for that world.
And so the dreams will be dreams of escape, whether that's withdrawing to smaller spaces,
or wishing that somehow the truth was less painful and was understood as meaningful,
or wishing to be invisible, which is where my mind immediately went when you asked the question.
It's going to be a dream of escape, because I don't think we're prepared for that much awareness.
That's absolutely true. Well, thank you for all your efforts in trying to help us get through that.
And I really appreciate everyone, each one of you and your thoughtfulness.
Sometimes tech leaders can seem so dumb, but the people who work for them are not.
The people who work for them think a lot, and think hard, about these issues.
So I wanted to shine a light on that and I appreciate it. Thank you so much.
Thank you.
Thank you.
Thank you so much.
On with Kara Swisher is produced by Cristian Castro Rossel, Kateri Yoakum, Jolie Myers,
Megan Burney, and Kaelyn Lynch. Nishat Kurwa is Vox Media's executive producer of audio.
Special thanks to Kate Gallagher. Our engineers are Rick Kwan and Fernando
Arruda. And our theme music is by Trackademics. If you're already following
the show, you must be chock full of masculine energy. If not, go outside and
touch some grass. Go wherever you listen to podcasts, search for On with Kara
Swisher and hit follow. Thanks for listening to On with Kara Swisher
from New York Magazine, the Vox Media Podcast Network,
and us.
We'll be back on Thursday with more.