On with Kara Swisher - Why Twitter’s Former Safety Chief Left Elon Musk
Episode Date: December 1, 2022
Elon Musk's Twitter is a spectacle, and the drama and meaning of the last six months is perhaps best unpacked by Yoel Roth. The company’s former head of trust and safety survived the early flood of firings and resignations — in fact, he was ascendant, in the early days of this new Twitter, and he became the face Musk presented to advertisers. Kara asks Roth whether he felt “used” by Musk and why — having been embraced by Elon’s inner circle — he ultimately decided to abandon Elon. Before the interview, Kara and Nayeema weigh in on Elon Musk’s seeming needling of Tim Cook, and they discuss whether Apple has too much power. Stay tuned until the end for Kara’s rant on this very topic. You can find Kara and Nayeema on Twitter @karaswisher and @nayeema. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Do you feel like your leads never lead anywhere?
And you're making content that no one sees?
And it takes forever to build a campaign?
Well, that's why we built HubSpot.
It's an AI-powered customer platform that builds campaigns for you,
tells you which leads are worth knowing,
and makes writing blogs, creating videos, and posting on social a breeze.
So now, it's easier than ever to be a marketer.
Get started at HubSpot.com slash marketers.
It's on!
Hi, everyone, from New York Magazine
and the Vox Media Podcast Network.
This is World Cup Tonight with 100% less soccer
or football, wherever you're from.
Just kidding.
This is On with Kara Swisher, and I'm Kara Swisher.
And I'm Naima Raza.
And yeah, we should never talk about sports.
We're not sports people, you and I.
No, we're not.
Let's not talk about it at all.
There's some people playing soccer somewhere in the world, and people seem excited about it.
Anyway, how are you?
How was your trip to Pakistan?
It was great. It was great. I'm done with the wedding. The wedding is over.
Good.
How are you?
Good, good. I just got back from Miami, where I interviewed today's guest.
Yes, Yoel Roth, who you interviewed at the Knight Foundation's Informed conference. It seemed
like a very good conference.
Yes, it was all sort of misinformation geeks or disinformation geeks.
You must have been a real celebrity.
I am.
That's like your target audience.
Yeah, I was. They were like, oh my God, it's you.
And I was like, really?
This is like, you know, low bar.
Low bar to have Kara Swisher be a star somewhere.
But yeah, it was interesting.
And this was the first interview Yoel did.
He was the global head of trust and safety at Twitter.
Until three weeks ago.
We talked about working there after Elon bought Twitter.
He did stay.
And what happens to content moderation
at Twitter in a post-Elon world. He'd written a little bit about that in a recent op-ed.
He did. And he kind of, that was an interesting argument because he was basically saying that
even as chief twit, there are limits to what Elon can do or undo. Because not just the power of
advertisers, which maybe Elon can get away from, but regulators. And the most effective,
the most powerful, he seemed to think, was that in a world of smartphones and app stores,
we all really live on Google and Apple's internet. Yes, exactly. And he was talking
about who the gatekeepers really are. And one of them is advertisers. They're a regulator.
Advertisers leave if it's an unsafe thing. And then, of course, there's regulators who don't
do anything. And then there is Apple. And you can add Google and Android in here. But really, Apple does set the tone for lots of things, including privacy,
where regulators fail. Yeah, this was interesting timing, because Elon kicked off a tweet storm
this week, kind of trying to drag Apple and its CEO, Tim Cook, over the power that the
phone maker has. He seemed to be in quite a tizzy. He was just tweeting away. Yeah, they had it all planned. Are you kidding? David Sacks, his minion,
was on TV. The whole thing was planned. It could be a planned tizzy. Tizzies have to be unplanned.
Every single thing he is doing is performative, but go ahead.
Well, reportedly, the issue is that Apple's 30% cut in the app store has created some problems
and more delays for the $8 subscription that Twitter wants to charge.
You're kidding.
Oh, the 30% thing we've all written about for years?
Give me a break.
Exactly, which he called the secret.
The secret tax.
It's no secret.
There was just a whole trial about it with Epic.
So wait, can you just—
I'll go through it.
Yeah, talk about what happened on Monday.
Elon just discovered that Apple's a big company, apparently.
It started on Monday afternoon with a tweet about how Apple pulled its advertising for Twitter, which it's allowed to do.
And which everyone did, by the way.
Yes, except that he linked it to free speech.
He said, Apple has mostly stopped advertising on Twitter, like everybody.
Do they hate free speech in America?
No, they hate your platform, I think.
And then he said, what's going on here, Tim Cook?
And in the next hour, he posts a Fortnite video inspired by 1984. Fortnite, of course, is by Epic, which has
been fighting with Apple in court, and has mostly been losing to Apple, actually. It's still going on.
He launched a poll, one of his scientific polls, and I'm using scientific in a very broad way,
with the statement: Apple should publish all censorship actions it has taken that affect
its customers.
And then says, Apple has threatened to withhold Twitter from its app store, but won't tell us why.
This is nonsense.
You know it when they're coming for you, just so you know.
And they may have been asking questions about safety like everybody else, including around
child safety.
And then tweets, do you know Apple puts a secret 30% tax on everything you buy through
the app store?
And this has been the focus of a lot of regulatory scrutiny for years now, including during
the Trump administration. Okay, so this is a declaration of war by Elon against Apple and
Tim Cook. And Apple hasn't commented, hasn't been dragged in. Let's break it down. So the first
thing, he's saying Apple is threatening to remove Twitter without giving any reason. Is there any veracity to this idea? Obviously, Apple has removed Parler
before, but there was a smoking gun in that case. You and I know that.
And put it back on. Yeah, I know.
Yeah, they did.
They did. But they remove things all the time because of privacy issues, all kinds of issues.
And they're in constant touch with these companies. And it would be lax of them not to be watching what's happening in Twitter and worrying about the safety of the app.
And that's what they're doing.
But to link it to political things is just nonsense.
It's just, they can have too much power and it also not be because they're libs.
That's the story from Elon's lieutenant, whatever.
Yeah, because there was one Apple executive who, I think, closed his Twitter account after Trump was
re-platformed. But that's not what the company is going to do.
It's made up. It's made up. It's made up.
One thing I think he does have a point about, and I'm not an Elon Musk enthusiast or apologist here,
but I do think like calling for Apple to have more transparency about their guidelines is good,
because they kind of use that Justice Potter Stewart, I know it when I see it, philosophy of
regulation, saying the company will ban apps that are over the line. They're basically saying they're
going to make the call themselves. Oh, yeah. Huh. If only some reporters had been doing it for 10
years. Everybody's been doing it. The penny just drops for this guy. It's
utterly performative.
Well, he has a big megaphone,
Kara, bigger than any reporter.
Yes, but he also has a platform
he is decimating around safety issues.
And so he wants to wave his hands over here
to distract the real problems on Twitter,
which we talk about with today's guests.
And so that's what he's doing.
He's not doing it because,
I'll rant about this later. I will rant about this later.
Okay, but I thought it was interesting that Tim Sweeney came in to back Elon in a big way on this.
He came in and he tweeted something like, we saw Apple block Fortnite within a few weeks of
Epic defying their policy. Would they nuke Twitter, Spotify, Facebook, Netflix? At what
point does the whole rotten structure collapse? Is this fear-mongering, or will they actually nuke it? It's fear-mongering. Tim Sweeney has a side.
Well, obviously, he has lots of agendas here. This has been a big fight brewing for many,
many years. Many people have been angry at Apple. As they should be.
Their power has only gotten bigger. I mean, there's this privacy tracking change, which has
decimated advertising on places like Facebook and other platforms.
It's decimated it because it was creepy, creepy, creepy crawling.
And they are protecting consumers.
They are. And yet, it's a lot to rely on one company for.
Yeah. I don't think Tim Cook should be the regulator of the internet either.
I also don't think that Elon is an actor without his own interests at heart.
So does Elon coming in, though, change the fight?
Does it give them more power?
Because he's very effective at maneuvering government contracts,
government regulation.
Yeah, I think he's just, it's a good,
it'll be interesting to see what Facebook does if they join with Elon,
because Facebook, of course,
has been complaining about this for a long time.
But, you know, there's an expression,
when elephants fight, only the grass is crushed.
Guess who the grass is?
The people. It's all of us.
All of us.
That is correct.
All of us.
We're either going to be not safe, or we're going to be subjected to enormous toxic piles of hate speech on Twitter, for example.
How much do you think Apple is allowed to get away with?
I mean, this point about transparency, there's a lot of trust in Apple.
By the way, as a consumer, I like it.
And I think the 30% is like a fair, like I would pay 30% to Apple to keep my phone secure and to keep my phone, you know, I don't care.
Like charge me 30% more than you're going to pay.
Maybe that's the way to go.
That might be the way to go.
Elon has said if Apple kicks Twitter off, he'll build his own phone.
Sure, whatever.
Do you think it'd be a good phone?
I mean, look, the guy's good at making products. Do you think he'd build a good phone? I do not.
I would certainly not buy it. I wouldn't trust it for because of the safety issues. There you go.
Yeah, because of the safety issues and how much he's in bed with different governments,
which is a problem for Apple too. Yeah. And I love the attacks on China when
Elon's up to his eyeballs in China the same way. Okay. You will not be buying the phone, but you did talk a lot about
these issues with today's guest, Yoel Roth, who is kind of more on Elon's side when it comes to
Apple. With Apple, yeah. This was his first public interview since he left Twitter. What were you
hoping to learn from him? He's a very measured person and I think he's fair. I think that's what
I want. I want to hear what happened.
This conversation was taped live in front of an audience at the Knight Foundation's Informed conference. We'll have it run when we're back from the break.
Fox Creative. This is advertiser content from Zelle.
When you picture an online scammer, what do you see?
For the longest time, we have these images of somebody sitting crouched over their computer
with a hoodie on, just kind of typing away in the middle of the night.
And honestly, that's not what it is anymore.
That's Ian Mitchell, a banker turned fraud fighter.
These days, online scams look more like crime syndicates than individual con artists.
And they're making bank.
Last year, scammers made off with more than $10 billion.
It's mind-blowing to see the kind of infrastructure that's been built to facilitate scamming at scale.
There are hundreds, if not thousands, of scam centers all around the world.
These are very savvy business people. These are organized criminal rings. And so once we
understand the magnitude of this problem, we can protect people better. One challenge that fraud
fighters like Ian face is that scam victims sometimes feel too ashamed to discuss what
happened to them. But Ian says one of our
best defenses is simple. We need to talk to each other. We need to have those awkward conversations
around what do you do if you have text messages you don't recognize? What do you do if you start
getting asked to send information that's more sensitive? Even my own father fell victim to a,
thank goodness, a smaller dollar scam, but he fell victim. And we have these conversations all the time.
So we are all at risk and we all need to work together to protect each other.
Learn more about how to protect yourself at Vox.com slash Zelle.
And when using digital payment platforms, remember to only send money to people you know and trust.
I am super excited to do this discussion. Obviously,
Yoel and I know a lot about Elon Musk and about Twitter. He more about Twitter, me about Elon,
but I know a lot about Twitter too. And one thing that I thought was interesting in the piece you
just wrote in the New York Times, and I'm going to start off by talking about that, is the work
of online sanitation is unrelenting and contentious. So let's get right
into Elon cleaning up this mess. So within the Elon regime, except for Elon, you were
more of the public face of Twitter right after the acquisition happened, and you stayed even
though senior executives were fired or left. Can you talk about that decision and give us an idea
of what your thinking was?
Sure. So I want to start going back months before the acquisition, and I'll share with everybody the advice that I shared with my team at the time, which was thinking about the experience of being
a frog in a pot of boiling water. And the question is, how do you prevent yourself from being boiled
alive? And the advice that I gave
my team, many of whom are highly values-driven, who have chosen to do the work of internet
sanitation because they care about harm reduction, they care about protecting people, what I told
them was, write down what your limits are. Write down how you will know if you are being boiled
alive. Write down when the moment will be that Twitter won't be the
place for you. And I hope those limits never arrive, but you should know them when they get
there. And I took my own advice and I wrote down what my limits were. And the acquisition happened.
It was all very sudden. There was a lot of change that happened. But in that moment, you can be
prone to very emotional decision making.
And instead of reacting to it emotionally, I said, look, I have goals.
There's a runoff election in Brazil with major risks of violence.
There's the midterms in the United States.
And I had a desire as a manager to protect my team.
And I knew my limits. I knew my objectives.
And for me, I stayed until those things were
no longer the factors that...
What's the number one thing on the list, the red line for you?
I will not lie for him.
And one of the critiques that I've gotten from the many people who yell at me on the
internet is, why are you holding water for Elon Musk?
And my answer for myself was, I'm not holding water for anybody.
I'm telling the truth about content moderation and about what's happening at Twitter. And I
worked at the company for almost eight years. I came to Twitter because I cared about the platform
deeply as a user. I've been on it for a very, very long time. And ultimately, I had an opportunity to
tell everybody on Twitter who was worried about the platform what was actually going on.
Which you did.
And I would not lie if asked.
That was the number one item on my list of limits.
And I stayed true to it.
Did he ask you to lie?
No.
No.
He did not.
So you didn't get to that red line.
That was not the breaking point.
What was the breaking point?
Lower down on the list, but I think really central to the trust and safety story, is
what I call procedural legitimacy. It's not the decisions you make, it's how you make those
decisions. And I think the art of trust and safety as a discipline is developing the procedures by
which really impossible, messy, squishy decisions
about content on the internet are made. And the whole profession is about
figuring out how you do that in a structured and disciplined way. And one
of my limits was if Twitter starts being ruled by dictatorial edict rather than
by policy, then that's a limit. Or at least there's no longer a need for
me in my role doing what I do if the decision is made because I said so. And it became fairly clear
in the way that Twitter was being managed that the work that we had put in over more than a decade
to developing a system of governance that was by no means perfect, by no means addressed every harm,
by no means protected every vulnerable person on the platform. And there were plenty of mistakes.
Of course. And I'm sure we'll talk about many of my mistakes. But we had a system of governance.
It was rules-based. We enforced our rules as written. We changed our rules in writing. We
did it transparently. And when that system of governance went away, you don't need a head of trust and safety anymore. So explain what that means. You're
saying it in a nice way, but one of the things that I've heard internally from a lot of people is
Elon likes to go around and say, I am the law. Is that correct? He didn't use those exact words,
but there was sort of a basic tension between some of what Elon said publicly in the days
following the acquisition and some of the ways that he operated as a leader and as a CEO.
What he said publicly, now infamously, was there will be a content moderation council.
Remember that from three weeks ago?
So there was going to be a content moderation council.
We were going to hopefully do it better than by the way,
Twitter's had a trust and safety council for like 10 years. And so that,
that was already a thing, but great.
We can improve the trust and safety council and we're going to make our
decisions by consulting with experts. It was going to be ideologically diverse.
I wanted to push for it to be globally diverse. Great.
And then when push came to shove,
when you buy a $44 billion thing, you get to have the final say in how that $44 billion thing is
governed. And there were decisions and requests that were quite top down and at odds sometimes
with this notion of we're not going to make big decisions until
we have this council, until we consult with people. I'm sorry, can I ask a question? You
actually believed him then? Because I think my tweet was there's a bridge that you might want
to buy at the time. You know, I think one of the things that is tricky about Elon in particular
is people really want him to be the villain of the story. And they
want him to be unequivocally wrong and bad. And everything he says is duplicitous. And I have to
say upfront, and this won't win me friends in this room, that wasn't my experience with him.
That's his character on Twitter, though. That is the character he plays on Twitter.
Certainly. Okay. I think a lot of it's performative, but go ahead.
That was my experience as well. And so I would have conversations with him or see him have
conversations with others where he would say things that were consistent with establishing
a moderation council, that were consistent with not making capricious unilateral decisions.
And I was optimistic on the basis of that. My optimism ultimately faded and I ultimately
left the company. But in the early days, there were sort of storm clouds and then they parted.
And there were multiple opportunities to make bad and damaging and unilateral decisions.
And he didn't. And maybe part of it is that I and others were able to influence him not to do so,
but he's not the unequivocal villain of the story. And I think it would be unfair to him and to the
history of it to suggest that he is. But he still made decisions unilaterally. He tended towards
the authoritarian versus the consensus-driven person. I think ultimately that was his default. And, you know, as the books
are written about this in the future, perhaps in the management book genre, they'll write about
sort of a style of executive leadership, and then also how you deal with very top-down directive
executives who say, this is what's going to happen, go do this. And I don't know that he has a lot of people around him
who push back on him.
I was telling you a little earlier,
I came into a lot of what happened at Twitter
fundamentally fearless,
not because the situation isn't intimidating,
but because the very worst thing that could happen
was getting fired.
And then, okay, I got fired.
Like that would suck. I really, I had my dream job at Twitter that I loved and I wanted to keep doing, but okay. The worst thing he could do to me was fire me. And so I felt comfortable pushing
back on some of those moments where I would get a direction to do something. And I'd say that,
you know, that doesn't make sense. Here are the risks of that. Here are the alternatives. Here's what I recommend instead. And if that sounds like completely basic executive
management, it totally is, but it worked. And I don't think there's a lot of people who do that.
And so I think the more you use the word authoritarian, I think it's the right one.
I think those impulses can sometimes win out in that situation.
Especially with enablers around them to say yes all day long and lick him up and down continually.
Not knowing exactly what they do behind closed doors, yes.
Yeah, okay. So he seemed to listen to you, but then didn't.
You know, again, one of the stories people really expect is a dramatic
blow-up, right? Some moment where there was a conflict and we yelled at each other and I quit. And no, it's decidedly not that. There was
never sort of a moment where things exploded. It was never that dramatic. And I
recognize that doesn't make for as good a story, but the truth of the matter is, it felt like those last weeks at Twitter were standing in
front of a dam. And the dam springs a leak, and you sort of plug the leak with your finger. And
then there's another leak over somewhere else, and you plug that one with your finger. And even if
you assume that you have many, many fingers and you're able to plug many of these leaks,
eventually some of the water splashes onto you and it starts to damage your credibility and your
reputation. And in an industry that's so fundamentally built on reputation and on trust,
you at a certain point feel like you're spending all your time trying to avert disaster.
And there's always going to be the disasters that slip through. I think Twitter blue verification
is an example of one of those disasters that slipped through. A disaster means it happened by accident.
That did not happen by accident.
It was by decision.
No, it did not.
Right.
We'll get into that in a second.
But the people he fired included Parag Agrawal, which was not a surprise.
None of this was a surprise to me.
The head of Legal, Policy, Trust, and Safety, your boss, Vijaya Gadde.
CFO Ned Segal.
General Counsel Sean Edgett.
Chief Privacy Officer Damien Kieran, who resigned.
Chief Compliance Officer Marianne Fogarty, who resigned. Why did he keep you? And he definitely
put you out as a face. And I was actually listening to that call that you were on with advertisers
trying to uninsult them, and you were trying to do a good job. And he sounded like he was on some
sort of calming substance, maybe some nice chamomile tea at the time,
which was not his natural state. Do you think he used you? Do you feel like that was the case?
I mean, because he really put you out there and all of his minions were retweeting you and go,
Yoel. You know, I think I made an awkward fit for them. Like, I was expecting to have been fired
the first day. And every day that I wasn't, it genuinely felt like a shock and a surprise.
But I think at a certain point, I was absolutely useful to them. I had been working in trust and
safety at Twitter for a long time. I had a reputation within the
industry of being a fairly straightforward person who tells it like it is for better and often for
worse. And I think at a certain point, there absolutely was a consideration that I could be
useful to them in dealing with the number one day zero existential threat, which was advertisers leaving
the platform. One of the things you said at one point, you tweeted that Twitter was actually safer
under Elon. Do you still feel that way? I don't. It's funny. In the days shortly after the
acquisition, a bunch of things happened. But one of them, predictably, because it's the internet,
was that a trolling campaign emerged.
And a number of trolls, you could sort of watch the organization happen on 4chan.
So this is all happening in public.
They were like, let's go to Twitter and test the new limits of Elon Musk.
Yeah, I'm quite enjoying my Chinese porn, but go ahead.
That's another thing.
Yeah, I'm aware.
But the troll campaign sort of emerged very rapidly.
And it, I think, was initially received as being an authentic surge in hateful activity on Twitter, which is a reasonable hypothesis.
It turns out not to have been borne out by what we were actually seeing on the service.
But a core, kind of taking a step back, a core principle for me of doing trust and safety work is you have to be able to measure it.
You have to understand what the shape of the problem is.
You have to be able to quantify it, which is really hard.
And you need to know if you're doing anything that's impactful.
And Twitter has struggled deeply with measurement in the trust and safety space for years.
But when we started thinking about this trolling campaign and about hateful conduct and the prevalence of racial slurs on the service, that's an empirical research question.
And we studied it. And we looked at it and we understood what the baseline prevalence of this
activity was. I tweeted a graph that showed clearly there was a thing that happened. The
thing that happened was a trolling campaign. And we shut down
the trolling campaign. And we took steps to build technology that addressed that type of conduct
automatically and proactively. And measurably, it reduced the prevalence of hateful conduct on
Twitter relative to the pre-troll campaign baseline. Sure. Which is great. Like that
unequivocally is a win for trust and safety.
Why is it not safe now? Trust and safety is an adaptive space. Nina Jankowicz has talked about
the concept of malign creativity, the notion that people are not sitting still, they are actively
devising new ways to be horrible on the internet. And the work of internet sanitation is trying to
understand that and ideally staying a couple of steps ahead of it.
Yeah, Steve Huffman of Reddit talked about this.
Exactly.
It's persistently malevolent people who do not have, that's their job.
Yes, mostly malevolent teenagers. Shane Huntley of Google has talked about APTs,
usually advanced persistent threats in the security world, as being advanced persistent
teenagers, and they truly are. But you can't rest on your laurels when it comes to that,
and that's why we have a trust and safety team. You can't use ML for all of it. You can't automate
it. There is no set it and forget it when it comes to trust and safety. And I think Twitter's
challenge going forward is not, can the platform build machine learning? Sure, they can. But are there enough people who understand the emergent malicious campaigns that happen on the service and understand it well enough to guide product strategy and policy direction? And I don't think that there are enough people left at the company who can do that work to keep pace with
what's happening. And what about when the company itself removes policies? They're not going to be doing
COVID misinformation anymore. Yeah. It's just easier to do that, right? Like, oh, we're just not
going to watch for it. You know, one way of streamlining the work of trust and safety, I guess,
is to have fewer rules. I will like chalk up a minor win in that space. They announced that they were going
to do it. That's something. I wasn't really expecting clear announcements about policy
changes. We've seen one, which is good. Unfortunately, the policy change is really
bad and damaging. But you can certainly streamline things. But that doesn't mean that malicious
activity is going to get less complicated.
It doesn't mean trolls are going to stop.
You can't bury your head in the sand.
Well, no, I don't think they're burying.
I think their policy is fuck it.
I don't care, like kind of thing.
That's my feeling on that.
I don't think that's going to be tenable going forward.
Right, so in my piece in the Times,
I was talking about like,
even if you wanted a policy that is fuck it,
you can't. Like, you simply cannot do that if you are operating what you want to be a commercially
viable consumer service. And the answer might be, you don't care and you're going to burn it to the
ground. But assuming your goal isn't to burn it to the ground, there are limits. There have to be
limits. Unless you're trying to burn it just a little bit in order to get the bankers out.
But that's a theory.
On November 9th, you and Elon seemed to align in that Twitter space.
And the next day, you were gone. What happened in that 24-hour period?
So there were a lot of things that were going on concurrently, right?
So if you look to the day that I resigned, what's more instructive is the week leading up to it.
So let's tally a couple of the things that happened in that week. Let's start with one
that was going to happen anyway. There were elections in the United States. And the hallmark
of doing a good job in trust and safety is nothing goes spectacularly wrong. And certainly,
the midterm elections were not free of disinformation. They were not free of malicious activity.
But nothing went spectacularly wrong on Twitter.
And that means that we did a good job.
So we were doing that work.
That was our focus area to begin with.
But then we'd also had massive layoffs.
Not as significant within my organization as the rest of the company, but there's an ambient
environment of that, of layoffs, of organizational change. So that's another one. There is the return
to office email. If you've ever wondered how the work of platform governance takes place within a
corporate setting, the answer is it's all of the worst of corporate environments and all of the
worst of the internet mushed together.
And so you have people who should be spending their time thinking about how to deal with
hateful conduct who instead are wondering, do I really need to go to the office?
I don't live anywhere near a Twitter office.
Am I still employed?
And so you have all of the HR-related stress on top of this.
And then there's verification.
We're going to get to that.
So that product launches
and adds additional layers and dimensions. Which you warned them about. Which everybody
warned them about. It was obvious. And let me take a step back and again,
try to interpret this as generously as I can. There is underneath all of the work on verification,
a theory that if you increase the costs for
malicious activity, the pay me $8 thing, that some people will say, yeah, you know what,
what I'm doing here is not worth $8. That's the basic theory of spam fighting. It's as old as the
internet, and it's increased the costs on your adversary to a point where they go away. That is
absolutely right. The problem is that the way that it was rolled out
and the way that it was implemented,
and especially the dynamics of an extremely online,
trolling-heavy platform like Twitter,
is that it went exactly off the rails
in the way that we anticipated,
and there weren't the safeguards that needed to be in place
to address it up front.
And so then you have that as well.
How did you make the decision?
You go home, say to your spouse, you know...
It was not an easy decision. I was ultimately weighing the pros and cons on an ongoing basis.
I knew what my limits were.
And by the time that I chose to leave,
I realized that even if I spent all day every day
trying to avert whatever the next disaster was,
there were going to be the ones that got through.
And blue verification got through over written advice
prepared by my team and others at Twitter.
We knew what was going to happen.
It's not that it was a surprise.
It failed in exactly the ways we said it would fail.
So it got through because Elon just wanted to do it
and the people around him supported him,
which they're called by a lot of people, the flying monkeys.
So a lot of the shit was thrown by the flying monkeys, correct?
And then it happened. It happened because he willed it to happen.
And that sort of, I think in the case of Steve Jobs, people called it the reality distortion field.
I think there was an element of that, but you can't distort the reality of what happened.
Steve Jobs made decisions with a group of people much more than people think.
And here, however it was made, by whomever it was made, it was a bad decision.
And it was a decision made against... Who made it? Who made that decision?
I mean, ultimately, when you call yourself Chief Twit, you're accountable for the decisions, whether they're good or bad.
How did Elon take it?
You know, again, back to where I started, I wish I could tell a story here that was about a grand fight
or about the villainization of him.
And that's not true to my experience.
Elon and his team expressed some amount of sadness
and disappointment that I was leaving.
They asked me if I wanted to be part of what they were building.
And at the end of the day, another
person in my situation could have very well made a different decision. But in that moment, the right
one for me was not to be there anymore. And then you left. No drama. Just thanks for your service.
And also no severance and no NDA. Right, which is fantastic for me. I love that. That's my favorite. That's my favorite
executive. So I'm sure you did okay, though. Anyway, let's talk about some of the controversial
content moderation decisions at Twitter that you were involved in. Some of them you weren't.
And I want you to talk about the reasoning and tell me, have you approached them differently
today? Now, you were not involved in Andrew Tate, so I'm not going to do that one. But the Hunter Biden laptop story, which is coming back
with a vengeance now. It was October 14th of 2020.
Please talk about that. So let me start off by debunking a little bit of misinformation
that's been widely reported.
It is widely reported that I personally directed the suppression of the Hunter Biden laptop story.
I read about that. That is not true.
It is absolutely unequivocally untrue.
And before we get into a witch hunt of, OK, who did make the decision, let's actually talk about how we landed in that situation in the first place. Let's go back to January 2017. The IC declassifies a
report about Russian interference in the 2016 elections. This is a watershed moment in the
history of content moderation and the internet. We learn, among other things, about the Internet Research Agency, the famous troll farm in St. Petersburg. We learn about these fake personas purporting to be Americans that had infiltrated social media. And we learn about DC Leaks. And we learn about the intersection between APT28, a unit of Russian military intelligence, a hacking group, and some of the sites that had been set up to leak embarrassing content about Hillary Clinton and the DNC, and ultimately propagated that content through WikiLeaks into popular attention. And while the jury is still out on how dispositive this was in 2016,
Kathleen Hall Jamieson has written that of all of the things you can look at about 2016 that shaped the electoral outcome, the one that makes the most sense is DC leaks and the emails. So there was that, right? There's that
context. And beginning in 2017, every platform, Twitter included, started to invest really heavily
in building out an election integrity function. This was what I spent my life
doing from the middle of 2017 onwards. We were focused on not just U.S. elections, but how do
you protect against malign interference in foreign elections? How do you think about different threat
actors in this space? And also critically, how do you think about what those threat actors might do?
And so as we are threat modeling the 2020 election, it's obvious to think about the
most influential thing that impacted the 2016 election, which was the hack and leak campaign
organized by the Russian government. And so we would have been stupid not to think about that
risk. Right. So what happened there? And so the morning of the Hunter Biden story in the New York Post happens.
And it was weird, right?
With distance and with what we know now, we forget some of the weirdness.
But do you remember the laptop repair guy?
Do you remember the uncertainty of the whole story?
We didn't know what to believe.
We didn't know what was true.
There was smoke.
And ultimately, for me, it didn't reach a place where I was comfortable
removing this content from Twitter,
but it set off every single one of my finely tuned
APT28 hack and leak campaign alarm bells.
Right, so it looked possibly problematic.
Everything about it looked like a hack and leak
and smelled like a hack and leak.
You did not want to do that.
But it didn't get there for
me. And this is, you know, the work of content moderation is write a policy, create a system
of governance, and then evaluate some new crazy situation against those standards.
Why the need to do it, given if it's a question of people getting killed, that's one thing,
this is just Hunter Biden, right? So why did they act then? I mean, ultimately, it's Jack Dorsey's responsibility. All of this, to me, I know he likes to say,
I didn't make the decision. I don't really care. He was the CEO. He could have stopped it.
And by not stopping it, that's a decision in and of itself. So I ultimately put it at his feet,
no matter what he tries to say. Why not wait on Hunter Biden? I want to get to the others,
too, just very briefly. Look, when you're
weeks out from an election, when you've seen what happened in 2016, and you've seen the immense
political and regulatory pressure to focus on election integrity, to address malign interference,
and when you feel a responsibility to protect the integrity of the conversations on a platform
from foreign governments expending their resources to interfere in an election, there were lots of reasons why the entire industry was on alert
and was nervous. But a mistake. And again, for me, even with all of those factors, it didn't get
there for me. But so it was a mistake. In my opinion, yes. Okay. Donald Trump. That one I
don't think was a mistake. January 6th. So it starts on the 6th, but it also starts prior to that.
That's correct.
In the weeks between election day and January 6th, Twitter moderated hundreds.
I think the final number ended up at something like 140 separate tweets from just @realDonaldTrump that violated various policies, including the
civic integrity policy. Every morning, it was a new tweet. Much of it was recirculating some of
the same narratives. And all of it was focused on the ultimately false claim that the 2020 election
had been stolen. And so we're going into the events of the 6th, and there's that context.
There's the centrality of his account in circulating this narrative.
So you let him get away with it for a long time, in other words.
Well, we'd been enforcing on it, right?
So we restricted the tweets.
We put warnings on them.
You couldn't like them.
You couldn't retweet them.
But we didn't ban him because it was a relevant part of a moment in American politics.
Right.
The events of the 6th happened.
And if you talk to content moderators who worked on January 6th, myself included, the word that nearly everybody uses is trauma.
We experienced those events not just as Americans or as citizens, but as people working on how to prevent harm on the internet. We saw the clearest possible example of what it
looked like for things to move from online to off. We saw the way that rhetoric about a stolen
election was being mobilized on sites like thedonald.win. We saw the trafficking of this
content in the fringe parts of the internet, and we saw
people dead in the Capitol as a consequence of it.
Let me read you something. I'm going to read. I'm going to tell you when I wrote it,
but I'm going to read this piece to you. And I got yelled at by Facebook executives,
Twitter executives, lots of executives when I wrote it. They said it was inflammatory.
Okay. It so happens that in recent weeks, including at a fancy-pants Washington dinner party this past weekend, I've been testing my companions with a hypothetical
scenario. My premise has been to ask what Twitter management should do if Mr. Trump loses the 2020
election and tweets inaccurately the next day, and for weeks to come, that there had been widespread
fraud and, moreover, that people should rise up in armed insurrection to keep him in office.
Most people I posed this question to had the same response: throw Mr. Trump off Twitter for inciting violence. A few said he should
only be temporarily suspended to quell any unrest. Very few said he should be allowed to
continue to use the service without repercussions if he is no longer president. One high-level
government official asked me what I would do. My answer, I would never let it get that bad to begin with. This was, I wrote this in 2019, mid-2019.
It's prescient.
Prescient, but why didn't you?
Why did you not do something before it got to that?
The first action that Twitter took to moderate Donald Trump's account was in 2020. It wasn't exactly the first, but among the first actions that we took on his
account was applying a misinformation label to one of his tweets about voting by mail.
The immediate aftermath of that was the personal targeting of me as sort of the face of alleged
censorship. And we can talk about that. But, you know, this was brand new for Twitter, and it was deeply uncomfortable.
There was grave discomfort from many of the people who now are allegedly the poster children of censorship to be using our judgment and applying it to a head of state. I believe it
was Angela Merkel who, after Twitter ultimately banned Donald Trump, said that that's a scary
amount of power for a platform to have. And only really two individuals, Mark Zuckerberg and Jack Dorsey.
And I agree that it's a terrifying amount of power.
And in that context, there was, in my view, understandable reticence.
And as the situation evolved, as our understanding of the harms evolved,
we ultimately had no choice but to take an action that was necessary,
was important, was the right thing to do to mitigate clearly observable potential harms.
Okay.
But that doesn't mean that it's an easy decision.
Right. Marjorie Taylor Greene, easier?
This is a case of the accurate procedural enforcement of our policies as written.
And you can like the policies or you can dislike the policies, but it's the same rules for everyone.
When you repeatedly tweet violations of a policy, in her account's case it is violations of our COVID-19 misinformation policy, there are consequences.
They include account timeouts, and ultimately they can lead to suspension
and they did.
Right. You can change the law if you want to, and Mr. Musk clearly did this week. But there was a written policy, and it was enforced as written.
Okay, the Babylon Bee, which is what got him to buy the thing. I think that's the one. Which was not particularly funny. The Babylon Bee's man of the year is Rachel Levine.
Not funny.
Yeah.
I didn't agree they should have taken that down, but go ahead.
You know, it's interesting to think about what the competing tensions around that are.
And I want to start by acknowledging that the targeting and the victimization of the
trans community on Twitter
is very real, very life-threatening, and extraordinarily serious. We have seen from
a number of Twitter accounts, including libs of TikTok notably, that there are orchestrated
campaigns that particularly are singling out a group that is already particularly vulnerable
within society. And so, yeah, not only
is it not funny, but it is dangerous and it does contribute to an environment that makes people
unsafe in the world. So let's start from a premise that it's fucked up. But then again, let's look at
what Twitter's written policies are. Twitter's written policies prohibit misgendering, full stop. And the Babylon Bee, in the name of satire,
misgendered Admiral Rachel Levine.
Twitter wrote...
Satire.
Nominally, but it's still misgendering.
And, you know, you can...
There can be a very long and academic discussion of satire
and sort of the lines there.
Interestingly, Apple tried to tease
out this question of satire and political commentary in their own guidelines, which I think
are also fraught. But, you know, we landed on the side of enforcing our rules as written.
And that's how it got bought by Elon Musk, just in case you're interested.
He was mad about that. I remember that. So he recently tweeted slides from the company talk,
and I want to talk about Twitter in general. With this small amount of people, he wants to create
an everything app. Is that a problematic thing from a trust and safety point of view?
Yeah. I mean, right now, Twitter is 280 characters and some video clips and GIFs.
If it's everything, imagine all of the attendant problems of literally
everything, right? So yeah, you can go on down the list. Payments, fraud, that's a whole thing.
And there are professionals who could do that work. But by the way, most of them don't work
at Twitter anymore. You've got all sorts of spam problems that already go with running a user-generated content platform. You've got IP violations.
Yep.
You've got all sorts of challenges that come from governments and government pressure to remove speech,
which is an extraordinarily fraught space for every company.
But the more industries you're in, the more your app does, the more that it insinuates itself into people's lives,
the more of a target you are.
Right.
And I think that's a risk that
few companies are prepared for. And you don't think this moderation council is going to materialize?
At this point, I think I'm coming around to the notion that I won't be taking you up on that
bridge you wanted to sell me. Okay. All right. Excellent. We'll be back in a minute.
I want to get your prediction.
In 12 months, Twitter will be what?
I don't think Twitter will fall over. I don't think there
will be a spectacular moment of failure. Maybe there will be. Maybe I will eat those words.
But Twitter has more than 15 years of some of the smartest engineers who have built a service that
is resilient to the absolute bonkers amount of traffic that people throw at it around the World Cup and other major
global events. There are incredibly talented people who built Twitter. And I think there are
going to be challenges, but I don't think it's going to fall over. But what I would encourage
folks to keep an eye out for is: what are the canaries in the coal mine that suggest that
something's not right? And a couple of the things that I keep an eye out for are,
have core safety features stopped working
the way that you expect?
Does block still work?
Or do you start seeing blocked accounts in weird places
where you don't expect them?
Does mute still work?
And then this one's the real tricky one, protected tweets.
So most accounts on Twitter are fully public,
but you have the
option on Twitter of marking your tweets as private, only your followers can see them.
Oof, that's an easy one to screw up if you are building Twitter and you don't know what you're
doing. And there have been a number of historical privacy breaches that Twitter has had to deal with
related to protected tweets. If protected tweets stop working, run,
because that's a symptom that something is deeply wrong.
Okay.
Do you think he can build this social network
back to profitability?
He's got a big debt.
He likes to spend a lot of time tweeting.
I'm fundamentally an optimist about human behavior.
And that's like weird considering I spend all day,
every day looking at the absolute worst parts of what humans can do to each other on the internet.
Right. But I fundamentally believe that people can change. One of my favorite things that's
been written about the internet is Megan Phelps Roper talking about her experience leaving the
Westboro Baptist Church on the basis of interactions that she had on Twitter. And the amazing part of that story for me is that
people, a member of the Westboro Baptist Church, have the capacity to change their views through
conversation. And I like to think that Elon Musk loves Twitter. I think he loves it as much as I
do and as much as many of the folks in this room do. I like to think that somebody who loves a
product and wants it to be important and impactful in the world
can learn from their mistakes.
What do you make of his shift to the right wing?
I couldn't say.
I genuinely, I don't know what his politics were.
I don't know what his politics are.
They were centrist.
They were, you'd never know, but it was not this.
But how much of it is just tweets? How much of it is a show? How much of it is sincerely held beliefs? It's hard to say.
What do you think?
If I were... again, let's try to think of the most optimistic interpretation.
Okay, let's try.
So let's say I just bought a social media platform that, rightfully or wrongfully,
people believe is biased in one political direction. And by the way, all of the peer
reviewed research about bias suggests that actually Twitter amplifies right-wing voices
more than left. But let's set that aside. Perception is reality. Perception is Twitter
is biased to the left. Okay. I just bought this thing. Who do I
need to rebuild trust with first? It's the people on the right. And one interpretation of this
activity is that it is trying to rebuild trust with a constituency for a platform that rightfully
or wrongfully has felt maligned and mistreated. Now, I think in the process, you risk alienating all of your other
users, you risk alienating advertisers, you risk alienating regulators, and that's a problem.
But I think you can make a rational case for trying to win back parts of your community that
have felt underserved.
That's the best interpretation I can come up with.
Yeah, no, I'm gathering it is. I'm saying it. I'm like, yeah, it feels like my mother and Fox News, that's all I have to say. But we'll see. You never know what he's going to do. I think he's gone quite far, more than it seems, more than performative, in some real ways. It's victimization. I think part of it is petty grievance, as I've called it, toward the Biden administration not giving him his due on electric cars, which seems small-minded and tiny, but that's just the way people are.
Let me ask you a couple more questions about Apple, where you agree with him. Yesterday
he tweeted, Apple has threatened to withhold Twitter from its app store but won't tell
us why. Yes, they did tell you why. This kind of mirrors what you kind of wrote in an op-ed.
I understand people feel under the regime of Apple, which has some very strict rules
around privacy and behavior and things like that.
They have a lot of rules, although fraught, as you said.
He's also going to war over the 30% cut Apple takes on subscriptions.
How do you see this playing out?
You know, I think Apple is a very savvy
company. They're very strategic. And I think ultimately their primary focus is building
products that their customers love. Like they aren't just a software company. They sell a phone
that's in your pocket and a computer and you buy it because you like it. And you trust them. Yes.
And ultimately,
the theory of the App Store, and if you read Walter Isaacson's biography of Steve Jobs,
he writes a lot about this. The theory is that it was going to give people the best experience
as opposed to this malware-riddled, side-loaded app situation. So, okay, that's the theory of
the thing. I think in that context, it would require something really dramatic to happen for Apple to remove Twitter from the App Store. And I think both sides don't want that to happen. But I think Twitter needs Apple a lot more than Apple needs Twitter.
That is also true. So do you think he will rally everybody behind him? He wasn't concerned
about it before.
You know, I never heard the word 30% come out of his mouth before this week. I think that's a convenient argument, given some of the financial pressures on the company.
But I don't want to lose the point that actually we should be really worried about app store
governance.
I own an iPhone.
I used to work at Apple.
I've owned every single iPhone since the very first
one. I've waited in line at midnight. Call me an Apple fanboy, but we should be worried about how
the app store works. There are billions of devices out there where decisions are being made about what
you can and can't download and what those procedures of governance are. And we've spent
since 2016 talking about accountability for platforms. App stores are a
platform. 100%. That said, you can do a web app. A lot of developers are moving that way, so you
can bypass that. There's always alternatives. And that's ultimately Apple's explanation. They say,
look, you can always just use the internet. But that's not satisfying. Like, I believe in Twitter having a content moderation
council. So should Apple. So should Google. If you are engaged in a moderation function,
you should do it in a transparent, procedurally legitimate way.
That's a fair thing. A couple things. Elon tweeted, the Twitter files on free speech
suppression soon to be published on Twitter itself. He liked to publish your emails,
by the way. The public deserves to know what really happened. What do you expect to be in this,
or is it just more gaslighting? I have no idea. You know, I'm ultimately,
I don't really have like a front stage, backstage persona. Like I sort of just am who I am.
And so if you're like looking for some emails with profanity, like you'll find them.
My corporate Gmail account at Twitter
was something like 160 gigabytes
by the time I left the company.
So there's a lot of email there.
Go nuts.
So there could be a, that Marjorie Taylor Greene
is really irritating on these Jewish space lasers,
for example.
I don't think I sent that email.
But, you know, like, the sausage making
of content moderation is deeply messy.
It's hard. It's often pretty thankless work. It's complicated.
But if you want to go and single out individual emails,
individual decisions, individual sentences and phrases and utterances,
you can do that.
But it's in bad faith.
Are you worried?
You've already undergone this.
Kellyanne Conway spelled out your Twitter handle on TV twice,
essentially siccing the MAGA trolls on you.
This was because you called Mitch McConnell a bag of farts in a tweet,
which I see it. My most infamous tweet.
Yes, yes, yes. I've never done that. Congratulations.
Donald Trump, a racist tangerine.
And then you said...
We're reliving my greatest moments.
It's okay.
I'm not doing your teenage ones.
Yes, the person in the pink hat
is clearly a bigger threat
to your brand of feminism
than actual Nazis in the White House.
So that was an experience you had.
You have, like Elon says, free speech.
Go for it.
But are you worried about these Twitter files coming out?
What was that experience like,
having Kellyanne, who's always in control of herself,
siccing these MAGA trolls on you?
It's terrifying.
I thought I was going to be a college professor for a living.
I got a PhD and was doing research that nobody cared about.
And then I was like, oh, you know, this platform thing is cool.
I can go and do research there. And then, you know, one thing led to another. And all of a sudden we apply a
misinformation label to Donald Trump's account and I'm on the cover of the New York Post.
And that is a deeply terrifying experience. And I say this from a position of unquestioned
privilege as a cis white male.
Like the internet is much scarier and much worse for lots of other people who aren't me,
but it was pretty fucking scary for a long time.
What was the scariest part?
You know, when you get targeted in some of these ways,
it's hard to differentiate between
what is somebody just online trying to rattle you and what's a real threat.
You see in things like Pizzagate that online conspiracies can mobilize very real and very
direct offline violence. And I worry about that. I had been doxxed years before by teenagers,
actually. They're always behind it. But, you know, I saw those harms. I experienced those harms.
And now it was those harms through a mainstream news outlet being held up in the Oval Office by
the former president of the United States. And that is deeply terrifying. And when you have 111
million Twitter followers, everything that you tweet out can mobilize
exactly some of those same scary people.
Does Elon understand that responsibility?
I think when you are the richest person on the planet, you don't always have a perspective
of what life is like for other people.
Yeah, I've noticed that.
Especially when it comes to safety and security.
Yeah.
What I always say is people who've never felt unsafe a day in their lives don't understand safety in any way.
I mean, that's the core of what the work of trust
and safety is, right?
How do you understand and empathize with the problems
and the harms and the risks that can come to people
whose lived experience is quite different from your own?
And how do you try to make a product like Twitter
more resilient to it?
But I don't... I think it may be hard for him to understand the consequences that his tweets can have for the people that he targets. And I truly hope, for my safety and for my family's safety, that I'm not targeted again.
Is there one tweet? The one about Paul Pelosi with the anti-gay trope was my limit with him. That was not a good one. He deleted it, but it's still unclear why. But it was a truly awful thing to do.
Yes, absolutely.
What are you going to do next?
You know, I don't know.
I've been offered...
What have you been offered?
I feel like I haven't really taken a vacation since 2017. Like, I ran our Russia war room
and sort of my life has been
at an increasingly frenzied pitch ever since.
So like, I would love to just take a vacation
and breathe a little bit and process
and think about all of the things
that have happened since 2017.
And then beyond that, I wanna do some writing.
I was once an academic, always an academic.
I processed the world by intellectualizing it.
And so I'll probably do a little bit of that
and then see what comes next.
Are you hopeful for Twitter?
I'm hopeful for the internet.
That's not what I asked.
Look, I got into what I do
because the internet shaped my life in really profound and fundamental
ways.
I'm openly gay.
I came to understand my identity and my community through the internet.
I met my husband online while I was researching dating apps for my PhD.
And research. And I think the transformative potential of the internet is limitless.
And I'm rooting for the internet to continue to be a place for, you know, gay kids like
me to find their community and for people who are interested in whatever they are or
whatever they're doing or whatever they want to do to be able to find the communities that
work for them. And making the internet safe for that requires constant work. And maybe it'll
be at Twitter. Maybe it'll be somewhere else. Very last question. Say something to Elon Musk.
What would you say right now? One piece of advice. Besides lose the flying monkeys, but go ahead.
Humility goes a really long way.
That one, I got to tell you, that's not going to happen.
I don't think so either.
All right.
Yoel Roth, thank you so much.
Thank you. Okay, Kara, you've reported on Elon for years.
You've reported on Twitter for years.
Was there anything that Yoel told you that surprised you?
No, I think he was depicting, you know, he and I were texting afterwards.
He's very much depicting the Elon I know, which is he was very reasonable at times, right?
And then this authoritarian side took over. You know, 90% of the time when you talk to Elon, he asks good questions. He's very reasonable. He
wants to debate. And then this other part takes over where he must decide everything. And I think
that's what I think he depicted absolutely correctly. And that's what I liked about it.
I thought he was very fair. He discussed his own mistakes around the Hunter Biden thing. He did not
make that call, as he noted.
But, you know, he talked about his own mistakes.
And I think he's genuinely like a lot of people in trust and safety.
You know, what's really irritating is the denigration of these people by political people.
These people actually care about the safety of people on their platforms, especially those who are marginalized, especially those who are politically vulnerable.
I have to say, I think that hearing the personal stakes that these individuals face
is still revelatory to me, even though we've done dozens, hundreds of these interviews
where we've heard people talk about being targeted.
He was targeted by Kellyanne Conway. It was very irresponsible.
And you're like a young person working in the Valley targeted by a big political figure. It's, you know, these are high-stakes jobs. So that really stuck with me. And I liked his point about have a philosophy, have that philosophy guide you, and be transparent about what that philosophy is. Because even, I mean, COVID, Twitter's decision to, like, not regulate COVID misinformation anymore. You know, Yoel saying, well, at least they told us they're going to not do it. I mean, that's a very big thing to give. What are the rules, and are you
going to follow them? And I think that was really important. And the second part I thought was
important, I don't know if you did, was that you can do as much as you want with AI and machine
learning. But ultimately, you also need people who have seen this. It's like having cops on the
beat. They know the neighborhood. They understand the malevolent players. They know their tricks. They try to keep ahead of them. They
don't always. And so I think that was important to understand how important this staffing is and how,
you know, he called himself a, you know, a trash man. I mean, a custodian.
Yeah. The reason Apple has said it charges such a big cut in the App Store is because they
are paying humans to review all these apps. And Yoel's op-ed described this where like Apple employees are, you know, writing to him, reviewing the app,
raising questions. I mean, it's really people doing groundwork to keep you safe, informed,
etc. But that's a good segue to our rant today, Kara, which I think is about Apple.
Yes, it is, indeed. Let me first say there's no question, as we just discussed, that the most valuable company in the world at $2.2 trillion, and it is Apple, has two very problematic issues that are obvious.
The first is its hegemony over the App Store and the fee it collects from developers as gatekeeper to its flagship iPhone.
While there are lots of good reasons to charge to ensure apps that billions of consumers use are safe from malware, privacy intrusions, and overall scammers, Apple holds unquestionable power in the equation, and it can make or break those within its purview. The same, by the way, is true of Alphabet and its Android app universe.
There have been complaints galore about this from developers like Spotify and Basecamp for years,
significant legal action with Epic, and also a slow-moving regulatory scrutiny in the U.S.,
a much faster-moving one in Europe, all of which will eventually force Apple and also Alphabet
to open themselves up and be subject to some rules they do not make themselves.
Second, Apple has long had major exposure in China over its critical manufacturing facilities
and also its tight relationship with the authoritarian government there.
Again, nothing new. It's been made worse by the pandemic crackdowns and the violence that is happening
there and the increasingly hostile relationship between the U.S. and China. This is all well
known and ongoing, but suddenly it's a thing because Elon Musk has decided to press on these
issues as the new owner of Twitter, accusing Apple CEO Tim Cook of all manner of nefarious and secretive
behaviors, leaving aside that Musk, via his company Tesla, has significant exposure himself
in China and has tweeted more lavish support of the regime there than any U.S. tech leader.
He's bizarrely accusing Apple of trying to limit speech because it has been limiting
advertising on his deteriorating platform.
As I noted, no advertiser likes to spend their marketing money
in the thunderdome of toxic asininity.
But no matter, since Musk never met an opportunity
to make baseless accusations when it advantaged him.
I don't ever recall him stepping up when many of us
were talking about the concentration of power in tech,
which is a business and innovation issue.
Instead, when it's in his financial interests,
he and his minions cynically
make it a fake political drama to gin up GOP rage. They couldn't have cared less about who was taxing
who, a valid concern of developers, and now they preside over making Twitter a mosh pit for the
malevolent and worse that invites scrutiny from the app stores. That's when they cry victim and
pretend stone-cold capitalists are libs
to deflect their own problem.
Their performative hand-waving is depressing
since these are important issues that have long needed attention by regulators.
Instead, they're turning it into a red-pilled fever dream to benefit themselves.
Like the lapdogs they are, the screamers of the GOP,
such as the frequently wrong but never in doubt Senator Josh Hawley,
jump on board the fabulous wagon train, taking at face value Musk's claims that Apple has threatened to toss Twitter off the App Store.
Is that true? Unlikely. It might be if Twitter gets worse and unsafe.
While it's the job of app reviewers to pay a lot of attention to problems on big sites like Twitter,
this is nonsense and only depends on how well or poorly Musk runs the site. And right now, it's not very well run, as my interview, which you just heard, with its former head of trust and safety notes, and as extensive reporting has shown. Not only are marginalized communities at risk, but so are children, as Musk has decimated
the staff that focuses on stopping the dissemination of child sexual abuse content,
leaving it less able to enforce its bans. This is something Apple and Google should not have to
police, but now they do. And those who should police it, Twitter and also legislators, are
instead engaged in another round of what can only be called, I'm sorry to say, propaganda. Safety and innovation do not have to be at odds,
and don't let the hypocrisy of the world's richest man who is fighting the world's most
valuable company trick you into thinking otherwise. All right. Very ranty, Kara, today.
Ranty. Come on. This is such crap. Anyway, and by the way, we should be looking at Apple.
It's just this is not a serious actor in that fight. Do you want to take us to credits, Kara? Yes, I shall. Rant us through the credits.
Today's show was produced by Naima Raza, Blake Nishik, Christian Castro-Rossell, and Raffaella
Seward. Special thanks to Hayley Milliken and Alejandro de Onís, and the team at the Knight
Foundation. Thank you so much. You are great hosts. Our engineers are Fernando
Arruda and Rick Kwan, and our theme music is by Trackademics. If you're already following the
show, congratulations. You're not a bag of farts. If not, you're not a bag of farts either. But go
wherever you listen to podcasts, search for On with Kara Swisher and hit follow. Thanks for
listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us.
We'll be back Monday with more.