The Joe Rogan Experience - #1258 - Jack Dorsey, Vijaya Gadde & Tim Pool
Episode Date: March 5, 2019
Jack Dorsey is a computer programmer and Internet entrepreneur who is co-founder and CEO of Twitter, and founder and CEO of Square, a mobile payments company. Vijaya Gadde serves as the global lead for legal, policy, and trust and safety at Twitter. Tim Pool is an independent journalist. His work can currently be found at http://timcast.com
Transcript
Five, four, three, dos, uno.
Come on, TriCaster.
Live?
Yes.
All right.
We're live, ladies and gentlemen.
To my left, Tim, Tim Pool.
Everybody knows and loves him.
Vijay, what is it?
How do I pronounce your last name?
Vijay.
Vijay Gatti.
Vijay, not Vijay, Vijay.
Vijay. Gja. Vidja.
Gaddy.
Gaddy.
And your position at Twitter is?
I lead trust and safety, legal, and public policy.
That's a lot.
That's a lot.
And Jack Dorsey, ladies and gentlemen.
First of all, thank you everybody for doing this.
Appreciate it.
Thank you.
It feels all of a sudden there's tension in the room.
We're all loose.
We're all loosey-goosey just a few minutes ago.
There's no tension.
Now everyone's like, oh, this is really happening.
Here we go.
Before we get started, we should say, because there were some things that people wanted
to have us talk about.
One, that the Cash App is one of the sponsors of the podcast.
It's been a sponsor for a long time.
And also a giant supporter of my good friend Justin Wren's
Fight for the Forgotten Charity,
Building Wells for the Pygmies in the Congo.
This is very important to me,
and I'm very happy that you guys are a part of that
and you are connected to that.
I don't, that's, I mean, it's easy for someone to say
that that doesn't have an influence on the way we discuss things,
but it doesn't.
So if it does, I don't know what to tell you.
I'm going to mention, too, just because I don't want people to come out and freak out
later, I actually have like 80 shares in Square, which isn't really that much.
But it's something.
It is.
It is.
So I don't want people to think, you know, whatever.
You're the CEO of Square, I think, right?
Yep.
Yeah, there you go.
There you go.
And we own the Cash App.
And the reason why we decided to come together is I thought we had a great conversation last time,
but there were a lot of people that were upset
that there were some issues that we didn't discuss,
or didn't discuss in depth enough,
or they felt that I didn't press you enough.
I talked to Tim
Because Tim and I have talked before
And he made a video about it
And I felt like his criticism was very valid
So we got on the phone and we talked
about it and I knew immediately within the first few minutes of the conversation that he was far
more educated about this than I was. So I said, would you be willing to do a podcast and perhaps
do a podcast with Jack? And he said, absolutely. So we did a podcast together. It was really well
received. People felt like we covered a lot of the issues that they felt like I didn't bring up.
And so then Jack and I discussed it.
And we said, well, let's bring Tim on and then have Vidya on as well.
I said that right?
Yep.
It's a hard one.
Sorry.
I'll get it right.
I promise.
So we're here.
We're here.
We're here.
Today, do you know who Sean Baker is?
He's a doctor who's a prominent proponent of the carnivore diet.
His post was, his account was frozen today.
I just sent it to you, Jamie.
It's just a screenshot.
Yeah.
His account was frozen today because of an image that he had,
because he's a proponent of the carnivore diet. There's a lot of people that believe that this elimination diet is very healthy for you,
and it's known to cure a lot of autoimmune issues with certain people.
But some people ideologically oppose it because they think it's bad for the environment, or you shouldn't eat meat, or whatever the reasons are.
This is huge in the Bitcoin community.
Yes.
Yeah. Well, for a lot of people that have autoimmune issues, particularly psoriasis and arthritis, it's a lifesaver.
It's crazy.
It's essentially, it's an autoimmune issue.
So because he has a photo in his header of a lion eating what looks like a wildebeest or something like that, his account was locked for violating the rules against graphic violence or adult content in profile images.
That seems a little... And I wanted to just mention that right
away. Now, whose decision is something like that? Like who decides to lock a guy's account out
because it has a nature image of, you know, natural predatory behavior?
On this particular case, it's probably an algorithm that detected it and made some sort
of an assessment. But as a general rule, how we operate as a company is we rely on people to report information to us. So if you look at any
tweet, you can kind of pull down on the caret on the right and you can say report the tweet,
and then you have a bunch of categories you can choose from what you want to report.
I think this one in particular, though, is probably an algorithm.
So how does that work? Does he have the option to protest that, or to ask someone to review it?
Absolutely.
And I'm guessing that people are already reviewing it, but there's a choice to appeal any action.
And that would go to a human to make sure that it is actually a violation of the rules.
Or in this case, if it's not, then it would be removed.
Is that a violation of the rules?
That image?
I don't think so.
I don't think that that would be what we're trying to capture in terms of graphic images in an avatar.
It's more about violence towards humans, unless it was some sort of cruelty depicting animals or something like that.
But this seems not the intention of the rule.
This is one of the reasons why I wanted to bring this up immediately.
Does this highlight a flaw in the system in that people can target an individual? Because with him,
like I said, he's a doctor and a proponent of this carnivore diet, but he's also ruthless in his
condemnation and mocking of vegans. He does it all the time. And so then they get upset at him, and
they can target posts and just report them en masse. And when they do that, then this becomes an issue.
I think this does reveal part of, you know, the challenges that we face as a global
platform at scale. And this, I don't, I don't know what happened in this case. So it's hard
for me to talk about it. But what I would say is that it doesn't really matter if one person
reports it or 10,000 people report it, like, we're going to review the reports and we're going to make an assessment.
And we're never going to, you know, kick someone off the platform finally and forever
without a person taking a look and making sure that it's an actual violation of the rules.
Okay.
But the mob reporting behavior does happen.
Yeah, it does.
It happens across the spectrum.
I'd have to assume it's going to be one direction.
I can't imagine he would target vegans, but vegans would target him, right?
Well, he might.
I mean, he doesn't.
Is he the kind of guy who's going to want to report vegans and get them banned from Twitter?
Or is he going to want to make fun of them?
He's going to make fun of them.
They're going to target him to try and get him removed by exploiting the system that you guys have.
It may not be him, though.
It could also be his followers.
It's a really complicated world out there.
So the motivations of why people mob report are different, and it's not always under someone's
control. It could even be other carnivore diet proponents who are just jerks, who don't
like him because he's getting all the love. People are weird.
Yeah. But it's just, the idea, though,
is that it does kind of highlight a bit of a flaw. In that, it's good that someone can,
because you might see something awful,
someone doxing someone or something like that,
and then you can take that and report it,
and then people can see it and get rid of it
and minimize the damage that's done.
There's another big problem here in that,
is the carnivore diet legitimately healthy?
Is it a threat to your health?
And if it is, what is Twitter's responsibility in controlling
that information? Right. So just to clarify, my opinion is, if he wants to be a proponent for the
carnivore diet, let him. But you've got people on YouTube who are being deranked for certain
beliefs about certain health issues that I don't agree with. And so one of the risks then is,
you know, we're coming towards a position where people think some ideas are better than others.
Therefore, as a company, we're going to restrict access to certain information.
You mean like anti-vax?
Exactly. So I guess what I'm trying to say is, at what point would you guys restrict someone from sharing, like, false information about vaccines that could get someone hurt?
That is not a violation of Twitter's rules. No.
I think, I mean, I'd be interested to hear your ideas around this, but our perspective right now is around this concept of variety of perspectives.
Like, are we encouraging more echo chambers and filter bubbles, or are we at least showing people other information that might be counter to what they see?
And there's a bunch of research that would suggest that further emboldens their views.
There's also research that would suggest that it at least gives them a consideration
about what they currently believe.
So, sorry, given the dynamics of our network being completely public,
we're not organized around communities, we're not organized around topics,
we have a little bit more freedom to show more of the spectrum of any one particular issue
And I think that's how we would approach it from the start. That said, we haven't really
dealt much with misinformation more broadly, across, like, these sorts of topics. We've
focused our efforts on elections and, well, mainly elections right now.
You know, YouTube is a different animal.
It is.
Someone can really convince you that the earth is flat if you're gullible and you watch a 45-minute YouTube video.
Right.
It's kind of a different thing.
But I wanted to just kind of get into that statement you made about misinformation and whether or not you'll police it.
So I think that the tough part of this is really, and I'd love to have a discussion about this, is do you really want corporations to police what's true and not true?
Absolutely not. That's a really, really tough position.
But you guys do that.
We try not to do that. We don't want to do that.
But you do, in your rules.
But the places that we
focus on is where we think that people are going to be harmed by this in a direct and tangible way
that we feel a responsibility to
correct. Not correct ourselves.
In your rules, Tim, what do you mean by that?
Dead naming and misgendering.
Dead naming and misgendering.
That's a specific ideology that's unique to a very small faction of people in this world that
you guys actually ban people for.
So the way I think of it is it's behavior-based. And I know you think of it as content,
and we can disagree on this point. But this is about: why are you doing this to a trans person? Why are you calling them by this name when they've
chosen to go by a different name? Or why are you outing them in some way? Like, what is your intent and purpose behind that?
I don't mean to interrupt, but in the interest of clarity, I want to explain what deadnaming means.
Right, right.
So, a transgender individual changes their name when they transition.
A deadname would be their birth name or the name they went by before the transition.
Yeah, okay.
Because my mom's probably going, what?
I'm ready for the text.
What's a deadname?
And I will clarify, too, your rule specifically targeted misgendering and deadnaming, I believe
is correct, right?
So years ago, we passed a policy that we call our hateful conduct policy, and that prohibits targeting or attacking someone based on their belonging in any number of groups, whether it's because of their religion or their race or their gender, their sexual orientation, their gender identity.
So it was something that's broad based is that you can't choose to attack people because of these characteristics.
But you do have limits on what characteristics you police, right?
So you're not banning people for targeted trans-specieing others, right?
Well, we have also general abuse and harassment rules, right?
Which says you can't engage in abuse and harassment on the platform.
So you can't deadname someone, but you can call them stupid.
Generally.
I mean, if you created an account that only was there to call the same person stupid 5,000
times, we'd probably view
that as a, you know, targeted harassment.
It's a function of behavior, because people with our system can do this in massive velocity,
which would ultimately silence you from the platform or just say, like, I give up.
I don't want to deal with this thing.
I'm out.
So we can just get into all of the big examples.
I mean, starting with me.
I'd love to, Tim, but can we just take a step back and try to level set what we're trying to do with our policies?
Because I think it's worth doing that.
Yes, yes.
So as a high level, I personally, and this is my job to run the policy team, I believe that everyone has a voice and should be able to use it.
And I want them to be able to use it online.
Now, where we draw a line is when people use their voice and use their platform to abuse and harass other people to silence them.
Because I think that that's what we've seen over the years is a number of people who have been silenced online because of the abuse and harassment they've received.
And they either stop talking or they leave the platform in its entirety. If you look at free expression and free speech laws around the world, they're not absolute.
There's always limitations on what you can say. And it's when you're starting
to endanger other people. So my question then is, when I was physically threatened on Twitter,
you guys refused to take down the tweet. And I showed up in Berkeley and someone physically
threatened me because they were encouraged to. When I was in Venezuela, I was physically threatened by a high-profile individual,
10,000 people tweeting at me because they were encouraged to.
You guys do nothing, right?
So I guess there's the obvious question of why does it always feel like your policies
are going one direction politically?
You say it's about behavior.
You said it several times already.
But I've got tons of examples of that not being the case.
And you will always be able to find those examples.
The examples where you guys were alerted multiple times and did nothing,
like when Antifa doxed a bunch of law enforcement agents.
Some of the tweets were removed, but since September,
this tweet is still live with a list of private phone numbers, addresses, yet Kathy Griffin, she's fine.
The guy who threatened the lives of these kids in Covington and said,
lock them in the school and burn it down, you did nothing.
I mean, he got suspended. We did take his tweets down.
Was he banned for threatening the lives of kids? Absolutely not.
So again, we have, and I'm happy to talk about
all these details. We have our policies that are meant to protect people and they're meant to
enable free expression as long as you're not trying to silence somebody else. Now we take a
variety of different enforcement mechanisms around that. Sometimes you get warned.
Sometimes your tweet is forced to be deleted.
It's a very rare occasion where we will outright suspend someone without any sort of warning or any sort of ability to understand what happened.
What did you guys do with Kathy Griffin when she was saying she wanted the names of those young kids wearing the MAGA hats, the Covington High School kids?
Yeah, that's a great example, Joe. So in that particular case, our doxing policy really focuses
on posting private information, which we don't consider names to be private. We consider your
home address, your home phone number, your mobile phone number, those types of things to be private.
So in that particular case, we took what I think now is probably a very literal interpretation of
our policy and said that that was not a doxing incident.
Do you think that was an error?
I think that it was short-sighted.
And given the context of what was going on there, that if I was doing this all over again, I would probably ask my team to look at that through the lens of what was the purpose behind that tweet.
If the purpose was in fact to identify these kids, to either dox them or abuse and harass them,
which it probably was, then we should be taking a more expansive view of that policy and including that type of content. Especially considering the fact they're minors. I mean, I would think that
right away that would be the approach. So this is a trial and error, sort of learn and move on with
new information sort of a deal. Absolutely. We're going to learn. We're going to make a ton of
mistakes. We're trying to do this with hundreds of millions of accounts all around the world,
numerous languages.
We're going to make mistakes.
Even if we get better, there will always be mistakes.
But we're hoping to learn from those and to make ourselves better and to catch cases like
Tim's or others where we clearly may have made an error.
And I'm open to having those discussions.
I'm sorry, Tim, I'm not familiar with your specific cases. But I'd love to follow up with you and really try to understand.
Do you want to see the tweet?
We definitely can pull that up.
So it's bit.ly slash Antifa tweet, all lowercase.
This is also an evolution in prioritization as well. One of the things we've come to recently is we do need to prioritize these efforts,
both in terms of policy, enforcement, how we're thinking about evolving them. One of the things
that we want to focus on as number one is physical safety. And this leads you immediately to
something like doxing. And right now, the only way we take action on a doxing case is if it's
reported. What we want to move to is to be able to
recognize those in real time, at least in the English language, recognize those in real time
through our machine learning algorithms, and take the action before it has to be reported. So we're
focused purely right now on going after doxing cases with our algorithms so that we can be
proactive. That also requires a much more rigorous
appeals process to correct us when we're wrong, but we think it's tightly scoped enough. It impacts
the most important thing, which is someone's physical safety. Once we learn from that,
we can really look at the biggest issue with our system right now is all the burden is placed upon
the victim. So we only act based on reports. We don't
have a lot of enforcement, especially with more of the takedowns that are run through machine
learning and deep learning algorithms. But if something is reported, a human does review it
eventually? Or are there a series of reports that you never get to?
There's probably reports we
don't get to. I mean, we prioritize the queue based on severity, and the thing that will mark severity is something like physical
safety or private information or whatnot. So generally we try to get through everything,
but we have to prioritize that queue even coming in.
So if someone threatened the lives of someone
else, would you ban that account? Would you tell them? Let's say someone tweeted three times, kill these people.
I want them dead.
Three times.
Is that?
Yes, that's a violation.
You didn't ban him, though.
I don't know why that is.
Let's pull that up, Jamie.
I don't necessarily want to give out specific usernames because then people just point the finger at me and say, I'm getting these people banned.
But during Covington, this guy said multiple times he wanted his followers to go and kill these kids.
Yeah, and we have to look at that, but we also have to look
in the context. Because we also have, I think we talked about this a little bit in the last podcast,
but we have gamers on the platform who are saying exactly that to their friends that they're
going to meet in the game tonight, and without the context of that relationship, without
the context of the conversation that we're having, we would take the exact same action on them incorrectly.
Yeah, absolutely. That I understand.
I think in the case of Covington, though, this user was so high profile. He's a verified user.
He's got something like 20,000 followers. And it was highlighted by numerous conservative media
outlets saying, wow, this guy's, it's screenshotted, it's being shared. I mean, you had a Disney
producer posting a picture of a wood chipper with a body being
thrown in it, saying that's what he wanted to happen.
So I do know that some of these accounts got locked.
A Disney producer was doing that?
Well, I'll clarify.
Fact check me on that.
But that's basically the conversation that was had.
There's a guy at Disney, he posted a picture from Fargo of someone being tossed in a wood
chipper, and he says, I want these MAGA kids, you know, done like this. You had another guy who specifically said, lock them
in the school, burn it down, said a bunch of disparaging things, and then said, if you see
them, fire on them. And he tweeted that more than once. And those tweets were taken down.
Those were violations of our rules.
I'm pretty sure it's actually illegal to do
that, right? To tell any individual to commit a felony is a crime, right?
Well, incitement of violence is certainly a crime in many places.
I just have to wonder.
I understand the context issue, but this is what I talk about.
Well, there's context and scale, too, though.
But, Tim, those accounts were actioned.
They may not have been actioned the way you wanted them to,
but the tweets were forced to be deleted,
and the account took a penalty for that. I understand that. What kind
of a penalty? Well, again, as I said earlier, Joe, we don't usually automatically suspend accounts
with one violation because we want people to learn. We want people to understand what they
did wrong and give them an opportunity not to do it again. And it's a big thing to kick someone
off the platform.
And I take that very, very seriously.
So I want to make sure that when someone violates our rules,
they understand what happened and they're given an opportunity
to get back on the platform and change their behavior.
And so in many of these cases, what happens is we will force someone
to acknowledge that their tweet violated our rules,
force them to delete that tweet before they can get back on the platform.
And in many cases, if they do it again,
we give them a timeout, which is like seven days,
and we say, look, you've done it again.
It's a temporary suspension.
If you do it again...
Timeout, you're a mom.
I'm totally a mom, exactly.
A new mom, too.
And if you do it again, then you're done.
So it's kind of like, you know, three strikes.
Sort of like baseball.
And so in some of these cases that Tim's referencing, I have to imagine, because these tweets were deleted.
They are violations of our rules.
People are upset that the account came back again and was allowed to say other things.
But we did take action on those tweets.
They were violations of our rules.
And then you have people like Milo, who is mean to a person, and you banned him permanently.
There's a little more to that.
Actually, Tim, let's talk about it.
I'm happy to talk about Milo, and I actually brought the tweets.
So let's preface that by saying the point I want to make sure is clear
is that you had somebody who actively called for the death of people.
I understand the context issue.
Maybe he's talking about video games.
Context and scale.
And scale.
So this is a verified user.
And that's just the complexity in acting.
It's not an excuse for why we didn't do it in a particular time.
There are a lot of other examples, too, that get into more egregious areas that I've prepared.
So here we have someone with over 20,000 followers.
He's verified numerous times, incites his followers to commit a crime against these kids.
The action taken against him is delete the tweets.
You get a suspension.
You get timeout.
Then you have people like Alex Jones, who berated a CNN reporter: permanently banned. You get Milo
Yiannopoulos: he was mean, permanently banned.
But that's your impression. That's not what happened.
Okay, I'm here to talk about the details if you want to.
Yeah, let's, please. Let's do this
one at a time. Let's start with Milo. So what were the details of Milo?
So Milo had a number of tweets
that violated our rules going back to 2014.
But I'm going to talk about the final three in this three-strike concept.
He claimed to be a BuzzFeed reporter in his bio, and he's a verified account.
So that is impersonation.
I'm not sure why he did that.
He did do that.
Well, BuzzFeed's a left-wing thing, so he was doing parody.
Potentially, but our parody rules are very specific,
that if you have an account that is a parody account, you need to say
that it is a parody account.
Right. But everybody who knows Milo would know that he's not a BuzzFeed reporter.
But people who don't know Milo will look at that verified account and say...
But he wasn't
verified after a while. You removed his verification.
Because he violated our rules around verification.
So the verification was removed because of the BuzzFeed thing?
I believe so.
I'd have to confirm that, but I believe so.
He also doxed someone.
He posted private information about an individual.
So that was the second one.
He tweeted to somebody else,
If you were my child, I'd have dashed your head on a rock and tried again
which we viewed as a threat.
Really? That's, that seems like he's saying, like, your mom should have
swallowed you. You know, it's like, you know what I'm saying? He's like, you're a mistake. I don't
think that's a threat.
I understand why reasonable people would have different impressions of this.
I'm just going through and telling you what they are, just so we can have all the facts on the table, and then we
can debate them. And then the last one: we found a bunch of things that he posted that we viewed as
incitement of abuse against Leslie Jones. So there's a bunch of them, but the one that I like to
look at, which really convinced me, is he posted two doctored tweets that were supposedly by Leslie Jones.
They were fake tweets.
The first one said,
white people are getting on my nerves.
Like,
how can you call yourself human?
And then the second one said,
the goddamn slur for a Jewish person at Sony ain't paid me yet.
Damn Bix nude better pay up.
So this was just a fake tweet that someone had photoshopped?
They were two fake tweets. And we know they were faked because we could still tell
from the software that they were faked. You can't always tell.
So it is possible that he didn't
know they were faked. It's possible someone sent it to him and he didn't do his due diligence in looking it up.
It is possible. But it was pointed out to him that they were fake,
because he left it on. And not only did he leave it on, he said, don't tell me some mischievous
internet rascal made them up, exclamation point. So this, in the context of a bunch of other things he
was saying towards Leslie Jones on Twitter, I and my team felt that this was, taken
as a whole, incitement of harassment against her.
Wasn't there another issue with multiple accounts that were connected to him?
There were a bunch of other issues in the background, but these are
the three primary things that we looked at.
In terms of the other things that were in the background, weren't there multiple accounts that were connected to him?
Like, I think that, I'm not sure about that, Joe. I think it was
more that we found him to be engaging in coordinated behavior and inciting people to attack Leslie Jones.
Now, with a case like him, no, I'm just going to be honest, when I'm listening to those, or
listening to you read those
tweets out, they don't sound that bad. And they certainly don't sound as bad as calling for the
death of a child who's wearing a MAGA hat and throwing him into a wood chipper. The fact that that
guy's still out there tweeting and yet Milo's not... Milo's initial, the whole thing stemmed from,
other than the BuzzFeed thing, stemmed from his legitimate criticism of a film. And he's a satirist.
He was mocking this film.
The doxing incident wasn't related to the film.
The doxing incident.
Why don't we –
I hope we all agree that doxing is something that Twitter should take action on.
A hundred percent.
And it can threaten people in real life.
And I take an enormous amount of responsibility for that
because I fear daily for the things that are
happening on the platform that are translating into the real world. So Milo is a contentious
figure. And there's certainly things you can pull up that I wouldn't agree with anything he did
there. I think those are horrible. I think Joe brought some really good points. But what about
Chuck Johnson? Why was Chuck Johnson banned? I don't have those details in front of me.
Chuck Johnson said that he was preparing something to take out DeRay McKesson, and in a journalistic context, people take
this to mean he was going to do a dossier or some kind of hit piece on DeRay. He was permanently
banned. And my understanding, and it's been a long time since I've read this, there were some leaked
emails, I think from Dick Costolo, where he said, maybe it wasn't Dick, I don't want to drag Dick,
I don't know who it was exactly, they said, I don't care. Just get rid of him. And he was off. And again, maybe there's some hidden context there. I don't know.
The concern is that this is always leaning towards the left.
Oh, it absolutely is. And I'm not even getting started.
I understand why you feel that way. I don't think that's true. I think we look at each individual instance of violations of our rules and try to make the best case that we can. And I'm not trying, and I do
think, Joe, just to say, I do think we've failed in a couple of ways. And I want to admit that.
Number one, we haven't done enough education about what our rules are. Because a lot of people
violate our rules, and they don't even know it. Like some of the statistics that we've looked at,
like for a lot of first time users of the platform, if they violate the rule once, almost two thirds of them
never violate the rules again. So we're not talking about like a bunch of people accidentally,
like if they know what the rules are, most people can avoid it.
And most people when they feel the sting of a violation, they go, okay, I don't want to lose
my rights to post. Exactly. And they're able to do it. So we have a lot of work to do on education.
So people really understand what the rules are in the first place. The other thing we have to do to address these
allegations that we're doing this from a biased perspective is to be really clear about what
types of behavior are caught by our rules and what types are not and to be transparent within
the product. So when a particular tweet is found to be in violation of our rules, being very,
very clear, like, this tweet was found
to be in violation of this particular rule. And that's all work that we're doing. Because we think
the combination of education and transparency is really important, particularly for an open
platform like Twitter. It's just part of who we are, and we have to build it into the product.
I appreciate that. Your particular thoughts, though, on those examples that he described,
when he's talking about someone saying they should throw these children into a wood chipper
versus Chuck Johnson saying he wants to prepare a dossier to take this guy out. Or how did he say it? He said something like, I'm going to take out DeRay McKesson. He said, I'm preparing to take out DeRay, something like that, I can't remember.
Preparing to take him out. I can understand how.
So it could be misconstrued as he was trying to assassinate him.
Right. You could misconstrue it that way. But not a direct threat.
But the other one's a direct threat.
One guy is banned for life.
The other guy is still posting.
And we can, I'm happy to follow up.
I just don't have all the Chuck Johnson.
It's not about one thing.
As I said, it's about a pattern and practice of violating our rules.
And we don't want to kick someone off for one thing.
But if there's a pattern and practice like there was for Milo, we are going to have to take action at some point because we can't sit back and let people be abused and harassed and silenced on the platform.
So one really important thing that needs to be stated is that Twitter, by definition, is a biased platform in favor of the left, period.
It's not a question.
I understand you might have your own interpretation, but it's very simple.
Conservatives do not agree with you on the definition of misgendering.
If you have a rule in place that specifically adheres to the left ideology,
you, by default, are enforcing rules from a biased perspective.
Well, Tim, there are a lot of people on the left who don't agree with how we're doing our job either.
For sure.
And those people think that we don't take enough action on abuse and harassment,
and we let far too much behavior go.
But that's a radical example, though.
I mean, what he's talking about, I mean, in terms of generalities,
in general, things lean far more left.
Would you agree to that?
I don't know what that means.
But in this particular case, it's how the speech is being used.
This is a new vector of attack that people have felt that,
I don't want to be on this platform anymore because I'm being harassed and abused,
and I need to get the hell out.
Well, people harass and abuse me all day and night.
You don't do anything about that.
My notification's permanently locked at 99.
You have it worse than I do. I mean, you've got substantially more followers. And I don't click the
notification tab anymore because it's basically just harassment.
So this is a really funny anecdote. I was covering a story in Berkeley, and
someone said, if you see him, attack him. I'm paraphrasing.
They said, basically, to swing at me, take my stuff, steal from me.
And Twitter told me after review, it was not a violation of their policy.
Somebody made an allusion to me being a homosexual.
And I reported that, instantly gone.
So when I saw that, for me, I'm like, well, of course, of course Twitter is going to enforce the social justice aspect of their policy immediately, in my opinion, probably because you guys have PR constraints and you're probably nervous about that.
But when someone actually threatens me with a crime and incites their followers to do it, nothing got done.
And I'm not the only one who feels that way.
Well, Tim, that's a mistake.
If someone acts in that manner and threatens to hurt you, that's a violation of our rules.
Maybe there was a mistake there, and I'm happy to go and correct that, and we can do it offline, so
don't fear any sort of reprisal against you.
But that's a mistake. That's not
an agenda on my part or in the team's part.
Would this be a manual?
We don't have any PR constraints.
So why did you ban Alex Jones?
You want to get into that?
Absolutely. Are you ready for Alex Jones?
Oh, I've been ready for Alex Jones.
Let me pull this up.
Well, let me say this. The reason I bring him up is that Oliver Darcy,
one of the lead reporters covering Alex Jones and his content, said on CNN
that it was only after media pressure did these social networks take action.
So that's why I bring him up specifically, because it sort of implies you were under PR constraints
to get rid of him. I think if you look at the PR that Twitter went through in that incident, it wouldn't be that we looked good in it.
And that's not at all why we took action on this.
Sorry, but you have to look at the full context on the spectrum here.
Because one of the things that happened over a weekend is what Alex mentioned on your podcast with him.
He was removed from the iTunes podcast directory.
That was the linchpin for him, because it drove all the traffic to, as he said, basically zero.
Immediately after that, we saw our peer companies, Facebook, Spotify, YouTube, also take action.
We did not. We did not because when we looked at our service
and we looked at the reports on their service, we did not find anything in violation of our rules.
Then we got into a situation where suddenly a bunch of people were reporting content on
our platform, including CNN, who wrote an article about all the things that might violate
our rules that we looked into. And we gave him one of the warnings. And then we can get into
the actual details. But we did not follow, we resisted just being like a domino with our peers
because it wasn't consistent with our rules and the contract we put in front of our customers.
So what was it that made you ban them?
So there were three separate incidents that came to our attention after the fact that were reported
to us by different users. There was a video that was uploaded that showed a child being
violently thrown to the ground and crying. So that was the first one. The second one was a video
that we viewed as incitement of violence. I can read it to you. It's a little bit of a transcript,
but: Now it's time to act on the enemy before they do a false flag. I know the Justice Department's crippled, a bunch of followers and cowards, but there's groups, there's grand juries, there's... you called for it. It's time politically, economically, and judiciously, and legally, and criminally to move against these people. It's got to be done now. Get together the people you know aren't traitors, aren't cowards, aren't hedging their freaking bets like all these other assholes do, and let's go, let's do it. So people need to have their... And then there's a bunch of other stuff, but at the end: So people need to have their battle rifles ready and everything ready at their bedsides, and you've got to be ready, because the media is so disciplined in their deception.
So this is, you're saying that this is a call to violence against the media.
That's what it sounded like to us at the time.
And there have been a number of incidents of violence against the media.
And again, I take my responsibility for what happens on the platform and how that translates off platform very seriously.
And that felt like it was an incitement to violence.
So if he only tweeted the incitement to violence, he would have been fine?
If he only tweeted that transcript saying,
get your battle rifles ready, you wouldn't have deleted his account.
Again, context matters to us. It's not about one thing.
So we'd have to look at the entire context of what's going on.
So I'm asking, was that egregious enough for you to say that alone gets him banned?
That wasn't the final. Right, right. So then I guess the question is, what was the video
context of the kid being thrown to the ground? Was it newsworthy? We obviously didn't think so.
And depicting violence against a child is not something that we would allow on the platform,
even if it's news content. If it was, there are certain types of situations where if you were
reporting on, you know, war zone and things that might be happening, we would put an interstitial on that type of content that's graphic or violent.
But we didn't feel that that was the context here.
Well, there's a video that was going around four or five weeks ago. The one where the girls were yelling at that big giant guy, and the guy punched that girl in the face, and she was like 11 years old. I saw that multiple times on Twitter. That was one of the most violent things I've ever seen. This giant man punched this 11-year-old girl in the face. Was that removed from Twitter?
I don't know. I would have to go see if anyone reported it to us.
I think one of the issues here, too, is, you know...
Do you want me to get to the third one?
Yeah.
So the third strike that week we looked at was a verbal altercation that Alex got into with a journalist, which was uploaded to Twitter. In that altercation, there were a number of statements used: eyes of the rat, even more evil-looking person, he's just scum, you're a virus to America and freedom, smelling like a possum that climbed out of the rear end of a dead cow, you look like a possum that got caught doing some really, really nasty stuff, in my view. So it was a bunch of...
That's enough, really? That's hilarious.
A pattern and practice. But it was a verbal altercation that was posted on our platform. So we took the totality of this, having been warned that we have rules against abuse and harassment of individuals. We saw this pattern and practice, one strike, two strikes, three strikes, and we made a decision to permanently...
And so that last one was on Periscope? Is that what it was, that he broadcast through?
I think it was originally on Periscope, but it was also reposted from multiple related accounts onto Twitter.
So we can
agree with you when you say these things like, you know, Alex said this,
it sounds like a threat. He was berating this person saying awful things. But ultimately,
your judgment is the context. You say we have to pay attention to the context. We're just trusting
that you made the right decision. Well, I'm giving you as much facts as I can give you here. And I
think that this is a real hard part of content moderation at scale on global platforms.
It's not easy.
And I don't think Jack or I would tell you that it's easy.
It's a preposterous volume you guys have to deal with.
And that's one of the things that I wanted to get into with Jack when I first had him on.
Because my thought... and I wasn't as concerned about the censorship as many people were. My main concern was, what is it like to start this thing that's kind of for fun, and then all of a sudden it becomes the premier platform for free speech on the planet Earth?
Which it is. It is that. But it's also a platform that's used to abuse and harass a lot of people, and used in ways that none of us want it to be used. But nonetheless it happens. And I think it's an enormously complicated challenge for any company to do content moderation at scale. And that's something that we are sitting down thinking about: how do we take this forward into the future? Because this doesn't scale.
So let's take the other context now. We've heard what you said about why what Alex Jones did was bad. Now we can look at it this way. Oliver Darcy, who has on numerous occasions insulted conservatives, recently on CNN called them gullible, being sold red meat by grifters, repeatedly covers a story.
I'm going to do air quotes because I think to an extent he is allowed to cover these stories.
He keeps going after Alex Jones. He keeps digging through his history. Then he goes on TV and says,
we got him banned. Then Alex Jones confronts him in a very aggressive and mean way. And that's
your justification for, or I should say I inverted the timeline.
Basically, you have someone who's relentlessly digging through your stuff, insulting you, calling you names, sifting through your history, trying to find anything they can to get you terminated, going on TV, even writing numerous stories.
You confront them and say you're evil, and you say a bunch of really awful mean things, and then you ban him.
And then you post that information all over the internet.
Right.
But you have a journalist who recently went on TV and said CPAC is a bunch of gullible
conservatives being fed red meat by grifters.
You can tell this guy's not got an honest agenda.
So what you have, to me, it looks like the conservatives to an extent probably will try
and mass flag people on the left.
But from an ideological standpoint, you have the actual, you know, whatever people want to call it, sect of identitarian left that believe free speech is a problem, that have literally shown up in Berkeley burning free speech signs.
And then you have conservatives who are tweeting mean things.
And the conservatives are less likely, and I think it's fair to point out, less likely to try and get someone else banned
because they like playing off them, and the left is targeting them. So you end up having
disproportionate... I feel like there are a lot of assumptions in what you're saying, and I don't know
what basis you're saying those things. I mean, you have conservatives demanding free speech,
and you have liberals, I shouldn't say liberals, you have what people refer to as the regressive
left calling for the restrictions on speech. You have these... I don't know what those terms mean, to be honest with you.
We have people on all sides of the spectrum who believe in free speech.
And I believe that to be the case.
So your platform restricts speech.
Our platform promotes speech unless people violate our rules.
And in a specific direction.
In any direction.
But uncle, I don't want to say his name.
The guy who calls for death gets a suspension.
The guy who insinuates death gets a permanent ban.
But Tim, you're misinterpreting what I'm saying.
And I feel like you're doing it deliberately.
It's not about one particular thing.
It's about a pattern and practice of violating our rules.
And you have a pattern and practice of banning only one faction of people.
I don't agree with that.
I recently published an article where they looked at 22 high-profile bannings from 2015
and found 21 of them were only
on one side of the cultural debate. But I don't look at the political spectrum of people when I'm
looking at their tweets. Right, you have a bias. I don't know who they are. You're biased and you're
targeting specific individuals because your rules support this perspective. No, I don't agree with
that. So can you be clear though in like what rules support that perspective? Specifically,
the easiest one is misgendering, right? Because that's so clearly ideological.
If you ask a conservative, what is misgendering?
They'll say if someone is biologically male and you call them, you know, she, a biologically
male, you call them a she, that's misgendering.
That's a conservative view.
The progressive view is inverted.
So now you actually have in your policies a rule against the conservative perspective.
I have a rule against the abuse and harassment of trans people on our platform.
That's what my rules are.
Can we just give context in the background as to why that is?
Yeah, and the why that is.
And I brought some research.
So we obviously received a lot of feedback.
So we don't make these rules in a vacuum, just to be clear.
We have a bunch of people all around the world who give us context on the types of behavior they're seeing, how that translates into real world harm.
And they give us feedback and they tell us, like, you should consider different types of rules, different types of perspectives, different.
Like, for example, when we try to enforce hateful conduct in our hateful conduct policy in a particular country, we are not going to know all the slur words that are used to target people of a particular race or particular religion. So
we're going to rely on building out a team of experts all around the world who are going to
help us enforce our rules. So in the particular case of misgendering, I'm just trying to pull up
some of the studies that we looked at. But we looked at the American Association of Pediatrics and looked at the number of transgender youths that were committing suicide. It's astronomical. I'm sorry, I can't find it right now in front of me. It's a really, really high statistic, like 10 times what the normal suicide rate is for teenagers. And we looked at the causes of why that was happening. And a lot of it was not just violence towards those individuals,
but it was bullying behavior. And what were those bullying behaviors that were contributing to that? And that's why
we made this rule. Because we thought, and we believe that those types of behaviors were
happening on our platform, and we wanted to stop it. Now, there are exceptions to this rule.
We don't... and this isn't about public figures. There are always going to be public figures that you're going to want to talk about. And that's fine.
But this is about, are you doing something with the intention of abusing and harassing a trans person on the platform?
And are they viewing it that way and reporting it to us so that we take action?
So I will just state: I actually agree with the rule, from my point of view. I agree that bullying and harassing trans people is entirely wrong. I disagree with it. But I just want to make sure it's clear to everybody who's listening. My point is simply that Ben Shapiro went on a talk show and absolutely refused. And that's his shtick. And he's one of the biggest podcasts in the world. So you have all of his millions upon millions of followers who are looking at this rule, saying this goes against my view of the world. And it's literally 60-plus million in this country. You do have a rule that's ideologically bent. And it's true, you did the research, you believe this. Well, then you have Ben Shapiro, who did his research and doesn't believe it.
Yeah, and I relied on the American Association of Pediatrics and, you know, the Human Rights Council and others. And I'm sure he has his sources too for when he gives his statements.
The point is...
But I just wonder if they have that context. I mean, and that's where we have also failed as well, in just explaining the why behind a lot of our policies and reasons.
I would agree.
And I think it's fine.
You did research and you found this to be true. But we can't simply say maybe Ben Shapiro and the other conservatives who feel this way don't know. The point I'm trying to make is simply this: whether you believe it, whether you're justified or not, is not the point. The point is, you do have this rule, and that rule is at odds with conservatives, period.
Well, I think that you're generalizing. But I think it is really important, as Jack said, to get to the why behind these things. The why is to protect people from abuse and harassment on our platform.
I understand, but you've essentially created a protected class, if this is the case. Because despite these studies, and what these studies are showing, there's a gigantic suicide rate amongst trans people, period. It's a 40 percent... it's outrageously large. Now, whether that is because of gender dysphoria, whether it's because of the complications from sexual transition surgery, whether it's because of bullying, whether it's because of this awful feeling of being born in the wrong gender, all that is yet to be determined. The fact that they've shown that there's a large amount of trans people that are committing suicide, I don't necessarily think that that makes sense in terms of, from someone's perspective like Ben Shapiro's, saying that if you are biologically female, if you are born with a double X chromosome, you will never be XY.
If he says that, that's a violation of your policy.
And this is, you're creating a protected class.
To be fair, targeted.
If he wants to express that opinion, he is fully entitled to express that opinion.
If he's doing it in a manner that's targeted at an individual.
Repeatedly.
Repeatedly.
And saying that.
Okay, but what about,
that's where the intent
and the behavior is at.
You know what's going on
with Martina Navatarova right now.
Martina Navatarova,
why can't I say her last name?
Yeah, I don't know.
Navatro,
I don't think I've ever said it.
Martina Navatalova.
Is it Talova or Trelova?
Trelova.
Anyway,
epic world class
legend tennis player, right? Who happens to be a lesbian,
is being harassed because she says that she doesn't believe that trans women,
meaning someone who is biologically male, who transitions to a female,
should be able to compete in sports against biological females.
This is something that I agree with.
This is something I have personally experienced a tremendous amount of harassment over, because I stood up when there was a woman, a trans woman, who was fighting biological females in mixed martial arts fights and destroying these women. And I was saying, would you just watch this and tell me this doesn't look crazy to you?
Well... go ahead.
Well, my point is, you should be able to express yourself.
And if you say that you believe someone is biologically male, even though they identify as a female, that's a perspective that should be valid.
I mean, this is someone's, this is, first of all, it's biologically correct. So we have a problem in that if your standards and your policies are not biologically accurate,
then you're dealing with an ideological, you know, an ideological policy.
And just because, I mean, I don't want to target trans people.
I don't want to harass them.
I certainly, I'll call anybody whatever they want.
I mean, if you want to change your name to a woman's name and identify as a woman, I'm 100 percent cool with that. But by saying, I don't think that you should be able to compete as a woman, this opens me up for harassment. And I never reported any of it, I just don't pay attention to it. But going into, like, Megan Murphy, for instance, right? You can call that targeted harassment. Megan Murphy, for those that don't know, is a radical feminist who refuses to use the transgender pronouns. If she's in an argument with a trans person over whether or not they should be allowed in sports or in biologically female spaces, and she refuses to use their pronoun because of her ideology, you'll ban them?
Again, it depends on the context on the platform.
And it's also, I want to... not banned permanently. Like, you get warnings. She was banned permanently.
But let's be clear about what happened.
But it was explained, because you explained it to me. What did she actually do?
My understanding, and I don't have the tweet-by-tweet the way that I did for the others, but my understanding is that she was warned multiple times for misgendering an individual that she was in an argument with. And this individual is actually bringing a lawsuit against her in Canada as well.
So you have an argument between two people, again, and you have a rule that enforces only one side of the ideology, and you've banned only one of those people.
We have a rule that attempts to address what we have perceived to be instances of abuse and harassment.
It's your ideology, right? But it is an ideology, right? If she's saying a man is never a woman, if that's what she's saying, then biologically she's correct. We obviously have a debate here. This is not clear-cut. This is not something like you could say water is wet, you know, this is dry. This is not something you can prove. This is something where you have to acknowledge that there's an understanding that if someone is a trans person, we all agree to consider them a woman, and to think of them as a woman, to talk to them and address them with their preferred name and their preferred pronouns. But biologically, this is not accurate.
So we have a divide here.
We have a divide between the conservative estimation of what's happening and then the
definition that's the liberal definition of it.
I think that's right, Joe.
And I think what I'm trying to say is that it's not that you can't have those viewpoints.
It's that if you're taking those viewpoints and you're targeting them at a specific person
in a way that reflects your intent to abuse and harass them.
What if it's in the context of the conversation?
What if she's saying that I don't think that trans women should be allowed in these female spaces to make decisions for women?
And then this person's arguing and she says a woman is biologically female.
You are never going to be a woman.
She responded with men aren't women, though.
And that was her first in the series of events.
That's what got her the suspension and the warning.
That was one of many tweets that was part of providing context.
And that was actually the second strike, is my understanding.
But why is that a strike?
Yeah, why is that a strike?
But again, like, it's the context of, I don't have all the tweets in front of me.
There were like 10 or 12 tweets going back and forth.
And my understanding is that in the context of all of those, she was misgendering a particular person. Not that she was holding a belief or statement.
It was a public figure though, wasn't it?
I don't know.
It was. So you're having an individual who is debating a high-profile individual in her community, and she's expressing her ideology versus hers, and you have opted to ban one of those ideologies.
It's within the context of this conversation. This is what is being debated: whether or not someone is, in fact, a woman when they were born a male. I understand that this is controversial, I do, especially to a radical feminist. I understand why people would not agree with the rule.
But that being said, it is a rule on our platform. And once you're warned about the rule, to repeatedly post the same content is also going to be a violation of our rules.
Right, but the rule... this seems like a good example of an ideologically based rule. If she's saying that a man is never a woman, that is not, in that context, harassment. That is a very specific opinion that she has that happens to be biologically accurate. Now, I don't agree with targeted harassment of anybody, whether it's targeted harassment of trans people or straight people or whatever. I don't agree with it, I don't think you should do it, it's not something I want to do. But in this context, what she's saying is not just her expression, it's accurate.
I think an important point is if I tweeted to you, Joe, Joe, you are not a hamster.
That's clearly not a violation of the rules.
However, there are –
What if I identify as a hamster?
Well, no, it wouldn't be.
Because I know people who have specifically begun using insults of animals to avoid getting kicked off the platform for breaking the rules.
Certain individuals who have been suspended now use certain small woodland creatures in place of slurs.
So they're not really insulting you.
And it's fine.
But there are people who consider themselves trans species.
Now, I'm not trying to belittle the trans community by no means.
I'm just trying to point out that you have a specific rule for one set of people. So there are people who have general body dysphoria. You don't have
rules on that. There are people who have actually amputated their own arms. You don't have rules on
that. You have a very specific rule set. And more importantly, in the context of a targeted
conversation, I can say a whole bunch of things that would never be considered a rule break,
but that one is, which is ideologically driven.
Yeah, thank you for the feedback.
I mean, we're, again, always learning and trying to understand different people's perspectives.
And all I'll say is that our intent is not to police ideology.
Our intent is to police behaviors that we view as abuse and harassment.
And I hear your point of view, and it's something that I'll definitely discuss with my team.
And even in this case, it wasn't just going against this particular rule, but also things that were more ban evasive as well, including taking a screenshot of the original tweet, reposting it, which is against our terms of service as well.
So it's more the actions.
It sounded like a protest against your rule.
I understand you could ban them for it.
But people can protest any one of our rules.
We can't let them do that.
No, no, no.
They can protest any of them.
I understand what you're saying, but I just want to make sure I point out she was clearly doing it as an effort to push back on what she viewed as an ideologically driven rule.
Well, the problem is, this is a real debate in the LGBT community. This is a debate where there is a division, and the division is between people that think that trans women are invading biological female spaces and making decisions that don't benefit these biological females, cisgender women, whatever you want to call them. This is an actual debate. And it's a debate amongst progressive people, amongst left-wing people, and it's a debate amongst liberals. I mean, I would imagine the vast majority of people in the LGBT community are, in fact, on the left. And this is one example of that. So you have a protected class that's having an argument with a woman who feels like there's an ideological bent to this conversation that is not only not accurate but not fair.
And she feels like it's not fair for biological women.
The same as Martina.
Well, I'll take this to its logical conclusion.
I got sent a screenshot from somebody, and maybe it's faked.
I think it was real.
They were having an argument with someone on Twitter and responded with "dude, comma, you don't know," blah blah, and they got a suspension and a lockout, had to delete the tweet. Because the individual, using a cartoon avatar, with the name apparently Sam, reported it and said, I'm transgender and he's calling me dude. And the Twitter user actually got a suspension for it. So I can understand mistakes happen. But when you have a rule that's like that... there's colloquial terms that are like, man, come on, don't say that. Dude is, like, we say... Like, I asked you guys when you were going to take a photo in front of this thing, I said, "guys," but I included you.
And I wasn't offended.
Thank you for not being offended.
And I wouldn't have reported you for it.
Thank you.
Yeah, it's tricky, but in this case of Megan Murphy, that's her name, right?
Yeah.
Yeah, that doesn't make any sense to me.
That seems like she should be allowed to express herself.
And this is not being, she's not being mean by saying a man is never a woman.
This is a perspective that is scientifically accurate.
And that's part of the problem.
I just don't want to run into beating a dead horse, so I think I want to move on.
It's a really important thing to go over, all the nuances of this particular subject, because I think that one in particular highlights this idea of where the problems lie in having a protected class.
And I think we should be compassionate. We have a lot of protected classes: gender, race, nationality. Like, these are the protected...
But it's not for white people, when you say gender or race.
It's not... it's all protected categories. So you can't attack someone for their belonging to a particular race or a particular religion.
But you can mock white people ad nauseam. It's not a problem, it doesn't get removed.
I'm not talking about mocking, I'm talking about abusing and harassing somebody.
But, I mean, if you mock a black person in the same way, it would be considered targeted racism.
Again, it's about targeted harassment on the platform.
Well, I mean, what is targeted harassment? Okay, like, what is racism? Is racism only... I mean, there's this progressive perspective of racism, that it's only possible if you're from a more powerful class, it's only punching down, that's the only racism. I don't think that makes any sense. I think racism is looking at someone that is from whatever race and deciding that they are, in fact, less, or less worthy, or less valuable, whatever it is.
That takes place across the platform against white people.
Now, I'm not saying white people need to be protected.
I know it's easier being a white person in America.
It's a fact.
But it's hypocritical to have a policy that only distinguishes you can make fun of white people all day long, but if you decide to make fun of Asian folks or fill in the blank, that is racist.
But making fun of white people isn't, and it doesn't get removed.
There are tons of... how about Sarah Jeong from the New York Times?
Well, I can actually explain that one.
Please do.
My understanding is that you guys started banning people officially under these policies around 2015.
And all the tweets she made was prior to that.
And so you didn't enforce the old tweets.
Yeah, so our hateful conduct policy, Joe, just to be clear, is across the board.
Meaning, like, it doesn't just protect women.
It protects men and women.
It protects all races.
It doesn't matter.
And this is how the law is set up in the United States, right?
You can't discriminate against white men.
You can't discriminate against black men.
Like, those are the laws, right?
Like, that's the structure it is.
It doesn't take into consideration power dynamics.
If someone says something about white people and mocks white people on Twitter, what do
you do about that?
If it's targeted harassment.
Targeted.
At a person.
So, just white people in general.
If you say something about white people in general, that's not an issue?
Well, I mean, we focus on targeted harassment, which is behavior that is targeted against an individual who belongs to that class.
Okay.
Because if you try to police every opinion that people have about different races or religions, like, obviously, that's a very different story.
So this is about if you target that to somebody who belongs to that class and that's reported to us, that is a violation of our rules.
And so in the Sarah Jeong case, we did see many tweets of that nature that were focused on people who are white or men.
And our rules in this area came into effect in 2015,
which was our hateful conduct policy.
And a lot of those tweets were from a time period where those rules weren't in effect.
And in her defense, she was actually supposedly responding to people that have – you don't believe that?
Oh, come on.
Over three years?
And she's tweeting blanket statements about white –
Yeah.
Sure, sure.
So I will say, too, obviously I've done a ton of –
Can I just finish on my one point?
So in that case, there were tweets from before the rules went into effect and tweets from after the rules went into effect.
And we did take action on the tweets from after the rules went into effect.
She's also pretty young.
But so, I want to point...
Yeah, she's in her 20s.
Yeah, so we're talking about something that might have happened eight years ago.
Right, right. It was like 2011 to '13. But I do want to point this out. Before coming on, I obviously did a decent amount of research. I searched for slurs against white people, black people, Latinos, and I found copious, just tons and tons of them. Now, they don't go back... most of what I found didn't go back too far, because it does seem like you guys are doing your best. But there's a lot, and it targets white people, black people, Jewish people. It's everywhere. And I can understand that, you guys, you've got hundreds of millions. But let's try another subject.
Just to address that point, and I think Jack talked about this a little bit.
Like this is where right now we have a system that relies on people to report it to us, which is a huge burden on people.
And especially if you happen to be a high-profile person, and Tim, you would understand this.
You're not going to sit there and report every tweet.
And Joe, you'll understand this.
I ignore it.
Like, it's not worth your time.
You're not going to go through tweet by tweet
as people respond to you and report it.
People tell us this all the time.
So this is where we have to start getting better
at identifying when this is happening
and taking action on it
without waiting for somebody to tell us it's happening.
But using an algorithm, though,
do you not miss context?
I mean, it seems to me that there's a lot of people
that say things in humor.
Or slurs within particular communities, which is perfectly reasonable.
Right, right. So yes, there is a danger of the algorithm missing context, and that's why we really want to go carefully into this. And this is why we've scoped it down, first and foremost, to doxing, which, at least first, hits our number one goal of protecting physical safety: making sure that nothing done online will impact someone's physical safety offline, on our platform, in this case.
The second is that there are patterns around doxing that are much easier to see without having the context. There are exceptions, of course, because you could dox someone's public, you know,
a representative's public office phone number and email address, and the algorithm might catch that,
not have the context that this is a U.S. representative, and this information is already
public. So essentially, it just highlights how insanely difficult it is to monitor all of these posts. And then what is the volume? Like, what are we dealing with? Like, how many posts do you guys get a day?
Hundreds of millions of posts a day.
And how many human beings are manually reviewing any of these things?
I don't have that number.
A lot? A lot? Thousands? Hundreds of thousands? How many employees do you guys have?
We have 4,000 employees around the world.
That's it?
Yeah.
We have 4,000 employees.
The reason –
That's crazy, though, but stop and think about that.
4,000 people that are monitoring hundreds of millions of tweets?
No, no, no.
We have a small team who's monitoring tweets, and some of them are employed by us.
Some of them are contractors throughout the world.
So 4,000 employees total?
4,000 employees who are engineers, who are designers, who are lawyers, policy experts.
So the number of people actually monitoring tweets is probably less than 1,000?
Well, the reason we don't give out specific numbers is we need to scale these dynamically.
If we see a particular event within a country, we might hire 100 more people on contract to deal with it.
Right.
Whereas they may not be full-time and with us the entire time.
And would they have the ability to take down tweets?
So as we get reports, it goes into a queue, and those are ranked by severity.
And then we have people who look at our rules and look at the tweets and look at the behavior
and the context around it.
And they have the ability to go down that enforcement spectrum that Vijay talked about.
One, make people log in, read why it's a violation over the tweet, and delete it.
Two, temporary suspensions.
And finally, a permanent suspension, which is the absolute last resort, which we ultimately
do not want to do.
We want to make sure that our rules
are also guided towards incentivizing more healthy conversation and more participation.
So let me ask you, the rules you have are not based in U.S. law, right? U.S. law doesn't
recognize restrictions on hate speech. It's considered free speech. So if you want to stand
in a street corner and yell the craziest things in the world, you're allowed to. On your platform,
Twitter, you're not allowed to. So even in that sense alone, your rules do have an ideology
behind them. I don't completely disagree. I think, you know, I don't want harassment.
But the reason I bring this up is getting into the discussion about democratic health of a nation.
So I think it can't be disputed at this point that Twitter is extremely powerful in influencing
elections.
You know, I'm pretty sure you guys published recently a bunch of tweets from foreign actors that were trying to meddle in elections. So even you as a company recognize that foreign entities
are trying to manipulate people using this platform. So there's a few things I want to ask
beyond this, but wouldn't it be important then to just, at a certain point, Twitter becomes so
powerful in influencing
elections and giving access to even the president's tweets that you should allow people to use
the platform based on the norms of U.S. law.
First Amendment, free speech, right to expression on the platform.
This is becoming too much of a, it's becoming too powerful in how our elections are taking
place.
So even if you are saying, well, hate speech is our rule and a lot of people agree with it,
if at any point one person disagrees, there's still an American who has a right to this,
you know, to access to the public discourse.
And you've essentially monopolized that and not completely, but for the most part.
So isn't there some responsibility on you to guarantee it to a certain extent, lest regulation happen, right?
Like, look, if you recognize foreign governments are manipulating our elections, then shouldn't you guarantee the right to an American to access this platform to be involved in the electoral process?
I'm not sure I see the tie between those things, but I will address one of your points, which is we're a platform that serves the world.
So we're global.
Seventy-five percent of the users of Twitter are outside of the United States.
Right, right, right.
So we don't apply laws of just one country when we're thinking about it.
We think about how do you have a global standard
that can meet the threshold of as many countries as possible
because we want all the people in the world to be able to participate in this conversation. And also meet elections like the Indian election
coming up as well. Right. And my understanding is you were also accused of being biased against
conservatives in India recently. There was a report on that, as well as you held up a sign
that said something offensive about the Brahmin. Yeah. So in that sense, even in other countries,
you're accused of the same things that you're being accused of by American conservatives.
I think that the situations are very, very different.
And I don't think that the ideologies in play are the same at all.
Well, so the reason I bring up –
Can we clarify that? Because I'm not aware of the case.
I'm not sure what you're talking about, but we did have our vice president of public policy testify in front of Indian parliament a couple of weeks ago, and they were really focused on election integrity and safety and abuse and harassment of women and political figures and the likes.
So my concern, I guess, is I recognize you're a company that serves the world, but as an American, I have a concern that the democracy I live in, the Democratic Republic, I'm sorry, and the democratic functions are healthy. One of the biggest threats is, you know,
Russia, Iran, China, they're trying to meddle in our elections using your platform, and it's
effective so much so that you've actually come out and removed many people. You know, Covington
was apparently started by an account based in Brazil. You know, the Covington scandal, where this fake
news went viral: it was reported by CNN that it was a dummy account.
They were trying to prop it up, and they were pushing out this out-of-context information.
So they do this.
They use your platform to do it.
You've now got a platform that is so powerful in our American discourse that foreign governments are using it as weapons against us,
and you've taken a stance against the laws of the United States.
I don't mean like against, like you're breaking the law.
I mean, you have rules that go beyond the scope of the U.S., which will restrict
American citizens from being able to participate.
Meanwhile,
foreign actors are free to do so,
so long as they play by your rules.
So our elections are being threatened by the fact that if there's an
American citizen who says,
I do not believe in your misgendering policy and you ban them,
that person has been removed from public discourse on Twitter.
Right.
But they don't get banned for saying they don't agree with it.
They get banned for specifically violating it by targeting an individual.
Let's say in protest, an individual repeatedly says, no, I refuse to use your pronouns, like in Megan Murphy's case.
And she's Canadian, so I don't want to use her specifically.
The point I'm trying to make is at a certain level, there are going to be American citizens
who have been removed from this public discourse, which has become so absurdly powerful, foreign governments weaponize it,
because you have different rules than the United States has.
So just to be clear, my understanding, and I'm not expert on all the platforms,
is that foreign governments use multiple, multiple different ways to interfere in elections. It is
not limited to our platform, nor is it limited to social media.
But the president is on Twitter.
The president is on a lot of different platforms, as is the White House.
I think it's fair to point out the media coverage of his Twitter account is insane,
and they run news stories every time he tweets.
That's certainly undeniable. I'm just pointing out that there are a number of different avenues
for this, and individuals have choices in how they use the platform.
Yeah, he might have other platforms, but he uses Twitter almost exclusively. And what I'm trying to bring up is that if Twitter refuses to acknowledge this problem, you are facing regulation. I don't know if you care about that, but at a certain point...
Which problem?
If you're going to restrict American citizens from participating on a platform where even the president speaks, and it's essentially, you have a privately owned public space,
if I could use an analogy that would be most apt.
And you've set rules that are not recognized by the US.
In fact, when it came to a Supreme Court hearing,
they said hate speech is not a violation.
It's actually protected free speech.
So they're actually at odds.
So there might be someone who says,
I refuse to live by any other means
than what the Supreme Court has set down.
That means I have a right to hate speech.
You will ban them.
That means your platform is so powerful, it's being used to manipulate elections, and you
have rules that are not recognized by the government to remove American citizens from
that discourse.
So as a private platform, you've become too powerful to not be regulated if you refuse
to allow people free speech.
But I'm trying to pick apart the connection.
I think, so yes, we do have an issue
with foreign entities and misinformation.
And this is an extremely complicated issue,
which we're just beginning to understand
and grasp and take action on. I don't think that
issue is solved purely by not being more aggressive on something else that is taking people off the
platform entirely as well, which is abuse and harassment. It's a cost-benefit analysis,
ultimately, and our rules are designed, again, and they don't always manifest this way in the outcomes.
But in terms of what we're trying to drive is opportunity for every single person to be able to speak freely on the platform.
That's absolutely not true.
You don't allow hate speech, so free speech is not on your platform.
I said enable everyone, create the opportunity for everyone to speak on our service.
Unless they've, it's hate speech, right?
And in part of that, the recognition that we're taking action on is that when some people encounter particular conduct,
that we see them wanting to remove themselves from the platform
completely, which goes against that principle of enabling everyone to speak or giving people
the opportunity to speak.
Our rules are focused on the opportunities presented, and we have particular outcomes
to make sure that those opportunities are as large as possible.
Let's separate the first.
The point I made about foreign governments was just to explain the power that your platform holds and how it can be weaponized. We'll separate that now. When Antifa shows up to Berkeley and bashes a guy over the head with a bike lock, that is suppressing legally allowed speech, right?
So what you're saying is that if someone is engaging in behavior,
such as going on Twitter and shouting someone down relentlessly,
that's something external to what happens in the world under the U.S. government.
I am allowed to scream very close to you and not let you speak in public,
but on Twitter you don't allow that.
So there's a dramatic difference between what Twitter thinks is okay
and what the U.S. government thinks is okay, how our democracy functions and how Twitter functions.
The issue I'm pointing out is that we know Twitter is becoming extremely important in how our public discourse is occurring, how our culture is developing, and who even gets elected.
So if you have rules that are based on a global policy, that means American citizens who are abiding by all of the laws of our country
are being restricted from engaging in public discourse because you've monopolized it.
Can I counter that, though? Because these foreign governments are restricted by the same rules. So if they violate those same rules, they will be removed.
So if they play within those rules, they can participate in the discourse, even if they are just trying to manipulate our elections. On the other hand, if the people that are on the platform play by those rules, they can also counteract.
Unless their ideology goes in line with U.S. law and is legally allowed, as opposed to what you allow. So foreign governments can absolutely keep making new accounts and keep botting and keep manipulating. They can even post things that'll go viral and then get banned and not care, right? But a private American citizen can say, here's my opinion, I refuse to back down...
I see what you're saying.
...and you'll ban him. So we can see that, at a certain point, Twitter is slowly gaining, in my opinion, too much control over American discourse, from your personal ideology, based on what you've researched, what you think is right. And this is my opinion, I'm not a lawmaker, but I would have to assume, if Twitter refuses to say, in the United States you are allowed to say what is legally acceptable, period, then lawmakers' only choice will be to enforce
regulation on your company.
Actually, Tim, I spent quite a bit of time talking to lawmakers as part of my role heading public policy. I've spent a lot of time in D.C. I want to say that
Jack and I have both spent a lot of time in D.C. And I think from the perspective of lawmakers,
they, across the spectrum, are also in favor of policing abuse and harassment online and
bullying online. Those are things that people care about because they affect their children and they affect their communities and they affect individuals.
And so I don't think that as a private American business, we can have different standards than what an American government owned corporation or American government would have to institute.
Those are two different things.
And I understand your point about the influence,
and I'm not denying that. Certainly, Twitter is an influential platform. But like anything,
whether it's the American law or the rules of Twitter or the rules of Facebook or rules of
any platform, there are rules, and those rules have to be followed. So it is your choice whether
to follow those rules and to continue to participate in a civic dialogue, or it is your
choice to not do that.
Absolutely. You've monopolized public discourse to an extreme degree, and you say,
my way or the highway. We are facing-
Tim, we haven't monopolized it. There are many different avenues for people to continue to have
a voice. There are many different platforms that offer that. We are a largely influential one. I'm
not trying to take away from that, and we're a very important one.
You don't need to be the most important. It's just that you are extremely important.
And that's a compliment.
Twitter has become extremely powerful.
But at a certain point, you should not have the right to control what people are allowed to say.
No private or look, I'm a social liberal.
I think we should regulate you guys because you are unelected officials running your system the way you see fit against the wishes of a democratic republic.
And there are people who disagree with you who are being excised from public discourse because of your
ideology. That terrifies me. And we can take it one step further.
Just so I understand, are you suggesting that we don't have any policies around abuse and harassment on the platform? I'm trying to understand what it is you're saying, because I'm not sure I'm following you. So you don't think we should have any rules about abuse and harassment?
So even the threats that you received, that you mentioned, that we did...
Under U.S. law.
But you mentioned a number of threats that you received and you were quite frustrated that we hadn't taken action on them.
You think we shouldn't have rules that.
I'm frustrated because of the hypocrisy.
When I see the flow going in one direction, and then what I see are Republican politicians who, in my opinion, are just too ignorant to understand what the hell is going on around them. And I see people burning signs that say free speech. I see you openly saying, we recognize the power of our platform and we're not going to abide by American norms. And when it comes to our elections, I see Democratic operatives in Alabama waging a false flag campaign using fake Russian accounts. And the guy who runs that company has not been banned from your platform,
even after it's been written by the New York Times he was doing this. So we know that not
only are people manipulating your platform, you have rules that remove honest American
citizens with bad opinions who have a right to engage in public discourse. And it's like you
recognize it, but you like having the power.
I'm not quite sure at what point...
Get back to my point: so you believe that Twitter should not have any rules about abuse and harassment, or any sort of hate speech, on the platform? That that's your position?
Well, that's, uh, that's extremely reductive. I don't know, that may be too simplistic. The point I'm trying to make is...
But that is a point you're trying to make. You're asking us to comply with the U.S. law that would criminalize potential speech and put people in jail for it. And you're asking us to enforce those standards.
Well, I mean, if you incite death, it's a crime. You can go to jail for that. So at the very least, you could,
like when you have people on your platform who've committed a crime and you don't ban them,
I say, well, that's really weird. And then when you have people on your platform who say a bad, naughty word,
you do ban them.
I say, well, that's really weird.
I mean, I've seen people get banned for tweeting an N to you.
I understand what they're trying to do when they tweet letters at you, Jack.
But they get suspended for it, and they get a threat.
Let's talk about learn to code.
What do you mean by that?
I haven't seen that.
What are they trying to do?
There are people who know that they can tweet a single letter,
and the next person knows what letter they need to tweet.
You see what I'm saying?
So you'll see one user will say N, the next user will put an I,
the next user will put a G.
Yes.
And so they get suspended for doing so.
And these are the people who are trying to push the buttons on the rules, right?
They get suspended for that?
Absolutely.
But here's the thing.
I think your team understands
what they're doing. However,
you get really dangerous territory
if someone accidentally tweets an N
and you assume they're trying to engage in a harassment
campaign, which is why I said let's talk
about learn to code. But we do look at
coordination of accounts.
Do you do that through direct messages?
I don't know about direct messages.
Do you read direct messages?
We don't read direct messages.
We don't read them unless someone reports a direct message to us that they have received.
And so you read their direct message that they send to you?
So if you have a direct message and someone says something terrible and then you receive
a death threat and you report that to us, then we would read it, because you've reported it to us.
Does anyone in the company have access to direct messages other than that?
Um, only in the context, again, of reviewing reports.
Other than that, they're not accessible?
Um, not to my knowledge. I don't know what you mean. Like, we're not reading them.
We're not reading them.
Is it possible that someone could go into Tim's direct messages and just read his direct messages?
I don't think so.
So if Tim writes an N and I write an I and Jamie writes a G, can you go into our direct messages and say, hey, let's fuck with Jack and we're going to write this stuff out and we're going to do it and let's see if they ban us.
You can't read that.
I don't think so.
So if that's the case, how would you know if there was a concerted effort?
I think what he's saying is, like, if we do see those trains of replies, then that is coordination. You know what people are doing, right?
The point is, how do you prove it?
Well, I think beyond the N, like, you know, the first person to put the letter, you can't prove he did it, but everybody else you kind of can.
But I don't think we would...
Well, look, I can say this: I've been sent numerous screenshots from people. Screenshots can be fake, I recognize that, but I have seen people actually tweet, and then I've seen the tweet follow right after, one letter.
Yeah, someone tweeted at you, someone decently high profile, like a big YouTuber, tweeted an N at you and then got like a 12-hour suspension.
But let's talk about learn to code, right? And why are people being suspended
for tweeting hashtag learn to code?
Yep.
We did some research on this.
Yes, we did some research on this.
So there was a situation,
I guess about a month ago or so,
where a number of journalists
were receiving a variety of tweets,
some containing learn to code, some containing
a bunch of other coded language that was wishes of harm. These were thousands and thousands of
tweets being directed at a handful of journalists. And we did some research and what we found was
a number of the accounts that were engaging in this behavior, which is tweeting at the journalists
with this either learn to code or things like day of the rope and other coded language were actually ban evasion accounts.
That means accounts that had been previously suspended.
And we also learned that there was a targeted campaign being organized off our platform to abuse and harass these journalists.
That's not true.
See, here's the thing.
An activist who works for NBC wrote that story and then lobbied you.
You issued an official statement.
And then even the editor-in-chief of the Daily Caller got a suspension for tweeting Learn
to Code at the Daily Show.
So I have never talked to anybody from NBC about this issue, so I'm not sure.
No, so they report it.
Don't misrepresent me.
They report it.
The narrative goes far and wide amongst your circles. Then all of a sudden you're seeing high profile conservatives tweeting a joke, getting suspensions.
So again, some of these tweets actually contained death threats, wishes of harm, other coded language that we've seen to mean death to journalists. So it wasn't about just the learn to code.
It was about the context that we were seeing.
Can we clarify?
That's just not true.
That's just not true.
The editor-in-chief of the Daily Caller was suspended for tweeting nothing but hashtag
learn to code.
So, Tim, can I finish what I was saying?
Yeah.
So we were looking at the context, and what was happening is there were journalists receiving
hundreds of tweets.
Some had death threats.
Some had wishes of harm.
Some just said learn to code. And in that particular context, we made a decision. We consider this
type of behavior dogpiling, which is when all of a sudden individuals are getting tons and tons of
tweets at them. They feel very abused or harassed on the platform.
Can we pause this because this is super confusing for people who don't know the context.
The learn to code thing is in response to people saying that people that
are losing their jobs like coal miners and truck drivers and things like that could learn to code
It was almost like in jest initially, or if it wasn't in jest initially, it was so poorly thought out as a suggestion that people started mocking it, right?
Correct. So the first stories that came out were simply, like, can miners learn to code? It was no ill will.
Right, coal miners, right.
And the hashtag learn to code is just a meme.
It's not even necessarily a conservative one, though you will see more conservatives using it.
But people are using it to mock how stupid the idea of taking a person who's uneducated,
who's in their 50s, who should learn some new form of vocation, and then someone says learn to code.
And so then other people, when they're losing their job or when something's happening,
people would write learn to code because it's a meme.
Well, not even necessarily.
I would just characterize learn to code as a meme that represents the elitism of modern journalists
and how they target certain communities with disdain.
So to make that point, there are people who have been suspended for tweeting something like,
I'm not too happy
with how BuzzFeed
reported the story
hashtag learn to code.
Right?
Making the representation that
these people are snooty elites
who live in ivory towers.
But again,
this is a meme
that has nothing to do
with harassment,
but some people
might be harassing somebody
and might tweet it.
Why would we expect to see,
even still today,
I'm still getting messages from people with screenshots saying,
I've been suspended for using a hashtag.
And the editor-in-chief of the Daily Caller, right,
he quote tweeted a video from the Daily Show with hashtag learn to code,
and he got a suspension for it.
So why learn to code?
Why is that alone so egregious?
And I don't think it is so egregious.
So is it just something that got stuck in an algorithm?
No, it was, again, a specific set of issues that we were seeing targeting a very specific set of journalists.
And it wasn't just the learn to code.
It was a couple of things going on.
A lot of the accounts tweeting learn to code were ban evaders, which means they've previously been suspended.
A lot of the accounts had other language in them.
Tweets had other language like day of the brick, day of the rope, oven ready.
These are all coded meanings for violence against people. And the people who were receiving this were receiving hundreds of these, in what appeared to us to be a coordinated harassment
campaign. And so we were trying to understand the context of what was
going on and take action on them. Because again, I don't know, Joe, if you've ever been the target
of a dogpiling event on Twitter, but it is not particularly fun when thousands of people or
hundreds of people are tweeting at you and saying things. And that can be viewed as a form of
harassment. It's not about the individual tweet. It is about the volume of
things that are being directed at you. And so in that particular case, we made the judgment call,
and it is a judgment call, to take down the tweets that were responding directly to these journalists
that were saying, learn to code, even if they didn't have a wish of harm specifically attached
to them because of what we viewed as coordinated attempt
to harass them. And again, like I was saying, some of the other signals and coded language,
and we were worried that learn to code was taking on a different meaning in that particular context.
So, but in and of itself, though, it still seems like there are alternative meanings to learn to code. It still could be used, as Tim was saying, to mock a lib, you know, elite, snooty, speak truth to power.
Yes, absolutely, I agree with you. So it's really about the context of what was happening in that situation and all those other things. I think in a very different situation, we would not take action on that.
Okay. But doesn't that seem like you're throwing a blanket over a very small issue?
Because learn to code in itself is very small.
The blanket is cast over racism.
The blanket is cast over all the other horrible things that are attached to it.
But the horrible things that are attached to it are the real issue.
This learn to code thing is kind of a legitimate protest in people saying
that these miners
should learn to code.
That's kind of preposterous.
Well, the first articles
weren't mean.
It was just,
learn to code kind of identified,
you have these journalists
who are so far removed
from middle America
that they think
you can take a 50-year-old man
who's never used a computer before
and put him in a, you know.
The stories, I think,
were legitimate.
Yes.
But the point more so
is it was a meme.
The hashtag, the idea of learn to code, condenses this idea, and it's easy to communicate, especially when you only have 280 characters, that there is a class of individual in this country.
I think you mentioned on, was it Sam Harris, that the left, these left liberal journalists only follow each other.
Yeah, in the run up to the 2016 elections.
Yeah.
And so, I mean, I still believe that to be true, and I've worked in these offices.
It has changed.
They've done the study again, the visualization, and now there is a lot more cross-pollination.
But what we saw is folks who were reporting on the left end of the spectrum mainly followed folks on the left, and folks on the right followed everyone.
What you were talking about earlier, that there's these bubbles.
There's bubbles,
and we've helped create them and maintain them.
So here's what ends up happening,
and this is one of the big problems
that people have.
With this story particularly,
you have a left-wing activist
who works for NBC News.
I'm not accusing you
of having read the article.
He spends like a day
lobbying Twitter saying,
Guy, you have to do this.
You have to make these changes.
The next day he writes a story saying that 4chan is organizing these harassment campaigns and death threats.
And while 4chan was doing threads about it, you can't accuse 4chan simply for talking about it because Reddit was talking about it too, as was Twitter.
So then the next day, after he published his article, now he's getting threats.
And then Twitter issues a statement saying, we will take action. And to make matters worse, when John Levine, a writer for The Wrap,
got a statement from one of your spokespeople saying, yes, we are banning people for saying
learn to code. A bunch of journalists came out and then lied, I have no idea why, saying this is not true, this is fake news. Then a second statement was published by Twitter saying
it's part of a harassment campaign. And so then the mainstream narrative becomes, oh, they're only banning people who are part of a harassment campaign.
But you literally see legitimate high-profile individuals getting suspensions for joining in on a joke.
Oh, there are for sure probably mistakes in there.
I don't think that any of us are claiming that we got this 100% right.
And probably our team having a lack of context into actually what's happening as well.
And we would fully admit we probably were way too aggressive when we first saw this as well and made mistakes.
I hope this clarifies then.
You have situations like this where you can see, you know, this journalist, I'm not going to name him, but he routinely has very, like, left-wing, I don't want to use overtly esoteric words,
but intersectional dogmatic points of view, right?
So this is-
What does that mean?
So like intersectional feminism is considered like a small ideology.
People refer to these groups as the regressive left or the identitarian left.
These are basically people who hold views that a person is judged based on the color
of their skin instead of the content of their character.
So you have the right-wing version, which is like the alt-right, the left-wing version, which is like intersectional feminism
is how it's simply referred to.
So you'll see people say things like, you know,
typically when they rag on white men or when they say like white feminism,
these are signals that they hold these particular views.
And these views are becoming more pervasive.
So what ends up happening is you have a journalist who clearly holds these views.
I don't even want to call him a journalist.
He writes extremely biased and out-of-context stories. Twitter takes action in response, seemingly in response. Then we can look at what
happens with Oliver Darcy at CNN. He says, you know, the people at CPAC are the conservatives
are gullible eating red meat from grifters, among other things, disparaging comments about the right.
And he's the one who's primarily advocating for the removal of certain individuals who you then
remove. And then when Kathy Griffin calls for doxing, that's fine. When this guy calls
for the death of these kids, he gets a slap on the wrist. And look, I understand the context matters,
but grains of sand make a heap. And eventually you have all of these stories piling up and people
are asking you why it only flows in one direction. Because I got to be honest, I'd imagine that
calling for the death three times of any individual is a bannable offense, even without a warning. You just get rid of them.
But it didn't happen, right? We see these, you know, people say men aren't women, though,
and they get a suspension. We see people say the editor-in-chief of The Daily Caller may be the
best example. Hashtag learn to code, quoting The Daily Show, and he gets a suspension.
Threatening death and inciting death is a suspension too. It feels like it's only going in one direction.
Yeah, I think we have a lot of work to do to explain more clearly when we're taking
action and why, and certainly looking into any mistakes we may have made in those particular
situations.
So would you guys agree that in tech, I think we can all agree this.
I would hope you agree.
Tech tends to lean left.
Like tech companies: Facebook, Twitter, Google. I would be willing to bet that a conservative running a social network would not have a hate speech policy. I mean, you look at Gab and you look at Minds, and Minds is not even right wing, right? They're not right wing at all. They just staunchly support free speech. And I don't think Gab is necessarily, I don't think the owner is necessarily right wing either. I don't know much about him. I think he's like a libertarian.
I don't want to, yeah, specify either. I don't know enough.
Yeah, I know that when you read what they write, they're just staunchly committed to free speech. But they will stop doxing. They will do things to stop targeted harassment and doxing and things along those lines.
Sometimes slowly.
Yeah, admittedly. Yeah, admittedly. But they just want an open platform. My point is that I think a lot of people that are on the right feel disenfranchised by these platforms that they use on a daily basis. I don't know what the percentages are in terms of the number of people that are conservative that use Twitter versus the number of people that are liberal, but I would imagine it's probably pretty close, isn't it?
I don't know.
The numbers?
I don't know, because we don't ask people what their ideology is.
We'd have to infer all that based on what they're saying.
So let's not even go there. But then the people that run, whether it's Google or Twitter or Facebook, any of these platforms, YouTube for sure, are powerfully leaning towards the left. Wouldn't we all agree to that?
Well, we don't ask our employees, but my guess is that many employees at tech companies are probably liberal. It's really fascinating.
But I also think, I mean, you point out all the companies you mentioned are in exactly the same region as well.
Yes.
And we do have the challenge of some monocultural thinking as well.
And I have said publicly that, yes, we will have more of a liberal bias within our company.
I said this to CNN.
Right.
But that doesn't mean that we put that in our rules.
But hold on.
Because what I'm getting at is that at some point in time, things have to get down to a human being looking at and reviewing cases.
And if you guys are so left-wing in your staff and the area that you live in and all these things, things are almost naturally going to lean left.
Is that fair to say?
If we were purely looking at the content,
but a lot of this agent work is based on the behaviors,
all the things that we've been discussing in terms of the context of the actual content itself. That's what the rules are.
Except the misgendering policy, right? So your rules do reflect your bubble, right? Go to middle
America and go hang out at a conservative town. They're not going to agree with you.
Your rules are based on your bubble in San Francisco or whatever city.
I'm from middle America. I'm from St. Louis, Missouri. And I hear the point.
I definitely hear the point
in terms of like us putting this rule forth.
But we have to balance it with the fact
that people are being driven away from our platform.
I hear you.
And they may not disagree.
They may not agree with me on that,
my folks from Missouri,
but I think they would see some valid argument
in what we're trying to do to, again,
increase the opportunity for as many
people as possible to talk. That's it. It's not driving the outcomes that you're speaking to.
Where do you stop? What community is and isn't deserving of protection?
Are conservatives not deserving of protection for their opinions?
But I wanted to focus on individuals and increasing the absolute number of people
who have opportunity to speak on the platform in the first place. So then do you need a rule for body dysphoria? Do you need a rule for
otherkin? Right? You see what I'm asking you? You have a specific... I see what you're asking,
but like, and this came from a call and research. And there's disagreement as to whether this is the
right outcome or not. And this is the right policy. And yes, our bias does influence looking in this direction,
and our bias does influence us putting a rule like this in place,
but it is with the understanding of creating as much opportunity as possible
for as many people to speak,
based on the actual data that we see of people leaving the platform
because of experiences they have.
So why did your research stop there?
It hasn't stopped.
Our rules aren't set in something that just stops and doesn't evolve.
We're going to constantly question.
We're going to constantly get feedback from people on every end of the spectrum of any
particular issue and make changes accordingly.
It doesn't stop. And to your credit, I really do appreciate the fact that you're very open about that
you have made mistakes and that you're continuing to learn and grow and that your company is
reviewing these things and trying to figure out which way to go.
And I think we all need to pay attention to the fact that this is a completely new road.
This road did not exist 15 years ago.
There was nothing there.
That is a tremendous responsibility for any company, any group of human beings, to be in control of public discourse on a scale unprecedented in human history.
And that's what we're dealing with here.
This is not a small thing.
And I know people that have been banned to them, this is a matter of ideology, this is a matter of this, this is a matter of that.
There's a lot of debate going on here, and this is one of the reasons why I wanted to bring you on, Tim, because you know so much about so many of these cases, and because you are a journalist, you're very aware of the implications and all the problems that maybe have slipped through my fingers.
So I do want to make one thing really clear, though.
i have a tremendous amount of respect and trust for you
when you say you wanted to solve this problem
simply because you're sitting here right now
and these other companies aren't, right?
Jack, you went on Sam Harris.
You were on with Gad Saad.
And that says to me a good faith effort
to try and figure out how to do things right.
So as much as I'll apologize for getting kind of angry
and being emotional because...
I wouldn't say it's angry.
Look, we also haven't been great at explaining our intent.
And there's a few things going on.
One, as Joe indicated, centralized global policy at scale is almost impossible.
And we realize this.
Different services have different answers to this.
Reddit has a community-based policy where each topic, each subreddit has its own policy.
And, you know, there's some benefit to that.
So that's problem number one. We know that this very binary off or on platform isn't right and it doesn't scale.
And it ultimately goes against our key initiative of wanting to
promote more healthier conversation. I just don't think that's what you're doing.
And I hear you. I hear you. But like, we're not done. We're not done. We're not finished with
our work. And we need to, the reason I'm going on all these podcasts and having these conversations,
and ideally, Vijaya's getting out there more often as well, because we don't see and hear enough from her.
We need to have these conversations so we can learn.
We can get the feedback and also pay attention to where the technology is going. Before the podcast, we talked a little bit about, and I talked about it on our previous podcast and also Sam's, that technology today is enabling content to live forever in a way that was not possible before.
You can say that everything on the internet lives forever, but that's not,
it's generally not true because any host or any connection can take it down.
The blockchain changes all that. It can actually exist forever permanently without anyone being
able to touch it, government, company, individual. And that is a reality that we need to pay attention to and really understand our value.
And I believe a lot of our value in the future, not today, again, we have a ton of work,
is to take a strong stance of like we are going to be a company that given this entire corpus
of conversation and content within the world, we're going to work to promote healthy public conversation.
That's what we want.
That's what we want to do.
And if you disagree with it, you should be able to turn it off.
And you should be able to access anything that you want, as you would with the Internet.
But those are technologies that are just in the formative stages and presenting new opportunities to companies like ours.
And there's a ton of challenges with them and a ton of things that we've discussed over the past hour
that it doesn't solve and maybe exacerbates,
especially around things like election interference and some of the regulatory concerns that you're bringing.
So there's a few issues, right?
Your definition of what is or isn't healthy, right?
And we want that to be public. We have four indicators right now that we're working on with an external lab. We want other labs too. We want to open source it, make sure that people can comment on it, that people can help us define it. We'll use that interpretation on our own algorithms and then push it. But that has to be open. That has to be transparent. Are we there today? Absolutely not. We're not there.
This course of action to me looks like a Fahrenheit 451 future where everything is so
offensive, everything must be restricted. That's the path I see that you're on. You want to have
a healthy conversation. You want to maximize the amount of people. That means you got to cut off
all the tall grass and level everything out. So if you've decided that this one rule needs to be enforced because certain things are offensive.
Well, but can I explain what health at least means to us in this particular case?
So like we talked a little bit about this on the previous podcast, but like
we have four indicators that we're trying to define and try to understand if there's
actually something there. One is shared attention. Is a conversation generally shared around the same objects or is it
disparate? So like as we're having a conversation, the four of us are having a conversation, are we
all focused on the same thing? Or is Joe on his phone, which you were earlier, or like whatever
is going on? Because more shared attention will lead to healthier conversation.
Number two is shared reality.
Not whether something is factual, but are we sharing the same facts?
Is the earth round?
Is the world flat?
So we can tell what facts are we sharing and what facts are we not sharing,
what percentage of the conversation.
So that's a second indicator.
Third is receptivity.
Are the participants receptive to debate and to civility and to expressing their opinion?
And even if it is something that might be hurtful, are people receptive to at least look at and be empathetic and look at what's behind
that? This is the one we have the most measurement around today. We can determine and predict when
someone might walk away from a Twitter conversation because they feel it's toxic.
I just ignore them all, basically.
So, and we see that in our data, right? And there's some conversations that you get into
and you persist. And then finally is variety of perspective. Are we, are we actually seeing the full spectrum of any topic
that's being talked about? And these are not meant to be taken as individual parts, but in unison,
how they play together. And we've written these out. We haven't gotten far enough in actually defining what they look like and what they mean.
And we certainly haven't gotten good enough at understanding when we deploy a solution like
being able to follow a hashtag, does that impact variety of perspective to the positive? Does it
impact shared reality to the negative, whatnot? So this is how we're thinking about it. And as we
think more about that, that influences our product, influences our enforcement, and influences our policy as well.
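[Editor's note: the four indicators described above, treated as conversation-level metrics, can be illustrated with a toy sketch. None of this is Twitter's actual measurement; the function names, inputs, and formulas are hypothetical stand-ins for what "shared attention" and "variety of perspective" might quantify. Receptivity and shared reality would need richer signals (sentiment, fact-alignment) and are omitted.]

```python
from collections import Counter

def shared_attention(replies):
    """Fraction of replies addressing the single most-discussed topic.
    `replies` is a list of (author, topic) pairs -- a toy stand-in for
    whatever object a tweet is actually 'about'."""
    topics = Counter(topic for _, topic in replies)
    return max(topics.values()) / len(replies)

def variety_of_perspective(replies):
    """Distinct authors participating, normalized to [0, 1]."""
    authors = {author for author, _ in replies}
    return len(authors) / len(replies)

# Toy conversation: four participants, mostly on one topic.
conv = [("joe", "moderation"), ("tim", "moderation"),
        ("jack", "moderation"), ("vijaya", "health_metrics")]

print(shared_attention(conv))        # 0.75 -- three of four replies share a topic
print(variety_of_perspective(conv))  # 1.0  -- every reply is a distinct author
```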
What you're describing sounds wildly different to what Twitter is, right? So you have a goal
for where you want to get with those metrics. So what confuses me then, when we talk about
someone like Megan Murphy, who, sure, she violated your rules. But in the context of
a conversation, you recognize people will sometimes get heated with each other.
You know, is it only a healthy conversation when no one is being negative?
What if people are yelling at each other and being mean and insulting or misgendering them?
I think it's a question of what thresholds you allow.
And the more control we can give people to vary the spectrum on what they want to see, that feels right to me.
I mean, Joe, in your Alex podcast, you did exactly this thing. You're hosting a conversation.
You had both of your guests who started talking over each other. You pause the conversation. You
said, let's not get combative. Someone said, I'm not being combative. You said, you're all talking over each other.
And there's a dynamic. The conversation then shifted and got to some deeper points, right? You could have just let that happen and let it go. And that's fine too. It's up to who is viewing and experiencing that conversation. And I agree with you.
It is completely far off from where we are today.
Not only have we had to address a lot of these issues that we're talking about at this table, but we've also had to turn the company around from a business standpoint.
We've had to fix all of our infrastructure that's over 10 years old. And we had to go through two layoffs because
the company was too large. So we have to prioritize our efforts. And I don't know any other way to do
this than be really specific about our intentions and our aspirations and the intent and the why
behind our actions. And not everyone's going to agree with it in the particular moment.
So I want to point this out before I make my next statement, though, just real quick.
It seems like the technology is moving faster than the culture.
So I do recognize you guys are between a rock and a hard place.
How do you get to a point where you can have that open source crypto blockchain technology that allows free and open speech?
At the same time, the technology exists.
Twitter has been replicated numerous times in different ways.
Mastodon, for instance.
What's disconcerting to me is, you know, and maybe you have research on this, which is why you've taken the decisions you have. But when you ban someone because they've said, you know,
bad opinions, misgendering, well, they're not going to go away. They're going to try and find
anywhere they can speak. So what effectively happens is you're taking all of these people
from a wide range of the most, to use a prison analogy, murderers all the way to pot smokers, and you're putting them in the same room with each other, and you're saying you're not welcome here.
Well, what happens when you take someone who smokes pot and put them in prison with a bunch of gangbangers and murderers?
They fall into that.
I totally get the point.
I'm hyper aware of our actions sending more and more things into the dark.
Well, this is something that I want to discuss.
This is really important in this vein of thinking.
What about roads to redemption?
What about someone like Megan Murphy?
What about anyone, Alex Jones, Milo?
Can we find a path for people to get back to the platform?
For good or for bad, like it or not,
there is one video platform that people give a shit about, and that's YouTube. You get kicked off of YouTube, you're doomed, and that's just reality. You can go, Vimeo is wonderful, there's a lot of great video platforms out there. They have a fucking tiny fraction of the views that YouTube does. That's just reality. The same thing can be said for Twitter. Whether or not other platforms exist, that's inconsequential. The vast majority of people are on Twitter. The vast majority of people that are making, you know, posts about the news and breaking information, they do it on Twitter. What can be set up? Have you guys given consideration to some sort of a path to redemption?
Yeah, there's redemption and there's rehabilitation.
Okay.
And, you know, we haven't done a great job at having a cohesive stance on rehabilitation
and redemption.
We haven't in part.
So the whole focus behind the temporary suspensions is to at least give people pause and think about
why they violated our, why and how they violated our particular rules that they signed up for when
they came in through our terms of service, right? Whether you agree with them or not,
like this is the agreement that we have with people.
You know, I'm just thinking this, I'm sorry to interrupt you, but it would be kind of hilarious if you guys had an option,
like a mode of Twitter, an angry mode.
Like, fuck, I'm angry right now, so I'm going to type some things,
and it says, hey, dude, why don't you just think about this?
We're going to hold it for you in the queue.
People do that in their drafts.
I'm sure they do.
I'm sure they do, but it would be funny if you had an angry mode.
Yeah, but I mean.
I notice you guys are using a lot of curse words and you're saying a lot of bad things.
We're going to put you in angry mode.
So think about this.
So you have to make several clicks if you want to post this.
And there is research to suggest that people expressing that actually tends to minimize more violent physical conduct.
Oh, for sure.
Well, everyone says that with emails.
If you're in the middle of the night
and someone sends you an email
and you find them insulting,
you type an email,
go to sleep.
Wake up in the morning like,
I'm going to say something nice.
That's how I wind up interacting with these people.
But what do you think can be done for people like,
let's say Megan Murphy,
because she seems one of the,
it's as easy to see her
perspective as any. What do you think could be done for her?
I think you're right. I would love to get to a point where we think of suspensions as temporary, and she's banned for life. Right now, that's the only option that we've built into our rules, but we have every capability of changing that. And that's something that I want my team to focus on: thinking about, as Jack said, not just coming back after some time-bound period, but also what more we can and should be doing within the product itself early on to educate people. We've put out a new version of the Twitter rules that's two pages, not 20.
I've made sure that my lawyers don't write it,
and it's written in as plain English as we can.
We try to put examples in there.
And, like, really taking the time to educate people.
And I get people aren't always going to agree with those rules,
and we have to address that too.
But at least simplifying it and educating people
so that they don't even get to that stage.
But once they do, understanding that there are going to be different contexts in people's lives,
different times, they're going to say and do things that they may not agree with
and they don't deserve to be permanently suspended forever from a platform like Twitter.
Agreed. So how do you get to it?
So this is something that actually we just had a meeting on this earlier this week with our executive team and identifying kind of some of the principles by which we would want to think about time-bounding suspension.
So it's work.
We have to do it, and we're going to figure it out.
I'm not going to tell you it's coming out right away, but it's on our roadmap.
It's something we want to do.
Why don't you set up a jury system when someone reports something instead of you having to worry about it?
There would be no accusation of bias if 100,000 users were randomly selected to determine it.
Because Periscope does this.
Yeah, Periscope does this.
It's not a bad idea.
And we've learned –
Periscope does this?
Can you please explain that?
So Periscope has a content moderation jury. We flag, based on the machine learning algorithms and in some cases reports, particular replies. We send them to a small jury of folks to ask, is this against our terms of service, or is this something that you believe should be in the channel or not?
Do you sign up to be on the jury?
No, it's random.
So you randomly get chosen, and you decide whether or not you want to participate?
Yep. And it's good. It has some flaws. It has some gaming aspects to it as well.
But we do have a lot of experiments that we're testing and we want to build confidence and
it's actually driving the outcomes that we think are useful. And Periscope is a good playground for us across many regards.
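[Editor's note: the jury flow described above, machine-flag a reply, send it to a few randomly chosen viewers, act on their votes, can be sketched roughly as follows. The jury size, majority threshold, and function names are guesses for illustration, not Periscope's actual implementation.]

```python
import random

def convene_jury(active_users, size=5, seed=None):
    """Randomly select jurors from active viewers (each may later decline)."""
    rng = random.Random(seed)
    return rng.sample(active_users, size)

def moderate_reply(reply, jurors, votes):
    """Hide the reply if a majority of the jurors who actually voted flag it.
    `votes` maps juror -> True (against the rules) / False / None (declined)."""
    cast = [v for v in votes.values() if v is not None]
    if not cast:
        return "kept"  # nobody voted; leave the reply up
    against = sum(cast)  # True counts as 1
    return "hidden" if against > len(cast) / 2 else "kept"

jurors = convene_jury(["u%d" % i for i in range(100)], size=5, seed=42)
votes = {j: v for j, v in zip(jurors, [True, True, False, None, True])}
print(moderate_reply("spam reply", jurors, votes))  # prints "hidden" (3 of 4 cast votes flag it)
```

A real system would also need defenses against the gaming Jack mentions, e.g. weighting jurors by past agreement or excluding involved parties.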
I think ultimately, one of the greater philosophical challenges is that you are a
massively powerful corporation. You have international investors. I believe a Saudi prince owns, what, 6% of Twitter. Is that true? I just want to make sure.
We're a publicly traded corporation, so anybody can buy stock, but that doesn't mean they have influence on day-to-day operations.
Well, I think depending on which political faction you ask, they'll say money is influence, right?
So I'm not going to say that the Saudi prince who invested in Twitter – because, again, I've only – it's been a while since I've read these stories – is like showing up to your meetings and throwing his weight around.
But at a certain point –
I'm definitely not doing that.
But, you know, do I have to trust you, right?
This is a guy who's thrown in over a billion dollars, I think, into Twitter.
Twitter has influence on our elections.
Foreign governments, foreign government actors have stake in Twitter.
It worries me then when you base your rules on your personal decisions, on an unelected group of people.
You have such tremendous power in this monopoly on public discourse, near monopoly.
Like he was saying, some platforms, Twitter has no real competition.
So I just have to hope and trust you have the best interest at heart. But at the end of the day, it's authoritarian. No one chose you to be in charge of this. I understand, you mentioned you discovered Twitter, but here I am looking at, you know, both of you, who have this tremendous power over whether or not someone can get elected. You can choose to ban someone and tell me all day and night you have a reason for doing it. I just have to trust you. That's terrifying. There's no proof Alex Jones did any of these things other than things he's posted.
Right?
I understand that.
That's actually who I was on the phone with.
Alex was texting me saying that he never did anything to endanger any child and that he was disputing what people were saying about a video of a child getting harmed.
And so do we just trust an unelected – I mean extremely wealthy individuals, Saudi princes, you know, it's a
publicly traded company, who knows where the influence is coming from, your rules are based
on a global policy. And I'm sitting here watching, wow, these people who are never chosen in this
position have too much power over my politics. I think that that's why it's so important that
we take the time to build transparency into what we're doing. And that's part of what we're trying
to do is not just in being here and talking to you guys,
but also building it into the product itself.
I think one of the things that I've really loved about a new product launch is what we've done to let you disable any sort of ranking in the home timeline if you want, so you don't have to see our algorithms at play anymore.
These are the kinds of things that we're thinking about.
How do we give power back to the people using our service so that they can see what they want to see and they can
participate the way they want to participate? And this is long-term and I get that we're not there
yet, but this is how we're thinking about it. And you can imagine where that goes. I mean,
in just one switch and turning all the algorithms off, what does that do? What does that look like?
So these are the conversations that
we're having in the company, whether they be good ideas or bad ideas. We haven't determined that
just yet, but we definitely, look, I definitely understand the mistrust that people have in our
company, in myself, in the corporate structure, in all the variables that are associated with it,
in all the variables that are associated with it,
including who chooses to buy on the public market,
who chooses not to.
I get all of it.
And I grew up on the internet.
I'm a believer in the internet principles.
And I want to do everything in my power to make sure that we are consistent with those ideals.
At the same time, I want to make sure, and do everything in my power to ensure, that every single person has the opportunity to participate.
So let me ask you a question then. For your policy as it pertains to, say, Saudi Arabia,
right, do you enforce the same hate speech rules on Saudi Arabia?
Our rules are global. We enforce them against everyone.
So even in countries where it's criminal to be LGBT, you will still ban someone for saying something disparaging to that effect? Like, let's say Saudi Arabia sentenced someone to death for – I don't want to call it Saudi Arabia specifically. Let's call it Iran, because I believe that's the big focus right now with the Trump administration. Iran – it's my understanding it's still punishable by death. I could be wrong, but it is criminal. If someone then directly targets one of these individuals, will you ban them?
I mean, do you guys function in Iran?
We're blocked in Iran.
Yeah, that's what I figured.
But there are some countries where, for instance, Michelle Malkin recently got really angry because she received notice that she violated blasphemy laws in Pakistan.
So you do follow some laws in some countries, but it's not a violation.
I guess the question I'm asking is, in Pakistan, it's very clearly a different culture.
They don't agree with your rules.
We do have a per country takedown, meaning that content might be non-visible within that country, but visible throughout the rest of the world.
But so I guess the question.
Just to add on to what Jack's saying, we actually are very, very transparent about this. So we publish a transparency report every six months that details every single request that we get from every government around the world and the content that they ask us to remove. And we post
that to an independent third party site. So you could go right now and look and see every single
request that comes from the Pakistani government and what content they're trying to remove from
Pakistan. And I've seen a lot of conservatives get angry about this. And it's kind of confusing. I'm like, that's a really good thing. I would want to know if Pakistan wanted to kill me.
Why are they angry?
Blasphemy laws, posting pictures of Mohammed.
So it's a prime—
Are they angry about our transparency report or this particular one?
There's a perception that you sending that notice is like a threat against them for violating blasphemy laws,
whereas it's very clearly just letting you know a government has taken action against you.
It's saying that the government has restricted access to that content in that country. And the
reason we tell users or tell people that that's happened is because a lot of them may want to
file their own suit against the government, or a lot of them may be in danger if they happen to be
under that particular government's jurisdiction, and they may want to take action to protect
themselves if they know that the government is looking at the content in their accounts.
So we don't always know.
We send the notice to everybody.
We don't always know where you are or what country you live in.
And so we just send that notice to try to be as transparent as possible.
The main point I was trying to get to is your policies support a community, but there may be laws in a certain country that do not support that community and find it criminal. So your actions are now directly opposed to the culture of that country. I guess the point I'm trying to make is that if you enforce your values, which are, you know, perceivably not even the majority of this country – if you, you know, consider yourself more liberal-leaning and you're half of the United States – but you're enforcing those
rules on the rest of the world that use the service, it's sort of forcing other cultures to adhere to yours.
So a lot of our rules are based in more of the UN declaration than just purely US.
Doesn't the UN declaration guarantee the right of all people through any medium to express their
opinion?
It does.
And why ban hate speech?
It also has conditions around particular speech inciting violence and some of the aspects that we speak to as well.
And it protects specific categories, whether it's religion, race, gender, sexual orientation.
Those are also protected under the UN covenant to protect human rights.
Ooh, look at that, a pause.
We've had a number of pauses. I'm sure we have many more things to talk about, don't worry. I've got a bunch of other things, because here's the thing: there's a bunch of other issues having to do with bias and censorship, and I feel like we've kind of beaten that horse relentlessly. But I think that horse is good to beat, and I think it's also good to address why the horse is being beaten and why it exists in the first place. And I really want to say this again: I really appreciate the fact that you guys are so open and that you're willing to come on here and talk about this, because you don't have to. This is your decision. And especially you, Jack, after we had that first conversation and the feedback was so hard, you wanted to come and clarify this.
And I think this is so important to give people a true understanding of what your intentions are versus what perceptions are.
Appreciate it.
And thank you for hosting this again.
Look, I think it's also important that the company is not just me. We have people in the company who are really good at this and are making some really tough decisions and having tough conversations and getting pushback and getting feedback, and they have the best intentions.
So, uh, let's – I'll get back into the meat of things, to get to beating the dead horse. I don't know if you have any data on why Jacob Wohl was recently banned.
Do you have that?
I believe.
Who is Jacob Wohl?
He's a – I don't know how to describe him. He's a conservative personality, but he's very, very controversial for, like, fake news or something. I don't know too much about him, so I don't want to accuse him of things, because I don't know who he is. But he was in something where he tried accusing Mueller of, like, sexual assaults, and it turned out to be just completely fake, ridiculous.
This is a gentleman that was in the USA Today article where he admitted that he had used tactics in the past to influence the election and he will continue to do so using all of his channels.
Yes.
And so when we saw that report, our team looked at his account, we noticed there were multiple accounts tied to his account. So fake accounts that he had created that were discussing political
issues and pretending to be other people.
How do you find that out?
We would have phone numbers, linking accounts together, or email addresses, in some cases,
IP addresses, other types of metadata that are associated with accounts. So we can link those accounts together.
And having multiple accounts in and of itself is not a violation of our rules,
because some people have their work account, their personal account. It's when you're
deliberately pretending to be someone else and manipulating a conversation about a political
issue. And those are exactly the types of things that we saw the Russians do, for example, in the 2016 election. So it was that playbook and that type of activity that we saw with Jacob Wohl. And that's why his accounts were suspended.
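The linking Vijaya describes – accounts tied together by shared phone numbers, email addresses, or IP metadata – is essentially a clustering problem over shared identifiers. As an illustrative sketch only (this is not Twitter's actual system; the record layout and field names here are hypothetical), it can be modeled with a union-find structure:

```python
from collections import defaultdict

def cluster_accounts(accounts):
    """Group accounts that share any identifier (phone, email, IP)
    into clusters using a union-find (disjoint-set) structure."""
    parent = {a["id"]: a["id"] for a in accounts}

    def find(x):
        # Walk to the root, compressing the path as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Index accounts by each identifier value they carry.
    by_identifier = defaultdict(list)
    for acct in accounts:
        for key in ("phone", "email", "ip"):
            if acct.get(key):
                by_identifier[(key, acct[key])].append(acct["id"])

    # Any two accounts sharing an identifier join the same cluster.
    for ids in by_identifier.values():
        for other in ids[1:]:
            union(ids[0], other)

    clusters = defaultdict(set)
    for acct in accounts:
        clusters[find(acct["id"])].add(acct["id"])
    return [sorted(c) for c in clusters.values()]
```

Note that, as Vijaya says, a cluster by itself proves nothing – multiple accounts are allowed – so a sketch like this would only surface candidates; the violation is the combination with deceptive behavior.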
Did you investigate Jonathan Morgan?
I don't know who that is.
Why? That's the important question. Why?
It may be that someone at Twitter investigated him. I personally don't know who that is.
So one of the issues that I think is really important to get to is you should know who he is.
He's more important than Jacob Wohl is.
But for some reason you know about this conservative guy and not the Democrat who helped meddle in the Alabama election.
So, Jonathan, according to –
This is a sheer volume that they have to pay attention to in all fairness.
Right, right, right.
But it's about grains of sand making a heap and the flow of a direction, where we can see Jacob Wohl has said he's done this, so you're like, we're going to investigate.
We ban him.
It was recently reported and covered by numerous outlets that a group called New Knowledge was meddling in the Alabama election by creating fake Russian accounts to manipulate national media into believing that Roy Moore was propped up by the Russians.
Facebook banned him as well as four other people, but Twitter didn't.
I believe we did ban the accounts that were engaged in the behavior.
I do remember sending this to Vijaya and our team.
That's worse, though.
So you didn't ban the guy doing it, but you banned the people.
So in the case of Jacob Wohl, we were able to directly attribute through email addresses and phone numbers
his direct connection to the accounts that were created to manipulate the election.
If we're not able to tie that direct connection on our platform
or law enforcement doesn't give us information to tie attribution,
we won't take action.
And it's not because of political ideology.
It's because we want to be damn sure before we take action on it.
So someone could use a VPN, perhaps, and maybe additional email accounts,
and they could game the system in that way.
There are certainly sophisticated ways that people can do things
to mask who they are and what accounts that they're controlling.
And just the internal conversation, Tim,
just to provide more light into what happens.
Like, I got an email or a text from Vijaya one morning that said,
we are going to permanently suspend this particular account. And it's not a,
you know, what do you think? It's, we are going to do this. And I then have an opportunity to
ask questions. I asked a question, why? She gave me a link back to the document of all the findings
and USA Today. We took the action. I was on Twitter. A bunch of people pointed me at this particular case, sent some of those tweets to her, what's going on.
So that's in the background.
Wouldn't you just terminate anybody associated with the company that was doing this?
I mean, keep in mind, too, at the time when this campaign was happening.
On what basis?
He admitted to engaging in the operation in a quote to New York Times, and you banned the accounts associated with it.
So if you know he's the one running the company, wouldn't you be like, okay, you're gone?
Do you want us to take every single newspaper account's attribution?
Because what we were able to do in the Jacob Wohl situation was actually tie those accounts in our own systems.
Right.
That he actually controlled the accounts, not just take the word of a newspaper article.
You said you banned his accounts.
Yes.
And you know from his own statement and from his tweets that he was the one running the
company.
Jacob Wohl.
No, no, no, no.
Jonathan Morgan.
Oh, sorry.
I'm getting confused about what we're talking about.
So Jacob Wohl – it's announced in the USA Today, he says, I'm doing this.
And you're like, okay, we can look at his account.
We can see it.
We get rid of him.
With new knowledge, you said you did take those accounts down.
I believe we were able to take down a certain cluster of accounts
that we saw engaging in the behavior,
but we weren't necessarily able to tie it back to one person controlling those accounts.
Even if they say they did it?
And this is where I get back.
We like to have some sort of attribution that's direct that we can see.
Would we just take any newspaper or any article at face value and just action them?
Would you have to contact him and get some sort of a statement from him in order to
take down his account? I mean, I don't think he would admit to manipulating Twitter if Twitter
asked him. But if you could get the fact that he communicated with a newspaper, right?
To clarify, what they said, what they claimed to The New York Times, was that it was a false flag.
New York Times said they reviewed internal documents that showed they admitted it was a false flag operation.
The guy who runs the company said, oh, his company does this.
He wasn't aware necessarily, but it was an experiment.
So he's given, kind of, in my opinion, duplicitous, like, you know, not straightforward answers. But at the time of this campaign, which he claims not to have known about, he tweeted that it was real. So during the Roy Moore campaign, he tweets, wow, look at the Russians. Then it comes out later his company is the one that did it. So you're kind of like, oh, so this guy was propping up his own fake news, right? Then when they get busted, he goes, oh, no, it's just my company doing an experiment. But you tweeted it was real. You used your verified Twitter account to push the fake narrative your company was pumping on this platform.
And so the point I want to make, I guess, is –
It sounds like we need to take a closer look at this one.
Ban them.
Bring back Morgan Murphy.
Well, Meghan Murphy.
Meghan Murphy.
Sorry.
Morgan Murphy is a friend of mine.
Sorry, Morgan.
So this is – I haven't read the story.
It's been like two months since the story broke.
So I could have my – I don't want to get sued and have my facts wrong.
But the reason I brought this up was not to accuse you of wrongdoing.
It was to point out that I don't think that the people who work at Twitter are twirling their mustaches, laughing, pressing the ban button whenever they see a conservative.
I think it's just there's a bias that's unintentional that flows in one direction. So you see the news about Jacob Wohl. And I think
there's a reason for it, too. There's a couple of reasons. For one, your staff is likely more,
you've mentioned more likely to lean left and look at certain sources. So you're going to hear about
more things more often and take action on those things as opposed to the other side of the coin.
But we have to consider like where the actions are taking place.
I'm speaking more broadly to the 4,000 people that we have as a company versus the deliberateness
that we have on Vijaya's team, for instance.
I just mean when we look at a company-wide average of all of your employees and the direction
they lean versus the news sources they're willing to read, you're going to see a flow
in one direction, whether it's intentional or not.
And so I think the challenge is...
But we don't generally rely on news sources to find manipulation of our platform.
We're looking at what we're seeing, the signals we can see.
And once in a while, we will get tipped off to something.
But for the most part, when we're looking at manipulation,
it's not like the New York Times can tell us what's going on on the platform.
We're the ones that have the metadata behind accounts.
We're the ones that can see patterns of behavior at scale.
But I hear your point.
I knew one name and I didn't know another name.
And it was because Vijaya said, you know, we're permanently banning this account.
And yes, we didn't have the same sort of findings in the other particular account, which I got feedback on, passed to her, and we didn't find what we needed to find.
But to be clear, the team had taken action on that stuff months ago
when it actually had happened.
Got it.
Yeah.
I think a lot of what people assume is malintent is sometimes fake news.
I think one of my biggest criticisms in terms of what's going on in our culture
is the news system is, like you pointed out, although it's changed,
left-wing journalists only follow themselves.
That's my experience.
I've worked for these companies.
And so they repeat these same narratives. They don't get out of their bubble.
Even today, they're still in a bubble and they're not seeing what's happening outside of it.
And then what happens is, you know, according to data, I think this is from Pew,
most new journalism jobs are in blue districts. So you've got people who only hear the same thing.
They only cover the same stories. So, you know, we hear about Jussie Smollett, we hear about how the story goes, it goes wild. But there's, like, 800 instances of Trump supporters wearing MAGA hats getting beaten, you know, throughout the past couple years. We had a guy show up to a school in Eugene, Oregon, with a gun and fire two rounds at a cop, wearing a Smash the Patriarchy and Chill shirt. And those stories don't make the headlines. So, you know, when the journalists are inherently in a bubble, the information you're going to get as a big company who follows these news organizations is going to be inherently, you know, one-sided as well.
And then the only action you're going to be able to take is what you know.
You can't ban someone if you don't know they're doing it.
I hear you.
I think our biggest issue and the thing that I want to fix the most is the fact that we create and sustain and maintain these echo chambers.
Yeah.
Well, you're rolling out that new feature that allows you to hide replies, right?
We're testing. We're experimenting with an ability to enable people to have more control
as you would expect a host over the conversation.
Like Facebook allows that.
Yeah, but I don't think they have the level of transparency that we want to put into it.
So we actually want to show whether a comment was moderated and then actually allow people to see those comments.
So both showing the action that this person moderated a particular comment, and then you can actually see the comment itself.
It's one click, one click over, one tap over.
That's how we're thinking about it. It might change in the future, but we can't do this without a level of transparency, because otherwise we'd minimize something Vijaya spoke to earlier, which is speaking truth to power, holding people to account.
Even things like the Fyre Festival, where you had these organizers who were deleting every single comment, moderating every single comment that called this thing a fraud and said don't go here. We can't reliably, and just from a responsibility standpoint, ever create a feature that enables more of that to happen. And that's how we're thinking about even features like this.
I'm going to jump right off onto a different train car here. Has law enforcement ever asked you to keep certain people on the platform even after they violated your rules?
Not that I'm aware.
So then this, you know, to the next question pertaining to bias, you have the issue of Antifa versus the Proud Boys and Patriot Prayer.
And Twitter permanently excised anyone associated with the Proud Boys, while Antifa accounts who have broken the rules repeatedly – branded, known cells that have been involved in violence – are all still active.
Is there a reason?
Well, with the Proud Boys, what we were able to do was
actually look at
documentation and announcements that
the leaders of that organization
had made and their use of violence in the real
world. So that was what we were focused on
and subsequent to our decision, I believe
the FBI also designated them.
That's not true.
No? Okay.
No, that's not true.
Yeah. You know, the Proud Boys started out as a joke. Gavin McInnes – Anthony Cumia, who was a part of Opie and Anthony and now has his own show, told me about it. It happened on his show, because there was a guy that was on the show, and they made a joke about starting a gang based on him, because he was a very effeminate guy, and they would call him the Proud Boys.
And they went into detail about how this thing grew from a joke, saying that you could join the Proud Boys, and everyone being silly, to people joining it, and then it becoming this thing to fight Antifa, and then becoming infested with white nationalists. Well, in many ways it was, but it's been documented how it started and what it was, and it's been misrepresented as to why it was started.
I think there's some things that should be clarified about them, but Gavin has made a bunch of statements that cross the line.
Yes.
He claims to be joking.
Well, he did on my podcast.
He was talking to me about Antifa, that when Antifa was blocking people like Ben Shapiro's speeches and things along those lines and stopping conservatives from speaking, you should just punch them in the face.
We're going to have to start kicking people's asses.
I was like, this is not just irresponsible, but foolish and short-sighted and just a dumb way to talk.
So then you have the Antifa groups that are engaging in the same thing.
The famous bike lock basher incident, where a guy showed up and hit seven people over the head with a bike lock.
They subsequently released his name.
I'm going to leave that out for the time being.
You have other groups like By Any Means Necessary. You have, in Portland, for instance, specific branded factions. There's the tweet I mentioned earlier where they doxxed ICE agents and they said, do whatever inspires you with this information. And, I mean, you're tagged in it a million times – I know you probably can't see it – but you can actually see that some of the tweets in the thread are removed, but the main tweet itself, from an anti-fascist account, linking to a website straight up saying, like, here's the private home details, phone numbers, addresses of these law enforcement officers, has not been removed since September.
So what you end up seeing is, again, to the point – I think one of the big problems in this country is the media, because it was reported that the FBI designated the Proud Boys an extremist group, but it was a misinterpretation: a sheriff wrote a draft saying, you know, the FBI considers them to be extremists. The media then reported hearsay from the sheriff, and the FBI came out and said, no, no, no, we never meant to do that.
That's not true.
We are just concerned about violence.
So the Proud Boys all get purged.
And again, I think, you know, Gavin's a different story, right?
If you want to go after the individuals who are associating with that group versus the
guy who goes on his show and says outrageous things and goes on Joe's show.
But then you have Antifa branded cells. What I mean by that is they have specific names,
they sell merchandise, and they're the ones showing up throwing mortar shells into crowds.
They're the ones showing up with crowbars and bats and whacking people. I was in Boston,
and there was a rally where conservatives were planning on putting on a rally. It was literally just like libertarians and conservatives.
Antifa shows up with crowbars, bats, and balaclavas with weapons, threatening them.
And so I have to wonder if these people are allowed to organize on your platform.
Are you concerned about that?
Why aren't they being banned when they violate the rules?
Yeah, absolutely.
We're concerned about that.
Has the FBI designated them as a domestic terrorist organization?
I'm sorry. Homeland Security in New Jersey has listed them under domestic terrorism.
Okay, so here, I understand there's a conundrum, in that the general concept of anti-fascism is a loose term that means you oppose fascism, right? But Antifa – now they have a flag. They've had a flag since, you know, the Nazi Germany and Soviet era.
And they've brought it back. There are specific groups that I'm not going to mention by name that have specific names, and they sell merchandise. They've appeared in various news outlets. They've expressed their desire to use violence to suppress speech.
Is it a centralized organization the same way that – I hear you on the Proud Boys, but, like, where they have, like, tenets that are written out and there's a leader?
No, it's not the same, but there are specific branded cells. So that's why I bring them up specifically. I realize, you know, someone showing up to a rally wearing a black hoodie and sunglasses – who are you going to ban? But there are groups that organize, that specifically call for violence. They push the line as closely as possible, they advocate sabotage and things like this. And, you know, when the Proud Boys go out and get into fights, they're not getting in fights with themselves.
And I should point out that they decided to call for violence based on Antifa calling for violence.
And based on Antifa actually actively committing violence against conservative people who were there to see different people speak.
Well, it partly started because in Berkeley, there was a Trump rally.
So actually, after Milo got chased out of Berkeley, there was $100,000 in damages.
I mean, there's a video of some guy in all black cracking someone on the back who's on the ground looking like they're unconscious.
So these conservatives see this, and they decide to hold a rally saying we won't back down.
They hold a rally in Berkeley, and then Antifa shows up.
Again, I understand you can't figure out who these people are for the most part.
They're decentralized.
But then this incites an escalation. You then get the rise of the Based Stick Man, they called it. This guy shows up in armor with a stick and he starts swinging back, and now you have two factions forming. So while I recognize it's much easier to ban a top-down group, there are – you know, the difference, I guess, is, while you look at the Proud Boys, it's straight top-down, vertical. You look at Antifa, and there's different cells of varying size, and there are different accounts.
So I'd have to – I guess the argument I could make is, if you're going to ban the Proud Boys, by all means, under your justification. But if you look at a specific channel that's got 20,000 followers that cheers them on – these are people who throw mortar shells into crowds.
Isn't that advocating for terrorism, incitement to violence?
Yeah, absolutely.
So I guess the question is, how come they don't get removed?
Well, in the past, when we've looked at Antifa, we ran into this decentralization issue,
which is we weren't able to find the same type of information that we were able to find about
Proud Boys, which was a centralized, leadership-based documentation of what they stand for.
But absolutely, I mean, it's something that we'll continue to look into.
And to the extent that they're using Twitter to organize any sort of offline violence,
that's completely prohibited under our rules, and we would absolutely take action.
Can I ask you why Gavin was banned?
Was there a specific thing that he did, or was it his association with the Proud Boys?
His association with the Proud Boys.
You know, he's abandoned that. Not only that, he's disassociated himself from it and said that it completely got out of hand. He doesn't want to have anything to do with it.
Yeah. And I think this is a great, again, test case for how we think about getting people back on the platform.
Yeah. He's an interesting case, because he's really a provocateur, and he fancies himself, you know, sort of a punk rocker, and he just likes stirring shit. I mean, when he came on my show, the last time he was on, he was dressed up like Michael Douglas in Falling Down. You know, he did it on purpose. He brought a briefcase and everything. I'm like, what are you doing? It's like Michael Douglas in Falling Down. He's a showman in many ways, and he did not mean for this to go the way it went. He thought it would be this sort of innocent fun thing to be a part of, and then other people got involved in it. And then, when people call for violence, the problem is they think that, you know, you're going to just hit people and that's going to solve a problem. It just creates a much more comprehensive problem.
It's important to point out, Gavin has said –
like he said things way worse than Alex Jones ever did.
Sure.
Whether you want to say it's a joke or not,
he said things like choke him, punch him directly.
Yeah, yeah, yeah.
Uh-huh, yep, he did.
But I guess, was the primary reason for getting rid of them that you thought the FBI had designated them an extremist group?
No, because we did it months in advance.
Oh, okay.
Yeah, I was just pointing that out.
So it was just his association with the Proud Boys?
I don't recall, and I would have to go back,
and I don't want to misstate things.
I don't recall whether those statements that you're referring to of Gavin's
were on Twitter.
So they weren't.
There's another – you know, when it comes to the weaponization of rules: Gavin isn't creating a compilation of things he's ever said out of context and then sending them around to get himself banned. Other people are doing that to him, activists who don't like him. And it's effective. In fact, I would actually like to point out, there's one particular user who has repeatedly made fake videos attacking one of your other high-profile conservatives, so much so that he's had to file police reports, harassment complaints, and it just doesn't stop, you know. So I guess I'll ask this in that regard.
If someone repeatedly makes videos of you out of context, fake audio, accusing you of
doing things you've never done, at what point is that bannable?
Yeah, and if it's targeted harassment and we can establish it, it's just a really hard
thing with us determining whether something is fake or not.
Well, it's also, when things are out of context, you still have video of the person saying that. I agree that it's out of context and it's disingenuous, but it's still the person saying it, and you're making a compilation of some pre-existing audio or video. So I think in the instance of Gavin, like, one of the things he said was like a call to violence, but it was in the context of talking about a dog being scolded.
Yeah.
So he was like, hit him, just hit him. And then it's like, it turns out he was talking about a dog doing something wrong.
Right.
And they take that and they snip it
and then it goes viral
and then everyone starts flagging,
saying you got to ban this guy.
So again, I understand, you know.
But I guess the issue is
if people keep doing that to destroy someone's life.
So I think there's a bigger discussion,
I think both of you could probably shed
some important light on too outside of Twitter.
This weaponization of content from platforms is being used to get people banned from their banking accounts.
We can talk about Patreon, for instance.
And again, this may just be something you could chime in on.
Patreon banned a man named Carl Benjamin, also known as Sargon of Akkad.
He's also banned from Twitter.
Why? Do you know why he got banned from Twitter?
I can see.
That's an interesting one.
I do have some of the details here.
Do you want me to read them?
Yeah, please.
Okay.
Looks like it's going to be gross.
It's not stuff that I love saying, but I will say it.
Want Jack to say it?
I should make Jack say it.
He doesn't like cursing either.
Let's see.
I curse more than he does, so I guess I should say it.
First strike.
Fuck white people.
Kill all men.
Die.
Cis cum.
None of the above qualify as hate speech.
When was that?
I don't have the dates.
I'm sorry.
But he's a white guy.
I mean, obviously,
he's joking around there.
Him saying,
fuck white people.
It also sounds like
he's trying to make a point
about your rules
and how you enforce them,
not actually.
Possibly.
Which is also exactly
why he got kicked off of Patreon.
Exactly.
Yeah.
Well, I know he also posted
a photo of interracial gay porn
at some white nationalists
to make them angry.
Yes.
Yeah, he's funny. Well, he's funny sometimes.
I can understand how posting that photo is an egregious violation of the rules, whether or not he was trying to insult some people.
That's a very good point, and I wanted to bring that up. Is porn a violation of the rules?
Porn? Generally, no.
Good. Really? Good for you. Oh, so then why would... Because it happens in my feed all the time. I follow a couple naughty girls, and occasionally they post pictures of themselves engaging in intercourse. I'm like, yikes. So then, why... What else were the other strikes for Sargon, or Carl?
Let's see. There was the use of a Jewish slur.
How did he use it?
To a person, you traitor, remainer, white genocide supporting Islamophile, Jewish slur lover.
That should keep you going.
Hashtag Hitler was right.
But these aren't general opinions.
These are targeted.
These are targeted at somebody.
That sounds like he's making a joke. You know, I understand. In context, it sounds like the other one. Like, in context, what he's saying, particularly the fact that he's a white guy, that doesn't sound like a racial slur at all. I mean, he's saying fuck white people, and he is white.
In context, again, these are tied together, right? I always knew that person was not to be trusted, that fucking Jewish slur. So there's a very specific person he's targeting. He's trying to be very provocative, saying this about a specific Jewish person. I don't know the race of this person, I'm sorry. And this is not okay. But this is not parody, this is not joking around. We didn't view it that way. I'm not trying to re-litigate all this, I'm just telling you what they were.
So, I knew he had done things that were like egregious violations of the rules, because, you know, plain and simple. I didn't bring him up to go through and try and figure out... But it does sound like at least the first one was meant to be a critique of...
Yeah, so, potentially. But there are a bunch of others if you want to hear more of them.
Sure, keep it rolling.
This is, again, targeted: This is how I know one day that I'll be throwing you from a helicopter. You're the same kind of malignant cancer, don't forget it. So it's not one thing or two things or three things. This is like a bunch of them.
Delusions of grandeur. Imagine thinking you'd throw someone from a helicopter.
Well, he doesn't really get you in that helicopter. But, admittedly... And so he's on YouTube by the name of Sargon of Akkad. He's a big account.
And I've criticized him for being overly mean in the past.
And I think it's exemplified.
He definitely gets angry.
But he is very different now.
And I guess the reason I brought him up was not to.
He's very different now.
How so?
Well, a lot of the content he makes is much calmer.
He's less likely to insult someone directly.
He's probably recognizing that he's on his last straw.
Oh, definitely.
I mean, he's been kicked off of Twitter.
He's on YouTube.
He's probably got to mind his P's and Q's.
Oh, but so the reason I brought him up, again, but we'll move on, was that activists found a live stream from eight months ago.
I totally forgot why I was bringing this up because we've moved so far away from where we were.
But they pulled a clip from an hour and a half or whatever into a two-hour live stream on a small channel that only had 2,000 views, sent it to Patreon, and then Patreon said, yep, that's a violation and banned him outright without warning, which, again, I understand is different from what you guys do.
You do suspensions first.
But I guess the reason I was bringing it up was to talk about a few things, why blocking isn't enough, why muting isn't enough.
And if you think that it's driving people off the platform,
people post my tweets on Reddit.
I block them, they use a dummy account,
load up my tweet, post it to Reddit,
and then spam me on Reddit.
So, you know, blocking and even leaving Twitter
would never do anything short of me shutting up.
There's nothing you can do to protect me or anyone else.
Look, I mean, these are exactly the conversations we're having.
One, the reason why I don't think blocking and muting are enough is, one, I don't think we've made mute powerful enough.
It's spread all over the service.
You can use it, and then you've got to go find where you actually muted these people or their profile page, and that's just, it's a disaster.
It just doesn't work in the same way that it should work in the same way that follow
works, which is just the inverse of that.
I noticed that now I get a notification that says
you can't see this tweet because you muted this person.
Before, I would just see a weird reply
and be like, oh, it's one of those.
Exactly. So there's also all this
infrastructure that we have to fix
in order to pass those through in terms
of what action you
took or what action someone else took to be transparent about what's happening on the network.
The second thing, block is really interesting.
I think it's, my own view is it's wholly unsatisfying
because what you're doing is you're blocking someone.
They get notification that you've blocked them, which may embolden them even more, which causes others around and ramifications from the network.
But also that person can log out of Twitter and then look at your tweets just on the public web because we're public.
So it doesn't feel as rigorous and as durable
as something like making mute much stronger.
But I guess the challenge is,
no matter what rule you put in place,
people are going to harass you.
If you're engaging in public discourse,
if I go out in the street and yell out my opinion,
somebody could get in my face.
If I get off Twitter, because I'm sick of it.
I mean, look, I'm sure you get it way worse than I do,
especially as, you know, the high profile.
Probably getting it right now.
Yeah, absolutely.
Oh, me too.
God, I can only imagine.
So the only thing I can do is, look, we're not on Twitter right now.
We're on Joe Rogan's podcast, and they're still going to target you on Twitter.
They're still going to.
I guarantee we're all over Reddit.
The left is probably railing on me.
The right's railing on you guys.
So it seems like even if you try everything in your power to make Twitter healthier and better, it's not going to change anything.
No, I'm not sure about that. I'm not sure about that, because one of the things that I do think is... I'm not in favor of a lot of this heavy-handed banning and a lot of the things that have been going on, particularly a case like the Meghan Murphy case. But what I think we are doing is we're exploring the idea of civil discourse.
We're trying to figure out what's acceptable and what's not acceptable.
And you're communicating about this on a very large scale.
And it's putting that out there, and then people are discussing it.
Whether they agree or disagree, whether they vehemently defend you or hate you, they're discussing this. And this is, I think, how these things change. And they change over long periods of time. Think about words that were commonplace just a few years ago that you literally can't say anymore, right? You know, I mean, there's so many of them that were extremely commonplace or not even thought to be offensive 10 years ago that now you can get banned off of platforms for.
But that's a good point to argue against banning people and to cease enforcing hate speech rules.
I agree with that as well. I think it's both things. Let me tell you something important. I was in the UK at an event for a man named Dankula, who I don't know if you've heard of.
Oh, sure.
Yeah.
Yeah, Dankula is the guy who got charged and convicted of making a joke where he had his pug do a Nazi salute.
But I was there and I was arguing that a certain white nationalist had used racial slurs on YouTube.
He has.
I don't want to name him.
And some guy in the UK said, that's not true.
He's never done that.
And I said, you're crazy.
Let me pull it up. Unfortunately, I don't know why, but when I did the Google search, nothing
came up. What I did notice was at the bottom of the page, it said due to, you know, UK law,
certain things have been removed. So I don't know if that's exactly why I couldn't pull up a video
proving or tweets or anything, because I think using these words gets stripped from the social
platforms. I could not prove it to this man in that country, in the UK.
That this man could use a VPN and get around that?
Yeah, I mean, at the time I was just trying to pull it up and I'm like, oh, that's weird. So now you have someone who doesn't realize he's a fan of a bigot, because the law has restricted the speech. So there's a point to be made. I understand you want a healthy,
like you want Twitter to grow.
You need it to grow.
The shareholders need it to grow.
The advertisers need to advertise.
So you've got all these restrictions,
but allowing people to say these awful things
makes sure we stay away from them.
And it allows us to avoid certain people.
And isn't it important to know
that these people hold these beliefs?
If you get rid of them,
you know, someone could walk into a business and you wouldn't even know that they were a neo-Nazi.
But if they were high profile saying their things, you'd be like, that's the guy I don't like.
You're absolutely right.
This is like one of my favorite sayings is that sunlight is the best disinfectant.
And it's so, so, so true.
Like one of the biggest problems with censorship is the fact that you push people underground and you don't know what's going on.
And this is something I worry about. It's not that I don't worry about it.
Well, then why do you ban people for these rules?
Because I also worry about driving people away from the platform and affecting their real lives. So we're trying to find this right balance. And I hear you. You may not think we're drawing the lines in the right place, and we get that feedback all the time, and we're always trying to find the right places to do this. But I worry as much about the underground, and being able to shine a light on these things, as anything else.
Tim, I think it's a cost-benefit analysis, and we have to constantly rehash it and do it. We have the technology we have today, and we are looking at technologies which open up the aperture even more. And we all agree that a binary on or off is not the right answer and is not scalable.
We have started getting into nuance within our enforcement.
And we've also started getting into nuance with the presentation of content. So, you know, one path might have been for some of your
replies for us to just remove that, those, you know, offensive replies completely. We don't do
that. We hide it behind an interstitial to protect the original tweeter and also folks who don't want
to see that. They can still see everything. They just have to do one more tap. So that's one solution. Ranking is another solution. But as technology gets better and we get better at applying it, we have a lot more optionality, whereas we don't have that as much today.
I feel like, you know, I'm just going to reiterate an earlier point, though. If you recognize sunlight is the best disinfectant, it's like you're chasing after a goal that can never be met. If you want to protect all speech and then you start banning certain individuals, you want to
increase the amount of healthy conversations, but you're banning some people. Well, how long until
this group is now offended by that group? How long until you've banned everybody?
I hear you. I don't believe a permanent ban promotes health.
Oh, okay.
I don't believe that, but we have to work with the technologies, tools, and conditions that we have today and evolve over time to where we can see examples like this woman at the Westboro Baptist Church who was using Twitter every single day to spread hate against the LGBTQA community.
And over time, we had, I think it was three or four folks on Twitter who would engage her every single day about what she was doing,
and she actually left the church.
That's Megan Phelps.
She's on that podcast.
She's amazing.
And she's now pulling her family out of that as well.
And you could make the argument that if we banned that account early on,
she would have never left the church.
I completely hear that.
We get it.
It's just –
Well, so let's – I just want to make sure we're advancing the conversation too and not just going to go back.
So I'll just ask you this.
Have you considered allowing some of these people permanently banned back on with some restrictions?
Maybe you can only tweet twice per day.
Maybe you can't retweet or something to that effect.
I think we're very early in our thinking here.
So we're open-minded to how to do this. I think we agree philosophically that permanent bans are an extreme
case scenario and it shouldn't be one of our, you know, regularly used tools in our tool chest.
So how we do that, I think is something that we're actively talking about today.
Is there a timeline that we can... So, look, you know, a lot of...
I think that would fix a lot of problems.
You think so?
Yes, I really do.
I'm just curious, are you thinking like bans of a year, or five years, or ten years? What is a reasonable ban in this kind of context?
Well, I think reasonably someone should have to state their case as to why they want to be unbanned.
Someone should have to have a well-measured, considerate response to what they did wrong. Do they agree
with what they did wrong? Maybe perhaps
saying why they don't think they did
anything wrong, and you could review
it from there. I think
one of the challenges is we have the benefit in English
common law of hundreds of years of precedent
and developing new rules and figuring out what
works and doesn't. Twitter's very different.
So I think with the technology, I don't know
if you need permanent bans
or even suspensions at all.
You could literally just,
I mean,
lock someone's account
is essentially suspending them.
But again,
I wouldn't claim to know anything
about the things you go through,
but what if you just restricted
most of what they could say?
You blocked certain words
in a certain dictionary
if someone's been,
if someone received...
That seems such a greased hill.
Well, but no, think about it this way. Is it better that they're permanently banned?
No, it's not better, but it's not good either.
No, no, think about it this way. Instead of being suspended for 72 hours, you get a dictionary block from hate speech words, right? Does that not make sense?
But people just use coded language. This is what we see all the time.
Yeah, I don't think that's a good move.
What do you think about perhaps, instead of... Is it possible to have levels of Twitter,
like a completely uncensored, unmoderated level of Twitter,
and then have like a rated R and then have like a PG-13?
I mean, I don't think that's a bad idea.
We have those levels in place today, but you don't really see them.
One, we have a not safe for work switch,
which you can turn on or off. Oh, really?
Not safe for work switch?
I think you have it off, Joe. Do I?
You think so? Just based on other
things you've said. Based on what you're seeing, you have it off.
I don't even know it's there.
So we have that, and then as
Vijaya pointed out earlier, we have the timeline.
We started ranking the timeline
about three years ago.
We enable people today to turn that off
completely and see
the reverse chron of everything they follow.
You can imagine
a world where that switch has
a lot more power over more of
our algorithms throughout more of the surface areas.
You can imagine that. So these are
all the questions that are on the table. You asked about timeline, and this is a challenging one. I don't
know about timeline because first, we've decided that our priority right now is going to be on
proactively enforcing a lot of this content specifically around anything that impacts
physical safety, like doxing.
Right, but there's so many examples of you guys not doing that.
I know, but that's what we're fixing right now.
That's a prioritization.
Yeah, I think from your own personal perspective.
We think more in terms of milestones than the particular timeline.
We're going to move as fast as we can, but some of it's a function of our infrastructure, of the technology we have to bring to bear.
Do you guys have conversations about trying to shift the public perception of having this left-wing bias, maybe possibly addressing it?
Yeah, all the time. Going on right now, right?
Yeah. I mean, I went on the Sean Hannity show. You know, we brought ourselves before it. It was to bring a lot of sunlight.
How was that?
It was short.
Well, it was short, and there weren't a lot of really tough questions,
and that was the feedback as well.
I get it.
Look, again, I'm from Missouri.
My dad is a Republican.
He listened to Hannity.
He listened to Rush Limbaugh.
My mom was a Democrat.
And I feel extremely fortunate that I was able to first see that spectrum
but also feel safe enough to express my own point of view.
But when I go on someone like Hannity, I'm not talking to Hannity.
I'm talking to people like my dad who listen to him.
Right.
And I want to get across how we think and also that our thinking evolves
and here's the challenges we're seeing and, like, this is our intent.
This is what we're trying to protect, and we're going to make some mistakes along the way. And we're going to admit
to them. We didn't admit to them in the past. We've admitted to a lot more over the past three years.
But, you know, I don't know any other way to address some of these issues. It all goes back
to trust. Like one of our core operating principles is earning
trust. How do we earn more trust? And there are people in the world who do not trust us at all.
And there are some people who trust us a little bit more. But this is the thing that we want to
measure. This is the thing that we want to get better at. I saw you had a conversation with,
I think, Katie Herzog. No, no, no. Who was it? That was the wrong person. You had a Twitter
conversation with Kara Swisher. Wow. Wrong person, but someone just got a shout out. And I see that the left goes at you in the opposite direction. They want more. They want more banning. They want more restrictions. And then the right is saying less.
Can you tell us what that conversation was about?
Do you want to summarize?
Because the thing I was pointing out specifically was that you were being asked to do more in terms of controlling.
Well, it wasn't just more, but to be a lot more specific about what actions we've taken to promote more health on the platform. Like what products did we change?
What policies did we introduce in the past two years?
So she was asking questions.
Every question she asked,
she wanted me to be a lot more specific.
And some of these things
have something that is very specific.
Some are directional right now
because we have to prioritize the direction.
And I talked about,
we've decided that physical safety
is going to be a priority for us.
And to us, that means being a whole lot more proactive around things like doxing.
So two suggestions, I guess.
I'm not going to imply that you have unlimited funding, but we did mention the peer review.
We don't.
Right, right.
And you mentioned earlier layoffs and retraction.
Peer review, which we mentioned, but have you just considered opening an office,
even a small one, for trust and safety in an area that's not predominantly blue so that at least you have, like, you can have some pushback?
And what does learn to code mean?
And then they could tell you.
Absolutely.
So that's great feedback.
And just so you know, the trust and safety team is also a global team.
And the enforcement team is a global team.
So it's not like people from California who are looking at everything, making decisions.
They're global.
Now, I hear your point about who trains them and the materials they have and all that.
And we have to think about that.
And that's one thing that Jack has really been pushing us to think about is how do we
decentralize our workforce?
Out of San Francisco.
Out of San Francisco in particular.
So this is something he's very focused on.
What about publishing evidence of wrongdoing in a banning?
So when people say, you know, what did Alex Jones really do?
Maybe a lot of people didn't realize what you saw.
And again, it's an issue of trust.
Yeah, I love this, Tim.
I'm a lawyer.
So by training, we're thinking of doing something we call case studies.
But essentially, like, this is our case law.
This is what we use.
And so high profile cases, cases people ask us about, like to actually publish this so that we can go through, you know, tweet by tweet just like this.
Because I think a lot of people just don't understand and they don't believe us when we're saying these things.
So to put that out there so people can see.
And again, they may disagree with the calls that we're making, but we at least want them to see why we're making these calls.
I think.
And that I do want to do.
I want to at least start that by the end of this year.
So, I think, you know, ultimately my main criticism stands and I don't see a solution to it, in that Twitter is unelected, unaccountable, as far as I'm concerned, when it comes to public discourse. You have rules that are very clearly at odds, as we discussed. I don't see a solution to that. And I think, in my opinion, we can have this kind of... Like, we've toned things down. We've had some interesting conversations. But ultimately, unless you're willing to allow people to just speak entirely freely, we have an unelected group with a near monopoly on public discourse in many capacities. And I understand it's not everything, Reddit is big too. And, you know, what I see is you are going to dictate policy whether you realize it or not, and that's going to terrify people, and it's going to make violence happen.
It's going to make things worse.
You know, I hate bringing up this example on the rule for misgendering, because, actually,
I understand it and I can agree with it to a certain extent.
And I have, you know, nothing but respect for the trans community.
But I also recognize we've seen an escalation in street violence.
We see a continually disenfranchised large faction of individuals in this country. We then see only one of those
factions banned. We then see a massive multinational billion-dollar corporation with
foreign investors. And it looks to me like if foreign governments are trying to manipulate us,
I don't see a direct solution to that problem. You have political views, you do enforce them, and that means that Americans who are abiding by American rule are being excised from political discourse. And that's the future. That's it.
Yeah, we do have views on the approach. And again, we ground this in creating as much opportunity as possible for the largest number of people, right? That's where it starts. And where we are today will certainly evolve. But that is what we are trying to base our rules and
judgments. And I get that that's an ideology. I completely understand it. But we also have to
be free to experiment with solutions and experiment with evolving policy and putting
something out there that might look right at the time and evolving. I'm not saying this is it, but
like we look to research, we look to our experience and data on the platform and we make a call. And
if we get it wrong, we're going to admit it and we're going to evolve it.
But I guess, do you understand my point?
I understand the point.
That there are American citizens abiding by the law who have a right to speak and be involved
in public discourse that you have decided aren't allowed to.
Yeah, and I think we've discussed, like, we don't see that as a win.
We see that as not promoting health ultimately over time.
But it's ultimately, what is your priority?
Do you have it prioritized in terms of what you guys would like to change?
I think Jack has said it a couple times,
but the first thing we're going to do is prioritize people's physical safety
because that's got to be understanding.
You already have done that pretty much, right?
No.
You do that more?
We've prioritized it. We're doing the work. I don't think companies like ours make the link enough between online and offline ramifications.
What's the main criticism that you guys experience? Is it censorship? Is it banning? What do you get the most?
It depends. Every person has a different criticism, so I don't think there's a universal opinion.
I mean, you just painted the picture.
Right.
Between the left end of the spectrum is asking for more.
Sure.
And the right is asking for less.
That's very simplified just for this country.
But at a high level, yeah, that's consistent.
I mean, my opinion would be as much as I don't like a lot of what people say about me, what they do, the rules you've enforced on Twitter have done nothing to stop harassment towards me or anyone else.
I swear to God, my Twitter, I mean, my Reddit is probably 50 messages from various far left and left wing subreddits lying about me, calling me horrible names, quote tweeting me.
And these people are blocked.
And I never
used to block people because I thought it was silly because they can get around it anyway.
But I decided to at one point because out of sight, out of mind. If they see my tweets less,
they'll probably interact with me less. But they do this. And they lie about what I believe. They
lie about what I stand for. And they're trying to destroy everything about me. And they do this to
other people. I recognize that. So ultimately, I say, well, what can you do? It's going to happen
on one of these platforms. The internet is a thing.
As they say on the internet, welcome to the internet.
So, you know, to me, I see Twitter trying to enforce all these rules to maximize good,
and all you end up doing is stripping people from the platform, putting them in dark corners of the web where they get worse,
and then you don't actually solve the harassment problem.
Reddit's hardly a dark corner of the web, right?
No, I'm not talking – but there are dark corners of Reddit.
There are alternatives.
I mean the internet isn't going to go away, and people have found alternatives.
And here's the other thing that's really disconcerting.
We can see a trend among all these different big Silicon Valley tech companies.
They hold a similar view to you guys.
They ban similar ideology, and they're creating a parallel society.
You've got alternative social networks popping up that are taking the dregs of the mainstream and giving them a place to flourish, grow, make money. Now we're seeing people be banned from Mastercard, banned from PayPal, even banned from Chase Bank, because they all hold the same similar ideology to you. In some capacities. I don't know exactly why Chase does it. I assume it's because you'll get some activists who will lie.
Explain what you're talking about.
So there have been a series of individuals banned from Chase Bank.
Like, their accounts have been...
Yes, their accounts were closed. One, I think maybe the most notable, might be Martina Markota. I don't know much about her. I follow her on Twitter, and her tweets are typical conservative fare. And she created a comic, I think it's called Lady Alchemy. She's a Trump supporter, and she got a notice that her business account was terminated. You then have Joe Biggs, who previously worked with InfoWars. I don't know much about this, I didn't follow up, but he tweeted out, Chase has shuttered my account. And then you have the new chairman of the Proud Boys, Enrique, I forgot his last name, Tarrio or something.
And is he really white?
Oh, no, he's Afro-Cuban. I know, that's what's hilarious. But, you know, so what I see across the board, it's not just... And this is what I wanted to
bring up before about a perspective on these things. You guys are like, we're going to do this one thing, and no snowflake blames itself for the avalanche. But now what do we have? We have conservatives being stripped from PayPal. We have certain, I'll just say individuals, stripped from PayPal, Patreon, financing. So they set up alternatives.
Now we're seeing people who have, like, you mentioned Westboro Baptist Church, and she's been de-radicalized by being on the platform.
But now we have people who are being radicalized by being pushed into the dark corners, and they're building, and they're growing.
And they're growing because there's this idea that you can control this, and you can't.
You know, I think you mentioned earlier that there are studies showing, and also counter-studies, that exposing people to each other is better.
I found something really interesting, because, whether or not people want to believe this, all of my friends are on the left. And some of them are even like socialists.
And they're absolutely terrified to talk, because they know they'll get attacked by the
people who call for censorship and try to get them fired. And when I talked to them, I was talking to
a friend of mine in LA and she said, is there a reason to vote for Trump? And I explained a very
simple thing about Trump supporters. This was back in 2016. I said, oh, well, you know, you've got a
lot of people who are concerned about the free trade agreements, sending jobs overseas. So they
don't know much about Trump, but they're gonna vote for him because he supported that. And so did Bernie. And then the response is, really? I didn't know
that. And so you have this, this ever expanding narrative that Trump supporters are, you know,
Nazis, and the MAGA head is the KKK hood. And a lot of this rhetoric, you know, emerges on Twitter.
But when a lot of these people are getting excised, then you can't actually meet these people
and see that they're actually people and they may be mean. They may be mean people.
They may be awful people, but they're still people.
And even if they have bad opinions, sometimes you actually, I think in most instances, you find they're regular people.
Well, part of the problem of calling for censorship and banning people is that it is sometimes effective, in that people don't want to be thought of as being racist, or in support of racism, or in support of nationalism, or any of these horrible things.
So you feel like if you support these bannings, you support positive discourse and a good society and all these different things.
What you don't realize is that this does create these dark corners of the web, and these other social media platforms evolve and form. I mean, when you're talking about bubbles, about these groupthink bubbles, the worst kind of groupthink bubble is a bunch of hateful people that get together and decide they've been persecuted, instead of, like we were talking about with Megan Phelps, having an opportunity to maybe reshape their views by having discourse with people who choose to or choose not to engage with them.
Well, let's think about the logical end of where this is all going. You want healthier conversation, so you're willing to get rid of some people, who then feel persecuted and have no choice but to band together with others. Mastercard, Chase, Patreon, they all do it. Facebook does it. These platforms are growing, they're getting more users, they're expanding, they're showing up in real life. And, you know, even if these people who are banned aren't the neo-Nazi evil, they're just regular people who have banded together, that forms a parallel finance system, a parallel economy. You've got Patreon alternatives emerging where people are saying, you know, we reject you. And now, on a platform where people say the most ridiculous things, now they have money.
It normalizes that as well.
That's what I mean by parallel society. To them, everything they're doing is just...
Right, yes. And you can't stop them anymore. And it develops hate for the opposing viewpoint. You start hating people that are progressive, because these are the people that... like, you and I have talked about the Data & Society report that labeled us as alt-right adjacent or whatever. Now more fake news is coming out about it, right?
Well, it's ridiculous. They connected... because you and I have talked to people that are on the right or far right, that somehow or another we're secretly far right, and that there's this influence network of people together. And it's just fake.
Well, it's a schizophrenic connection. It's like one of those weird things where people draw a circle: oh, you talked to this guy, and this guy talked to that guy, therefore you know that guy.
Well, so here's an expanded part of this problem. So you're probably not familiar, but a group called Data & Society published an entirely fake report labeling 81 YouTube channels as alt-right adjacent, or whatever they want to call it, including Joe Rogan and me. It's fake.
But you know what? A couple dozen news outlets wrote about it as if it was fact. You believe
the Proud Boys were labeled by the FBI as extremists when they actually weren't. It was a sheriff's report from someone
not affiliated with the FBI. But there are activists within media who have an agenda.
And we saw this with Learn to Code. It was an NBC reporter who very clearly is, you know, a left-wing identitarian, writing a story for NBC. Then your average American sees that NBC story,
thinks it's factual. Then everyone talks about it, then your people hear about it, then you start banning people.
So, you know, I guess to drive the point home, the snowflake won't blame itself for the avalanche.
You guys are doing what you think is right, so is Facebook, YouTube, Patreon, all these
platforms, and it's all going to result in one thing.
It's going to result in groups like Patriot Prayer and the Proud Boys saying, I refuse
to back down, showing up. It's going to result in Antifa showing up. It's going to result in more extremism.
You've got an Antifa account that published the home addresses and phone numbers that hasn't been
banned. That's going to further show conservatives that the policing is asymmetrical, whether it is
or isn't. And I think the only outcome to this on the current course of action is insurgency.
We've seen people planting bombs in Houston to try to blow up a statue. We saw someone plant a bomb at a police station in Eugene, Oregon. Two weeks before that, a guy showed up with a gun and fired two rounds at a cop, wearing a smash-the-patriarchy and shill shirt. So, you know, that happens, then a week later they say, you killed our comrade, then a week later a bomb is planted. I don't believe it's coincidence. Maybe it is. But I lived in New York. I got out. Too many people knew who I was. And there were people sending me emails with threats. And I'm like, this is escalating. You know, we've seen for the past years with Trump, we've seen Breitbart has a list of 640 instances of Trump supporters being physically attacked or harassed in some way. There was a story the other day about an 81-year-old man who was attacked.
And it seems like everything's flowing in one direction.
And nobody wants to take responsibility and say maybe we're doing something wrong, right?
That's why I brought up earlier that regulation is, in my opinion, inevitable.
Yeah, I mean, I don't think it's going to be the responsibility of any one company.
We have a desire, let me be clear, we have a desire to promote health in public conversation. And as we've said, I don't think a permanent ban promotes health over time. I don't. But we have to get there, and there are exceptions to the rule, of course. We just have work to do. The benefit of conversations like this is we're talking about it more, but people will naturally call us out.
Like, you got to show it as well.
Do you fear regulation?
I don't fear regulation if we're talking about regulation in the—
Government intervention.
In the job of—if a regulator's job is to protect the individual and make sure that they level the playing field and they're not pushed by any particular special interest,
like companies like ours who might work with a regulator to protect our own interest, that I think is incorrect.
I agree that we should have an agency that can help us protect the individual and level the playing field.
So I think oftentimes companies see themselves as reacting to regulation,
and I think we need to take more of an education role.
So I don't fear it.
I want to make sure that we're educating regulators on what's possible, what we're seeing, and where we could go.
When you say educating regulators, that's initiating a regulation.
I mean, you're...
Not necessarily.
I mean, we might just be talking...
By educating regulators, who are these regulators?
These are folks who might be tasked with coming up with a proposal for particular legislation
or laws to present to legislators.
So it's making sure that we are educating to the best of our ability.
This is what we are.
This is what we see.
This is where technology is going.
Do you think you can hold off regulation, though?
Do you think that by these approaches and by being proactive and by taking a stand and
perhaps offering up a road to redemption to these people and making clear distinctions between what you're allowing, what you're not allowing, you can hold off regulation?
Or do you disagree with what he's saying about regulation?
No, I don't believe our goal should be to hold off regulation.
I believe we should participate like any other citizen, whether it be a corporate citizen or an individual citizen, in helping to guide the right regulation.
So are you familiar, and I could be wrong on this because it's been like 15 years since I've done this.
Are you familiar with the Clean Water Restoration Act at all?
I don't expect you to be.
It's a very specific thing.
So at some point in, like, the early 70s, there was a river in Ohio.
And again, I could be wrong.
It's been 15 years.
I used to work for an environmental organization.
It started on fire.
And what was typically told to us was that all of these different companies said, we're doing the right thing. But, like I mentioned, the snowflake doesn't blame itself. So over time the river was so polluted it became sludge and lit on fire. And so someone said, if all of these companies think they're doing the right thing and they've all just contributed to this nightmare, we need to tell them: blanket regulation.
And so what I see with these companies like banking institutions, public discourse platforms, video distribution, I actually – I'm really worried about what regulation will look like because I think the government is going to screw everything up.
But I think there's going to be a recoil of... first, I think the Republicans... because I watched the testimony you had in Congress, and I thought they had no idea what they were talking about, nor did they care. There were, like, a couple people who made good points. But for the most part, they were like, I don't know, whatever.
And they asked about Russia and stuff. So they have no idea what's going on. But there'll come
a time when, you know, for instance, one of the one of the great things they brought up was that
by default, when someone in DC signs up, they see way more Democrats than Republicans.
Right? You remember that when you testified?
Yeah.
So, well, there's an issue.
And I don't think, I believe you when you say it's algorithmic, that these are prominent
individuals, so they get automatically recommended.
But then they're, you know, so again, the solution to that, like how do you regulate
a programmer to create an algorithm to solve that problem is crazy.
You're regulating someone to invent technology.
But I feel like there will be a backlash. Right now, one of the reasons we're having this conversation is that conservatives feel like they're being persecuted and repressed. So it's going to escalate. For me, it's not going to stop with these conversations.
So we've been having a lot of talks about this, particularly around algorithms. And one of the things that we're really focused on is not just fairness in outcomes, but also explainability of algorithms. And I know, Jack, you love this. So I don't know if you want to talk a little bit about our work there.
Yeah, I mean, so there's two fields of research within artificial intelligence that are rather
new, but I think really impactful for our industry. One is fairness in ML.
Fairness in what?
Fairness in machine learning and deep learning. So looking at everything from what data set is fed to an algorithm,
so like the training data set,
all the way to how the algorithm actually behaves on that data set,
making sure that it does not develop bias
over the longevity of the algorithm's use case.
So that's one area that we want to lead in,
and we've been working with some of the
leading researchers in the industry to do that, because the reality is a lot of this human judgment is moving to algorithms. And the second issue with it moving to algorithms is that algorithms today can't necessarily explain the decision-making criteria that they use. They can't explain in the way that, when you make a decision, you can explain why you made that decision. Algorithms today are not being programmed
in such a way that they can even explain that. You may wear an Apple Watch, for instance, it might
tell you to stand every now and then. Right now, those algorithms can't explain why they're doing
that, right? That's a bad example because it does it every 50 minutes. But as we
offload more and more of these decisions, both internally and also individually to watches and
to cars and whatnot, there's no ability right now for that algorithm to actually go through and
list out the criteria used to make that decision. So this is another area that we'd like to get
really good at if we want to continue to be transparent around our actions, because a lot of these
things are just black boxes and they're being built in that way because there's been no research
into like, well, how do we get these algorithms to explain what their decision is? That question
hasn't been asked.
My fear is it's technology that you need to build, but the public discourse is there. We know that foreign governments are doing this. We know that Democratic operatives in Alabama did this. And so I imagine, you know, with Donald Trump, he talked about an executive order for free speech on college campuses, so the chattering is here. Someone's going to take a sledgehammer to Twitter, to Facebook, to YouTube, and just be like, not understanding the technology behind it, not willing to give you the benefit of the doubt, just saying, I don't care why you're doing it, we are mad.
You know what I mean?
And then pass some bills, and then it's over.
Again, clarifying, I think you guys are biased, and I think what you're doing is dangerous,
but I think that doesn't matter.
It doesn't matter what you think is right.
It matters that all of these companies are doing similar things, and it's already terrifying people.
I mean, look, when I saw somebody got banned from their bank account, that's terrifying.
And PayPal has done this for a long time.
That seems like more egregious than being banned from any social media platform.
That seems to me to be worthy of a boycott.
Patreon issued a statement about a man. I believe his name is Robert Spencer,
and they said MasterCard instructed us to ban him.
And you know what?
I'll say this too.
Me mentioning Chase, PayPal, MasterCard terrifies me.
I'm on the Joe Rogan podcast right now calling out these big companies in defiance.
And we've already seen –
I would like to know all the specifics of why they chose to do that,
and I would hope that they would release some sort of a statement explaining why they chose to do that.
Maybe there's something we don't know.
There was a reporter – and I could be getting this wrong because I didn't follow it very much – with Big League Politics who said that after reporting on PayPal negatively, they banned him.
That's terrifying.
Just reporting on it in what way?
Like reporting on the Sargon of Akkad issue?
No, apparently he was a journalist. He wrote about something bad PayPal did.
Big League Politics is conservative.
And so all of a sudden he got a notification that they can't tell him why, but he's gone.
So I see these big tech monopolies.
I see YouTube, Facebook, Twitter.
I see PayPal, MasterCard.
And they're doing it.
And they all say they're doing the right thing.
But all of these little things they're doing are adding up to something nightmarish.
And some legislator is going to show up in a matter of time with a sledgehammer, and he's gonna whack your algorithm.
Well, it's really the same stupid logic I was talking about, where, you know, Gavin was saying punch people. When you punch people, it doesn't end there.
Oh, yeah. Ban them, ban them. It doesn't end there. It doesn't end there.
Yeah. You have to realize also, Twitter is how old now? 11 years old? 12 years old? 13 years old?
13 this month. 13 years old.
Well, 13 years from now, what are the odds that there's not going to be something else just like it?
Pretty slim.
Depends on how we do.
So listen, let's talk about the incestuous relationship that a lot of these journalists have in defending the policies you guys push.
Gab. A study was done, I talked about this last time, where they found five percent of the... I won't say tweets, but the posts on Gab are hate speech, compared to Twitter's, like, 2.4. So it's a marginal increase, yet Gab is called the white supremacy network. Of course, you go on it and, yeah, absolutely it exists. They say, that synagogue shooter, oh, he was a Gab user. He was a Twitter user too. He posted on Twitter all the time. So why is the media targeting... it's such a crazy reality.
It is a reductive narrative. When the Guardian, or I believe it was the Daily Mail, called Count Dankula a Nazi hate criminal, it's like, I saw that. Dude literally made a joke on YouTube, and he was arrested. I thank God every day we have the First Amendment in this country.
Well, there's a cover of a newspaper that was... because he got a new job somewhere, he got fired for that. He got kicked off the show.
Wow.
Yeah. So let me ask you another thing. Do you guys take the advice of the Southern Poverty Law Center?
Do we take the advice of, like...
So it's widely circulated that the SPLC lobbies various social platforms to ban certain people. They advise... it's been reported they advised YouTube, as did the Anti-Defamation League. Do you use them in your decision-making process, rule development?
We're very aware of flaws with some of their research, and we're very careful about
who we take advice from.
But do you take advice from them?
I think that they have certainly reached out to our team members, but there's certainly
nothing definitive that we take from them.
We don't take direction.
You never take an action based on information received from them?
No.
The reason I bring them up specifically is that they're cited often in the United States.
There's other groups like Hope Not Hate in the UK, and now they're all going to point
their figurative guns at me for saying this.
But the Southern Poverty Law Center wrote an article where they claimed, I went to Iran
for a Holocaust deniers conference, and I've never been to Iran.
And their evidence was this guy found an archived website from a Holocaust denier with my name on it,
and that was their proof. And there are people who have been labeled extremists by this organization
that have been... Sam Harris.
Sam Harris was... Yeah.
Didn't they lose a big lawsuit around this as well?
They settled. Yeah.
So again, not to imply that you guys do use it, but I asked specifically because it's been reported other organizations do.
So we have activist organizations. We have journalists that I can attest are absolutely
activists because I've worked for, I worked for Vice. I worked for Fusion. I was told
implicitly, not explicitly to lie, to side with the audience as it were. I've seen the narratives
they push and I've had conversations with people
that I'm not going to I'm going to keep relatively off the record, journalists who are terrified,
because they said the narrative is real, right? One journalist in particular said that he had evidence of, you know, essentially he had reason to believe there was wrongdoing,
but if he talks about it, he could lose his job. And there was a journalist who reported to me that Data & Society admitted their report was incorrect.
And now you've got organizations lobbying for terminating Joe and I because of this stuff.
So this narrative persists.
Then you see all the actions I mentioned before and all the organizations saying we're doing the right thing.
And I got to say, like, we're living in... I mean, I feel like we're looking at the doorway to the nightmare dystopia of...
I just want to clarify, like, I don't know if we're going around saying we're necessarily doing the right thing.
We're saying why we're doing what we're doing.
Right, right.
That's what we need to get better at.
And I don't want to hide behind what we believe is, like, the right thing.
We have to clearly rationalize why we're making the
decision we're making and more of that. That to me is the prevention from this snowflake
avalanche metaphor. Well, but I think it's just obvious to point out, again, I said this before,
we can have the calm conversation and I can understand you. But from where I'm sitting,
you hold a vastly different ideology than I do
and you have substantially more power
in controlling my government.
That terrifies me.
And what makes it worse
is that, as it was reported,
a Saudi prince owns a portion of that company.
So I'm sitting here like just a little American,
can't do anything to stop it.
I'm just watching this unaccountable machine churn away
and you're just one snowflake in that avalanche.
All these other companies are as well.
And I'm like, well, here we go. This is going to be a ride.
As Vijaya said, that Saudi prince doesn't have any influence.
But am I supposed to trust that? That's the issue, right? I'm not trying to insinuate he's showing up to your meetings and telling you what to do. But when someone dumps a billion dollars in your company, I think it's silly to imply that they don't at least have some influence. But regardless.
And unlike the internet, within a company like ours, you don't necessarily see the protocol. You don't see the processes, and that is an area where we can do a lot better.
I guess, you know, beat it over the head a million times, beat the dead horse.
I think ultimately, yeah, I get what you're doing.
I think it's wrong.
I think it's terrifying, and I think we're looking – we're on the avalanche already.
It's happened, and we're heading down to this nightmare scenario of a future. It terrifies me when I see people who claim to be supporting liberal ideology burning signs that say free speech, threatening violence against other people. You have these journalists who do the same thing. They accuse everybody of being a Nazi, everybody of being a fascist. Joe Rogan, and you're, like, a socialist as far as I know, you're, like, a UBI proponent.
Well, I wouldn't necessarily say that. I'm very liberal.
I'm being facetious.
I'm very liberal except for Second Amendment.
That's probably the only thing that I disagree with a lot of liberals on.
And then you see what the media says about everybody.
You see how they call Jordan Peterson all day and night alt-right.
The alt-right hates him.
And this narrative is used to strip people of their income, to remove them from public discourse.
Well, it's foolish, because ultimately, upon examination, like you're saying, sunlight is the best disinfectant. Absolutely. And upon examination, you realize that this is not true at all, and that these people look foolish, like the Data & Society article.
No, no, no. All these organizations published that as fact without looking at any data.
Maybe some did.
Dozens.
But, yeah, no, they're still talking about millions and millions of people. Who are these people that are still citing it? The Guardian will ring the bell and start yelling shame.
But, so, look... we now have, and this is what makes things muddier, you're gonna love this: there's a guy claiming that the 81 accounts listed on this thing as alt-right are no longer being recommended on YouTube.
And so I looked at the statistics for various people on this channel,
because first of all, my channel is doing great.
My recommendations are way up, as are yours.
A lot of people are growing.
And I did a comparison like, subscribers are up, views are up.
What's this guy claiming?
And apparently, he did a big sampling of videos where he, for some reason, thought you, Joe Rogan, were 12% of all videos in this network.
And then when his data stopped working, he claimed that everything stopped.
So he actually produced a graph of primarily your channel, and then when his system stopped working, he published that, and it was picked up by CNET.
And now you have people claiming the alt-right has been banned from YouTube, and it's more fake news based off fake news based off fake news.
I don't understand what you're saying.
So basically there's a guy claiming that, because of Data & Society, we have been stripped of recommendations on YouTube.
I'll tell you one thing that is true, though. We don't trend. Like Alex Jones was saying, the video we did got nine million views, but it's not trending. And I said, well, it's because my videos never trend. They just don't trend. But I think it's probably because of the language that's used.
I think that's part of the issue.
It's a subject matter in language.
I think they have a bias against swearing and extreme topics and subjects.
I don't think that's true, because you've had, like, late-night TV hosts talk about really messed-up things.
They don't swear.
They don't swear, though. It's not a matter of... what they talk about, whether that's messed up in comparison to what we talk about, it's probably pretty different.
You know what, man?
I'm fairly resigned to this future happening no matter what
we do about it. And so I bought a van, and I'm gonna convert it.
Jesus Christ.
Well, I'm converting it to a workstation, right?
You're gonna be a prepper, bro.
No, no, no, no. First of all, I will say it's hilarious to me that people have Band-Aids they never use, but they don't store, like, at least one emergency food supply. It's like, you never use Band-Aids, why do you have them?
But, no, I do. I see this every day. It was a couple years ago, I said, wow, I see what's happening on social media, we're going to see violence. Boom, violence happened. I said, oh, it's going to escalate, someone's going to kill. Boom, Charlottesville happened. And there have been statements from foreign security advisors, international security experts, saying we're facing down a high probability of civil war. And I know it sounds crazy. It's not going to look like what you think it looks like. It may not be as extreme as it was in the 1800s. But this was, I think it was in The Atlantic, where they surveyed something like 10 different international security experts who said, based on what the platforms are doing, based on how the people are responding... one guy said it was, like, a 90 percent chance, but the average was really high.
Well, let's look outside of the idea of physical war, and let's look at the war of information. What we're talking about, what's happening with foreign entities invading social media platforms and trying to influence our elections and our democracy, that is a war of information. That war is already going on. If you're looking at something like Data & Society, that's sort of an act of war in that regard.
Right, right, yeah.
It's an information war tactic.
An attempt to lie to people to strip out their ideological opponents.
And it's also one of the women who wrote that said
that it's been proven over and over again
that deplatforming is an effective way to silence people.
And then called for us to be banned.
Yeah, it's kind of hilarious.
I don't think she was saying that we should be banned.
I don't think she said that I should be banned.
She said something to the effect of YouTube has to take action to prevent this from, you know.
Well, you know, when people see someone saying things that they don't agree with,
it's very important for people to understand where silencing people leads to.
And I don't think they do.
I think people have these very simplistic ideologies and these very, very narrow-minded perceptions of what is good and what is wrong.
And I think, and I've been saying this over and over again,
but I think it's one of the most important things to state.
People need to learn to be reasonable.
They need to learn to be reasonable in civil discourse.
Civil discourse is extremely important.
And think over the long term.
Yes, think over the long term and understand it.
You're playing chess.
Yeah.
We did three hours and 30 minutes.
Nobody had to pee.
Amazing. I'm proud of all of you.
I don't know if nobody had to pee.
We did start a little late.
I think we're like 3.15.
I guess the last thing I could say is
I think we had a good conversation.
I think we did too.
Honestly, I don't think we've solved anything.
Do you think we could do this again in, like, six months and see where you guys are at? In terms of what I think is important, the road to redemption, I think that would open up a lot of doors for a lot of people to appreciate you.
We're going to need more than six months.
Jesus Christ.
Why don't you let me do it?
Here's the scary thing: the information travels faster than you can.
Right.
And that's the point I was making. Our culture is going to evolve faster than you can catch up to that problem, because there's a problem.
And I don't, you know, technology took a big leap.
Twitter existed, the internet existed.
Now we're all talking so quickly,
you can't actually solve the problem
before the people get outraged by it.
So-
No, I get it.
I mean, there was an early phrase in the internet
by some of the earliest internet engineers
and designers, which is
code is law.
And a lot of what companies like ours and startups and random folks, individuals who are contributing to the internet, do will change parts of society, some for the positive and some for the negative. And the most important thing that we need to do is, as we just said, shine a bunch of light on it, make sure that people know where we stand and where we're trying to go, and what bridges we might need to build from our current state to the future state. And be open about the fact that, and this is to your other point, we're not going to get to a perfect answer here. It's just going to be steps and steps and steps. What we need to build is agility, an ability to experiment very quickly and take in all these feedback loops that we get, some feedback loops like this, some within the numbers itself, and integrate them much faster.
What's wrong with the jury system on Twitter? Why wouldn't that work?
I don't know why it wouldn't work. I'm not saying we wouldn't test that.
Like we're testing it in Periscope and I don't have a reason, a compelling reason,
why we wouldn't do it within Twitter either. I don't.
So we likely will.
But, you know, again, we're a company of finite resources, finite people, and we need to prioritize. And we've decided, you may disagree with this decision, but we've decided that physical safety and the admission of off-platform ramifications is critical for us.
And we need to be able to be a lot more proactive in our enforcement,
which will lead to stronger answers.
And we want to focus on the physical safety aspect.
And doxing is a perfect example that has patterns that are recognizable and that we can move on.
I hear it.
And I just feel like, you know, the conclusion I can come to from the conversation is you're going to do what you think needs to be done. I think what you're doing is
wrong. And ultimately, nothing's going to change. I get it. You're going to you're going to try new
technologies, you're going to try and do new systems. From where I see it, I think
you have an ideology diametrically opposed to mine. I mean, not to an extreme degree. I think
there are people who are more... like, I'm not conservative. There are a lot of people
who probably think... you know, I'll say this too.
You're a symbol for a lot of them.
And so I can definitely respect you having a conversation.
There are so many different companies that do things that piss people off.
You sitting here right now, I'm sure there's a ton of conservatives who are pointing all of their anger at you because you are here.
But, you know, ultimately, I just feel like I don't think anything's going to change.
I think you're on your path.
You know what you need to do, and you're trying to justify it.
And I'm looking at what Twitter is doing as very wrong, and it's oppressive and ideologically driven.
And I'm trying to justify why you shouldn't do it, but nothing's going to change.
My intention is to build a platform that gives as many people as possible an opportunity to freely express themselves.
And some people believe the United States has already done that.
And Twitter is now going against what the U.S. has developed
over hundreds of years.
But the United States doesn't have a platform to do that.
Twitter is... When you're talking about the internet,
the United States, if they want to come up with
the United States Twitter,
like a solution or an alternative
that the government runs and they
use free speech to govern that? Good luck. Good luck with that.
Well, it's a huge challenge. And also, I recognize it's not just huge, it's almost insurmountable. I mean, they have the dummies
that are in charge of the United States government. This is why I said regulation is scary.
Yeah. You know, it's a terrible idea.
But, you know, and I think it's important to point out, too, that a lot of people don't realize you guys have to contend
with profits. You have to be able to make money to pay your staff. There's no, like... you don't get
free money to run your company. So aside from the fact that you have advertisers who want to be on the
platform, I imagine a lot of these companies are enforcing hate speech policies because advertisers
don't want to be associated with certain things.
So that creates, through advertisement, cultural restrictions.
That's 100% the problem.
Right. 100% the problem with most of these platforms, including YouTube.
Absolutely.
Yeah.
I mean, when the PewDiePie thing happened and all of these restrictions came down on advertising and content creators, that's where it comes from.
It all comes from money.
It's why –
But those – just to be clear, those can be
segmented as well.
Advertisers can choose
where they want to be placed.
Certainly, but the platform
recognizes there's a huge blowback
and they're losing money.
I mean, look at the
pedo scandal that just happened on YouTube. It was people
posting comments with timestamps. They weren't even breaking
the rules. Advertisers pulled off the platform, and YouTube didn't realize,
because they weren't breaking the rules. They're just creepy dudes.
So creepy people.
Well, also
they were putting comments and so one of the most preposterous responses to that
was that content creators are going to be responsible for their comments.
Well, they just turned them off.
Well, the problem with the sledgehammer, for people like me, is that I
put out a lot of content, and there's millions of views, and it's impossible to moderate all those
comments. And we don't moderate them at all.
Right, YouTube banned comments only on videos with minors.
So they deleted all comments on videos with minors.
Yeah, videos where they see youth. But, you know, I'm
saying if you put a YouTube video up and you have a bunch of people that say a bunch of racist things in your YouTube comments, you could
be held responsible and get a...
Fuck.
No, no, no. YouTube clarified that. They clarified that recently.
They said it afterwards, but the first initial statement was that you were going to
be responsible for your comments, and then they said it's only people like Philip DeFranco, and
a lot of people freaked out, and then they qualified it. But so the reason I bring that up is just because there's going to be things that, even if you segment your advertisers from... look, I pointed out I think the Democrats are in a really dangerous position, because outrage culture, although it exists in all factions, is predominantly in one faction.
And so when Trump comes out and says something really offensive, grab him by the, you know what I'm talking about? The Trump supporters laugh. They bought
t-shirts that said it. The people on the left, the Democrat types, they got angry.
So what happens now? You see Bernie Sanders, he's being dragged. The media is looking for blood and
they're desperate. They're laying people off, they're dying, and they will do whatever it
takes to get those clicks. What does that have to do with Twitter though? It has to do with the
fact that someone's
going to find something on your platform, and they're going to call up your advertiser and say,
look what Twitter's doing. And you're going to be like, oh, we had no idea. And they're like, too bad.
Canceled all ads. Your money's dried up. And so the reason I bring that up is I recognize
Twitter, YouTube, Facebook, these other platforms are worried. Money has to come from somewhere to
pay people. So you also have to realize you've got the press that's salivating, looking for that
juicy story where they can accuse you of wrongdoing, because it'll get them clicks. They'll make money.
And that means even though YouTube did nothing wrong with these comments, it was just a creepy
group of people who didn't break the rules who figured out how to manipulate the system.
YouTube had to take that one. The advertisers pulled out, YouTube lost money,
so YouTube then panics, sledgehammers comments, just wipes them out. That could happen to anybody, right?
We're in a really dangerous time with –
Well, also in their defense, though, they have to deal with that.
I mean they have a bunch of pedophiles that are posting comments.
No, for sure.
I mean what do you do about that?
What do you do?
Other than hire millions of people to moderate every single video that's put on YouTube, which is almost impossible.
The point I'm trying to bring up is that even if Twitter wanted to say,
you know what, we're going to allow free speech, what happens?
Advertisers are like, later.
Even if you segment it, they're going to be threatened by it,
and so the restrictions are going to come from whether or not you can make money doing it.
I don't know about that.
I think that that is changing, and I think that is changing primarily because of the internet.
If you look at what was acceptable, in terms of what people could discuss and still
get advertisement, it was network television standards. Now that's changing. I mean,
there's ads on a lot of videos that I put out that have pretty extreme content.
It's because advertisers are changing their perspective.
I don't think so.
They're shifting. They're 100% shifting. That's why this podcast has ads.
Sure, sure.
I mean, I don't think it's to the point where everyone's lost all ads.
But look, you think George Carlin would be allowed to do his bit today?
Yes.
No way.
No, come on, man.
You're not right.
He would be able to do it.
Listen, there's stuff like that on Netflix specials that are out right now.
Things are changing.
It's just in the process of this transformation where people are understanding that because
of the internet, first of all, if you look at late night conversations, how about Colbert
saying that President Trump has Putin's dick in his mouth?
How about him saying that on television?
Do you really think that would have been done 10 years ago?
It wouldn't have been.
Or 15 years ago or 20 years.
Impossible.
Not possible. Standards are changing because of the internet.
So things that were impossible to say on network television just 10 years ago, you can say now.
Kevin Hart lost his Oscar hosting gig because of jokes from 10 years ago.
Right.
But do you know why he lost it?
He lost it because people were complaining.
Right.
Because people who were activists were complaining that he had said some homophobic things.
And they do it every day.
That he had subsequently apologized for before they ever said that.
Count Dankula was a comedian.
Okay, look, you wanted to discuss this. I'm with you, and I understand what you're saying. But I'm a comedian, and I understand where things are going. The demise of free speech is greatly exaggerated. That's what I'm saying. I'm saying there's a lot of people out there that are complaining,
but the problem is not necessarily that
there's so many people that are complaining. The problem is
that people are reacting to those complaints.
The vast majority of the population
is recognizing that there is an evolution
of free speech that's occurring.
In our culture, in all cultures around the world.
But this is a slow process that when you're
in the middle of it, it's almost like evolution.
You're in the middle of it, you don't think anything is happening, but it's fucking happening.
No, so I agree with you. I agree with you that the majority of people are like,
that's funny, I don't care. But the minority is kind of dictating things right now.
For now.
and not even dictating things they're just making a lot of noise and that noise is having an effect
That's what Data & Society was an attempt at. I don't think it was effective.
That's why we're still here talking right now.
It was one attack.
But there's many of them, man.
There's hundreds of articles
that are written about
all sorts of things
that are inaccurate
or misleading.
And some people have been
stripping their bank accounts
and some people have been kicked off.
Yes.
And this is why it's important
to have this conversation
and conversations like this.
So here's what I'll say.
I cross my fingers
and I wait for when you implement
blockchain technology.
Get that van ready, bro.
Well, the van is going to be
a mobile production studio
so I can travel around
when things are getting crazy.
With a lot of water.
Dried food.
And more than just band-aids.
I'm putting a shower in it.
Okay, good.
It's going to be like
I'm going to have a computer
and monitors
and I'm going to be able
to do video.
Sounds cool.
So I can travel around
when everything's happening.
Let's wrap this up.
I want to see the blockchain
version of Twitter where it says it exists.
That's what I want to see. It's going to happen whether we like it or not.
Vija, any last thoughts?
No. I just want to thank you, Joe.
This has been great. And Tim, thanks for your
feedback. We're always listening and I've learned
a lot today. Thank you. I really appreciate you guys.
Thank you, Jack. Any last things?
No. I think we've said it all.
That's a wrap, folks. No more ear beatings. Good night, everybody.
That was awesome. Thank you.
Thank you. Oh, thanks for talking. I really do appreciate it.
Hey, could you... I just want to follow up on a couple things, because they worry me. You mentioned the... Nancy's account that doxxed policemen. Doxxed policemen, can you please just send that over to me?
Bit.ly slash Antifa tweet.
And then would you DM me?
I'll follow you.
Would you DM me the accounts that you said had threatened you?
No. Why not?
I believe in minimizing harm.
And if I – so when Patreon –
I won't... how about this?
I won't take action on them,
but I want to understand why you didn't take action on them,
and I can't learn from that unless you.
So when Lauren Southern got banned from Patreon,
a lot of people were.
Stop.
Everybody out of the room.
This is streaming and it's frozen.
Oh. Thank you.