Your Undivided Attention - The Bully’s Pulpit — with Fadi Quran
Episode Date: June 22, 2020
The sound of bullies on social media can be deafening, but what about their victims? “They're just sitting there being pummeled and pummeled and pummeled,” says Fadi Quran. As the campaign director of Avaaz, a platform for 62 million activists worldwide, Fadi and his team go to great lengths to figure out exactly how social media is being weaponized against vulnerable communities, including those who have no voice online at all. “They can't report it. They're not online,” Fadi says. “They can't even have a conversation about it.” But by bringing these voices of survivors to Silicon Valley, Fadi says, tech companies can not just hear the lethal consequences of algorithmic abuse, they can start hacking away at a system that Fadi argues was “designed for bullies.”
Transcript
There was this far-right candidate, Bolsonaro, who's now the president.
But he was a backbencher.
The majority of Brazilians said they would never vote for him.
And he spoke a lot about basically cutting down big chunks of the Amazon
and attacks on indigenous communities and on women.
That's Fadi Quran, an activist and campaign director at Avaaz,
a platform where 62 million activists gather online to push for change.
For years, these activists have been warning Fadi
that disinformation is hindering their ability to organize.
Whether their causes are climate change, human rights, or vaccinations,
they can't seem to get people to agree on basic facts.
So Fadi and his team began studying how disinformation
poisons the atmosphere for activism.
That's why they had a front row seat to the Brazilian elections in 2018.
Our members in Brazil were some of the first
who began saying, this thing is becoming more serious.
And about six months ahead of the elections in Brazil, around 66% of Brazilians said they would never vote for this guy.
What we began seeing was that more and more people were beginning to support him
as the social media environment became more and more toxic in Brazil.
The decisions made in conference rooms in Silicon Valley reverberate throughout the world.
We've talked about this before in the podcast, but our guests today can really show us those effects
and tell us about the people who experience them firsthand.
They're just sitting there being pummeled and pummeled and pummeled
by this horrendous content that's making people literally go out and look for them in the streets and beat them up and bully them.
In the last few weeks, we've watched as anger, frustration, indignation, and so many other emotions
that have been building for centuries in the United States rose to the surface of our national consciousness.
At Your Undivided Attention and the Center for Humane Technology,
it's felt like a time to listen and to reflect.
We actually held back many episodes that didn't meet the present moment,
but we think now is the time to put this interview out there,
because social media has played a major role
in the Black Lives Matter movement and the coronavirus pandemic.
Of course, it hasn't been all bad.
It has exposed incidents and issues
that most of us would otherwise not have seen.
It's enabled people to come together into the streets.
But the harms have also been enormous.
It's also enabled mass hate,
and the people with the power to reduce the harm
aren't close enough to feel the pain,
to feel the urgency, to fix it.
We hope that this interview gives you a better sense
of just how global the reach of these harms is
and what we could do to repair the damage.
But we've seen signs of progress on that in the last few weeks, too.
Some Facebook employees have staged a virtual walkout
in response to Mark Zuckerberg's decision not to enforce content moderation policy
on posts that incite violence.
A few have even quit.
More than 140 scientists funded by the Chan Zuckerberg Initiative
said in an open letter to Mark Zuckerberg
that Facebook's practices are directly antithetical
to CZI's goal of building a more inclusive, just, and healthy future for everyone.
We all deserve tools that not only bring us together, but also keep us safe and bring out the best in us as humans.
We are lending our platform to elevate more voices as we listen and learn. Fadi's is just one of them.
I'm Tristan Harris, and this is Your Undivided Attention.
So, Fadi, thank you so much for coming on the podcast.
Why don't you give people a little bit of background on Avaz and some of the work you're doing
and then we can lead people into a deeper conversation?
So the best way we describe Avaaz is as a movement of people all over the world coming together
based on the belief that we have more in common than that which divides us.
And most of our campaigns focus on issues such as human rights in the Middle East,
climate change, pushing for democracy.
But recently, what we realized was that the disinformation environment
and the larger social media environment was creating an ecosystem that made it hard
to achieve the goals that would make the world a better place.
And our members, particularly in 2016 and 2017, began kind of urging us to work on this issue.
And the people we were in touch with, whether they were communities in Assam in India
or women's rights movements in Saudi Arabia or average U.S. citizens who were facing kind of different health crises,
were telling us that this problem was becoming more and more, not just serious,
but becoming more of an existential threat
to even the small campaigns
that people were organizing on the ground.
And so how did this first get on your radar
from some of the communities who are most affected?
So we do a lot of work in Brazil.
About 10 million of our members are based there,
and we campaign a lot on issues such as defending the Amazon
and the indigenous communities that are there.
And as the Brazilian elections kind of began moving forward,
we built a kind of team to begin monitoring this.
And one of the things that we found is that about 89% of those who voted for Bolsonaro eventually in the elections had believed one of the top ten crazy stories that were put out against his opponent.
And this just shocked us.
And then we began looking in other places.
So one of the stories and issues that we campaigned on was the genocide happening in Myanmar.
Our community mobilized millions of dollars to help those who were running away from the genocide
to cross into other countries and to get away from it.
And as we were working with these victims on the ground in Myanmar,
a lot of the stories that were coming out of them also were indicating to us that social media,
particularly Facebook in this case, was inciting violence against them.
And the stories that we began hearing were stories of, for example, a woman refugee whose husband was slaughtered in front of her and who was raped, because, we're not sure but it's likely, the government had spread pictures of her husband and their village saying that
they were terrorists and they were planning attacks on the nearby communities. And so as we began
to hear these stories and then as we also began to see things such as, you know, Brexit and what
happened there. Jo Cox, may she rest in peace, the member of parliament who
was murdered there, was a close friend of mine and of many of the people on the Avaaz team. And the
person who killed her was clear that social media at least played a role in further radicalizing
him and his beliefs. And so all of these different stories began to come together to indicate
that something really wrong is happening. As we dug deeper and as we polled our members,
we also began to realize it's not just about these communities.
It's the regular kid next door who may be being bullied at school, who suddenly finds himself
sucked into this information environment that's just full of false information and is either
becoming socially isolated or becoming more and more violent due to what they're reading.
And the existential threat comes in at two places.
The first is the kind of direct dangers that we see happening to individuals, whether
it's, as you speak a lot about, the different kinds of rabbit holes that people can fall into
in terms of radicalization, but also it makes it difficult for us as humanity to connect and
find deliberative, nuanced, intelligent solutions to challenges such as climate change, having
access to facts, having access to a community that can engage on these issues with complexity
and with hope. And that means that we'll be in very, very deep trouble that could, and I'm not
exaggerating here, that could and I believe will threaten the human race. Yeah, I agree with
everything you've just shared. I mean, I was just in Washington, D.C. meeting with and briefing several
senators. And one of the things that they were talking about is that they go home to these
town hall meetings and people are believing crazier and crazier things. You know, oftentimes
we've said that this is the problem beneath all other problems because if we can't agree on
what's true, everything is gone. And so right there with you that I think people, you know,
need to hear and understand step by step why it's an existential threat. I think the other example
you mentioned of Brazil is so fascinating and alarming to me. And I think it would be great if you could
share more about it because it's only two steps from social media, like you said, you know,
before social media, people were saying they would never vote for Bolsonaro. And then
suddenly you say 85 or something percent of people who voted for him believed one of the fake news
stories that had been spread about him, when he gets elected and then he makes decisions to start
burning a huge chunk of the rainforest, which is the lungs of the earth, it's only two steps
from social media electing a far right leader, which then leads to changing the entire future
of humanity because he's hollowing out the lungs of the earth. So that's an existential sort of
decision-making cascade right there. Could you say anything more about what you saw happening in
Brazil? Yeah, definitely. We built this kind of small team of elves, and we just began collecting
what was spreading on social media
and flagging the different pages
and actors that were spreading it.
And we realized very quickly
that this was coordinated activity
by these malicious actors, by Bolsonaro and his allies,
and more importantly that
their abuse of the algorithm
was in itself then being amplified
by how the algorithm is already built,
whether it was Facebook or YouTube
which also played a big role.
And essentially, the narrative slowly began to shift in the country.
One thing they often teach in political science classes
is that in any election, people at max can remember three things
about the candidate that's running.
So if you can just basically stick one negative idea
into a voter's head about a certain candidate,
you can shift their voting behavior.
And if you can then also shift the narrative of the elections, what the elections are about,
then you can win that election.
And first of all, we began flagging some of these pages to Facebook and other social media platforms.
The pages that they took down that we flagged had about 12.5 million views, and for an election in Brazil, that's significant.
But then we wanted to see, does the social media environment actually influence elections?
So we ran this poll, we chose the most viral stories that had been flagged by fact checkers in the country.
And then we just asked people, of these stories, did you see any of them and did you believe any of them?
And it was a shock to us that of the voters in Brazil that voted for Bolsonaro, of that subset,
89% of them had believed at least one of those fake news stories.
We also found that WhatsApp was being used to then spam people with this disinformation.
And it's important to understand how these different platforms,
Facebook, WhatsApp, Twitter, YouTube, interact with one another,
where, you know, these actors are so sophisticated
that they can use Twitter to test very quickly what memes catch on,
what disinformation catches on.
And then they can move that to other platforms such as Facebook,
create, you know, groups and saturate them with the most effective memes
and have them share them across society.
And then they can use kind of WhatsApp when those memes are, let's say, caught by fact checkers or downgraded to further spread them into society.
And you create this vicious loop that in the end will allow for dictatorial leaders such as Bolsonaro, people who have these far right beliefs to win elections.
And I think the next important step here that we need to discuss is how these platforms become the best weapons
for bullies. If any bully wanted to design a platform where they can bully kind of
the most marginalized, the weak, the average person, these platforms are just the best tool to do that,
because they can use tactics such as censorship through noise, such as threats, such as outrageous
content, and to just keep slamming people with it until basically you have this kind of
triggered reaction from people where they either become despondent, depressed, they don't feel
they can impact politics, they don't believe in institutions, they believe everybody's corrupt,
or you move them towards the direction of becoming more and more radical, far right, and extreme.
And if you do that to a society and you end up getting a leader of basically the most
important country in Latin America, Brazil, to then move on policies such as taking down the
Amazon rainforest or attacking women's rights or attacking indigenous communities, then the society begins
to slowly collapse, unless we, of course, act together to break this kind of hold and rebuild
things in a more hopeful way.
So why are some of these places more vulnerable to this?
For people, you know, coming from a Western audience where we have reasonably functional
institutions and many different media sources, could you paint a picture of what might be different
in some of these places with regard to the strength of institutions
or how much their media is really impacted by social media?
Yeah, so an important caveat to my answer here, and I want to keep re-emphasizing this point
for our listeners, is that we're all vulnerable, because there's this assumption that, you know,
disinformation, toxic social media is more harmful in some places than others
or a lot of the listeners may be like, yeah, but I would never fall for that.
You know, I would never fall for that fake news.
But the truth is all of our societies are vulnerable.
Now, our responses, the responses in some places, like on health issues, for example,
the response in, let's say the U.S. may be better than the response in Brazil,
because U.S. health institutions may have more resources, more access, more doctors, and so forth.
But in terms of vulnerability to it, I think what we're seeing across the globe, firstly, is that a lot of the news environment, where people get their news from, especially for younger and younger generations, is coming through social media, because that's where people are spending a lot of their time.
But in Brazil specifically, Brazil actually has a relatively healthy media environment.
It's not as bad as a place like Myanmar where you would have like government-controlled, basically, media.
But in Brazil, what you do have is cases of corruption that were in the news consistently
where people didn't trust their politicians.
That trust was broken already because of the mass scale of corruption that was being reported on.
So people began to just look and want to engage the news from other sources,
whether that be YouTube or Facebook, you know, the average person in your community and what they post.
And that becomes dangerous when people make the wrong assumption that our politicians and our media are fully corrupt.
They kind of extrapolate from cases to define the whole environment that they're in as fully corrupt.
And then they make the wrong assumption that what we're seeing on Facebook, what we're seeing on YouTube, is the truth.
And what happens then, and Tristan, you speak a lot about it, is this: as people
began to use these platforms more and more, the kind of recommendation algorithms, the news feeds,
began to move people, because of the addictive model, towards more and more kind of outrageous
and radical content. And so people began to be sucked into that environment. The other thing
you have is in Brazil, Brazil is a big country, it has some urban areas, but it also has a
significant amount of its population in rural areas. And in those rural areas, the kind of
local media wasn't, you know, as strong, wasn't as big, and most importantly, wasn't
resourced enough to move quickly to learn how to use social media to spread their
messages. And that created a dynamic there where, you can imagine, I'll give you a story
here. There's a woman whose name is Fabiane de Jesus, and there was a fake news story
spreading about a woman in one of the villages in Brazil who was kidnapping children and hurting
them, and there was a sketch of this woman. The story went pretty viral in this community,
and one of the guys saw this mother of three walking down the street, and he felt that she
looked just like the picture of the woman that was spreading on social media. And she was
murdered, she was beaten to death in the street. And because you don't have in those communities
an effective police force that could quickly engage and shut down this type of fake news media,
and because you have this belief that the official media is wrong and what we see on social media is right,
it led to the death of this mother.
What's happening now, even in the U.S., you know, as we look at the U.S. political environment,
is pretty similar in terms of how people are engaging with some local, you know, fake websites or fake local news sources,
believing them more than they believe the official media; you know, there's this attack on, like, the official media, fake news, all that type of rhetoric.
And they're beginning to believe these fringe outlets that are moving people more and more to the extreme.
You know, it's interesting because we talk often about the power of social media to generate the Me Too movement or, you know, positive social activist campaigns, Black Lives Matter.
Even Avaaz is obviously an organization that uses social media to try to generate positive change.
And I think an issue that comes up often here
is the temptation to say, well, there's all these goods
and then yeah, there's all these bads,
and then we kind of throw our hands up and say,
well, I guess we just don't know.
It seems like it has goods and bads.
Let's just leave it alone.
And I think the thing that brings your perspective and mine together
is that if you look at the balance sheet of harm
and the harm actually having a secondary effect
of debasing the entire trust in a society
that can send you into a kind of dark ages
where people mostly trust the people
around them for what's true because they don't trust any of the media sources and they trust
social media, what people are sending them, that balance sheet of a digital dark age, people
not knowing what's true at all, or being apathetic, or being unable to take powerful
positive actions as opposed to being passively outraged yelling at your screen, that is an existential
issue. Yeah. It becomes an existential threat because you begin having measles outbreaks. You
begin having other diseases spread, and we saw the same thing in the US. And that puts everyone's
children at risk. I also want to mention a point connected to the inability to have sophisticated
conversations. So Facebook and Twitter and others, they say that if people report misinformation
or hateful content, then they have teams that look at that. There's a whole other argument about
how effective these teams have been. But let's just say that these teams work perfectly. The
truth is you have a lot of communities who don't have access to social media, who don't have
access to technology.
The best example is the indigenous community in Brazil or the Muslim community in Assam in India
who are much poorer and who have less access to these technologies.
So you will have this cycle, this bullying cycle essentially, where even people who
are not online are being attacked and targeted online on social media
by these bad actors, and they can't report it.
They're not online.
They can't respond to it and spread their own stories.
They can't even have a conversation about it.
They're just sitting there being pummeled and pummeled and pummeled by this horrendous content
that's making people literally go out and look for them in the streets and beat them up and bully them
because they're being called rapists, and they just have no response at all.
So there's this assumption that like social media, and this is an argument that we hear a lot,
that social media opens the conversation for everybody.
Everyone has freedom of expression as if it's this perfectly fair environment.
But it's actually skewed.
Not only is the algorithm skewed, but the privilege of being on social media is skewed
and it's skewed to benefit bullies.
And that kind of slope is just getting more and more intense.
And I, you know, I started a lot of my work in activism, and you mentioned that also the work we do at Avaaz, when I started organizing online in 2011 around the Arab Spring for nonviolent protests, I remember, you know, being arrested by the intelligence here. And they took me in, and they made me open my laptop, and they said, you're going to delete this Facebook event now. They had no clue what Facebook was, but they were like, you need to delete it. And I realized they didn't know what they were talking about at the time.
So I said, you can't delete an event on Facebook.
That's impossible.
I'm sorry.
And they were like, well, you're going to have to put a post that says that this event is canceled.
So I looked at them, and then I just wrote a post saying, sorry, the intelligence are at my house,
and they're demanding that I cancel this event, knowing that it would actually lead more people to come to the street.
And then they were happy with that, and they, like, released me.
But they had no understanding at the time.
And at that time, social media did benefit online organizing.
But today, we're in a whole different environment where governments, they're the ones that have the power and privilege and control and access and resources to really define the debate online through fake accounts, through a million and one means, you know, political advertising now which Facebook decides not to fact check.
So the whole environment has just skewed more and more towards authoritarian powers.
And it's not the social media that was promised.
It's not the social media that we hear about, you know, on these about pages of these platforms.
It's not even serving the mission of the social media founders anymore.
It's serving the exact opposite mission, which is allowing powerful bullying actors to shut down the debate using many malicious means.
Yeah. It's so hard to hear about these things.
I'd love, Fadi, for you to talk about a meeting that you had in May of 2019,
where you brought an incredible set of people who have been affected by these problems from around the world
and flew them in from Myanmar and Finland, New Jersey, and Sandy Hook to actually talk to the tech companies.
Could you talk about how, when you present these issues to tech companies, they've responded in those meetings?
Yeah, definitely.
Yeah, so to take a step back here, I studied at Stanford,
as did you, and a lot of the people now working at these companies are
people that are my friends, you know, people that I've been close with. And one of the things that I
and our team realized is there's a disconnect between what
executives and employees at these companies believe that they're doing, believe that their
platforms are doing, there's kind of a narrative about that, and what's actually happening to
people as a result of these platforms and their algorithms. And the idea was, they're still good
people. You know, the people I studied with, the majority of people that work at these companies
are not bad people. And our hope was that human connection, you know, the thing we want to build,
the thing I think you and I and many of those listening to this podcast believe is that
human connection can actually heal the world if done well. So we spoke with a number of
key victims of disinformation. For example, Lenny Pozner, who's a father of
one of the Sandy Hook children who was murdered, and who, until today, eight years later,
is in hiding because of the disinformation that was spread against him.
I mean, number one, just think about that.
This wonderful man, his family, cannot go out in public
because so much filth and malicious lies were spread about them on social media
that they're under attack and harassed by gun lobby supporters
who basically believe that he's an actor and the whole Sandy Hook thing was an act.
And he has a powerful story.
We spoke with a leader from Burma, from the Rohingya community, named Tun Khin.
Jessikka Aro, who's a Finnish journalist who uncovered Russian disinformation
and became the victim of troll attacks and threats on the street.
Ethan Lindenberger, who is a wonderful, wonderful young high school student
whose mother believes in anti-vaccination theories
and who has tried to convince her to vaccinate his brother and their family.
but she's been sucked into that toxic environment.
And we brought this group together,
and we said we're going to go and meet with executives
at these companies and product designers and engineers.
And when we walked into the room,
and they just shared their stories at Twitter, Facebook, Google,
people in the room were in tears.
You could see them shaken to the core by what they were hearing,
and they couldn't argue that these were just fringe issues
because they saw the pain on the faces of those who were in the room
and they heard their stories.
And we believe, and we saw, that it had an impact.
YouTube, for example, after our visits,
decided to take more stringent action
against Sandy Hook conspiracy theories on their platform.
Twitter removed one of the key war criminals
who designed the genocide against the Rohingya.
And, and I think this is key, we didn't get to meet the CEOs.
That's why we're planning another survivors' visit
to Silicon Valley and to D.C. within the next two months,
focused on getting these stories to the right people
because we believe that they can have impact.
And, you know, for everyone listening to this podcast, I think one of the ways to fix this problem is actually to get those who are working on these platforms, those who are designing their interfaces, to just see the stories and consequences of what's happening on the ground to normal people just like them because of these kinds of monster platforms that have been created today.
But I also want to say another thing that's key in our discussion with many of the people in the room.
These decisions, for the most part, are not being made by the normal employees at these companies.
I mean, for those living in California, for example, the threat of climate change is clear.
Last year, millions of families couldn't take their kids to school because the air pollution was so bad due to the fires in California.
So people not only understand this threat, they feel it, they know the
danger. And they want to do more, they want to make sure that the companies they work at do
more. They don't want to be a part of the problem. They want to be a part of the solution.
But you have executives at these companies that I think deserve to be called out, because,
based, again, on what we've heard from meetings inside these companies,
what we've heard from speaking with different leaders around the world who've met with them,
and just from what we're seeing on the ground, they know that they can do more, they have
solutions on the table, such as, you know, correcting the records, such as detoxing their
algorithms, such as creating more humane algorithms, but they're not implementing them.
I think for two reasons, oftentimes it's out of fear of the political repercussions of the
exact type of leaders that came to power because of the toxicity of these algorithms, whether
we're speaking about Trump or Modi or Bolsonaro or others, you know, MBS in Saudi Arabia,
others around the world who are toxic leaders that came to power through social media,
at least in a big part, and who these platforms don't want to challenge largely because
they may be afraid of antitrust action or because they may have certain people within
the executive seat at the company who share certain interests with these leaders.
And it's time, I think, for, again, employees at these companies and for these CEOs to come
and hear the real stories to engage with the people who are the victims of disinformation
and hate on their platforms and to begin acting as fast as possible in the face of this
existential threat. You're making me think of a phrase that someone brought up in one of these
meetings that I've attended, which is that the people closest to the pain should be closest
to the power. And what you, you know, are doing and you're bringing these people into the room,
and I'm curious to hear more about it is you're closing the loop that is unclosed.
There's an open loop right now between making a decision and the scale of that design decision:
a tweak in a parameter of which newsfeed stories come to the top or the bottom,
or which people Twitter suggests for a new user to follow.
You know, each of those tiny decisions actually impact millions, if not billions of people.
and one of the problems is that the people who are closest to the pain of those negative impacts,
there's so many of them, and they live miles and miles away,
and they speak often different languages than the people who made the decision in the first place.
You know, we often say there's at least 22 languages in India.
How many engineers at a Facebook or YouTube or Twitter do we think speak the 22 different languages of India?
So what happened? You know, I mean, I heard that people were touched and moved
in the rooms of the tech companies that you brought them into.
But, you know, you said some changes occurred,
but oftentimes it takes a lot longer for that change to occur, doesn't it?
I mean, it's been years in some cases that these things were reported
and then didn't get addressed at all until, you know,
you bring people into the room like that.
Can you talk a little bit about what specifically you were asking for when you met them?
Yeah, definitely.
So I agree with you completely on closing the loop
and that the people most impacted should be the people closest to the decision-making.
And what I would say is what happened in those meetings is there was movement.
I think we mobilized a good number of those engaged on these topics to act more forcefully
and pursue more forceful solutions.
We have promises.
I unfortunately can't go into detail, but we do have promises from all these platforms
that they're testing and they're going to roll out much stronger solutions to these issues.
And particularly in this case, we hope that they do it as soon as possible
because of the U.S. 2020 elections coming up.
I wish I could say more, but we did get promises.
But you're right that so far not enough has been done.
And our key asks in those meetings revolved around two main goals.
The first is what we call correcting the record.
And the idea here is that when a person is targeted with disinformation, with false or misleading content,
and that's flagged and fact-checked by independent fact-checkers,
the platform should alert everyone who was targeted with that misleading content to say,
sorry, but you viewed harmful or misleading content or disinformation,
and then provide them with sophisticated links because it's important here, not just that you provide the like one line correction.
It's important here that you provide sophisticated, well-designed corrections that will engage people and be memorable.
And we've done testing on this internally at Avaaz, and we also have two academics that we've been working with at Ohio State University and at George Washington University.
And we've shown that by showing people well-designed corrections, you can decrease the number of people who believe the disinformation by up to 50%, sometimes up to 70%.
The second ask is what we call detoxifying the algorithm.
And this is, I think, very closely connected with a lot of the work that the Center for Humane Technology does.
But it's about how you redesign algorithms to be more humane, to put human rights standards
first, and mainly to stop recommending disinformation and hateful content to users.
And so, for example, with YouTube, one of the key things we think YouTube could implement
today is this: if there's a channel on YouTube that is spreading, let's say, climate denial misinformation,
and this channel is consistently spreading this misinformation, our research found that
YouTube is actually recommending it, sending
millions of people to these types of videos, which then makes it hard to reach climate policy.
But what YouTube can do, at least as a first step to detox its algorithm, is adopt,
let's say, a three-strike rule.
So a channel is found three times to have shared misinformation content that is harmful
to society with the purpose to mislead people.
Then YouTube can say, we're going to stop recommending videos from this channel to our users.
The videos can stay up there.
At Avaaz, we have fought in countries like Iran for freedom of expression and stuff,
but they shouldn't be recommended to users.
Users shouldn't be pushed down that rabbit hole.
So these were the kind of two big things that we were asking for in those meetings.
And that mirrors the approach that freedom of speech is not the same thing as freedom of reach,
that you're granted a right to post content on the internet,
but there's nothing that says that we each have human rights to reach millions of people
just because we open our mouth.
So, yeah, I hear you there.
And also, it's interesting, the researcher Briony Swire-Thompson at Northeastern,
she found something really interesting, looking at how human memory works,
that when you issue a correction, you can't just say this thing that you read, that's not true.
Your mind doesn't work like a whiteboard eraser where you just erase the other thing.
You actually have to provide the factual alternative.
The corrections work better when you repeat the retractions multiple times,
because our automatic memory is based on salience and repetition.
And so, you know, one sort of other, you know, additional thing would be,
how do we repeat that correction more often than the original thing got airtime?
There was actually a suggestion from one of our listeners to this podcast
that every time there's hate speech that's taken down,
we notify the person who posted that hate speech
that we're going to post three times as much positive information
about the thing that they were hateful about.
And that not only changes and corrects the automatic memory,
in terms of what gets repeated more often,
but it also creates deterrence
because I know that if I communicate in the future with hate speech,
I'm actually just going to get drowned out by the opposite.
Another thing that she found in her research
is that it helps if you tailor the correction to the audience,
so if you have more personalized things,
because if, for example, you present corrective information
in a way that's threatening to the person's identity,
it actually will backfire, obviously.
And so finding an unthreatening, more strategic way
to personalize the correction.
That's, you know, that's another big one. But you know, I find this so interesting, because
getting this right has less to do with getting the right algorithm or the right
tech or the right machine learning or the right data; it's about the human mind, it's about
sociology, it's about how we work. And I think you and I both share, you know, an optimism in a
way, even though people are hearing lots of criticism, that we believe in the power of
human connection and basic decency and civility and goodness. But you have to design
systems that make sure that that's what's brought out in us, as opposed to unregulated, unchecked
systems that mainly have allowed bad actors and authoritarian actors and the most funded nefarious
actors to essentially out-compete everyone else. Yeah, 100%. I mean, and we see this every day.
We don't see it enough, but people are good. Everyone, to a large extent, loves their children.
Everyone wants to make sure that their families and friends have a healthy lifestyle.
No one wants to see their neighbors drowning in pollution or having their house burnt down.
And what humanity has created through this power of connection in the last decades and centuries
is something really beautiful.
I mean, if you look around you now with the team that's working with you,
if you look at Silicon Valley and what has been created in terms of technology for the world,
there's so much beauty in it.
And it does show you that we feel agency towards wise interactions, we feel agency towards serving each other.
And I think that's what gives me hope.
And that's also what just terrifies me about the environment that social media has created
for our minds and behavior.
I was reading a book by Sapolsky, I believe, a professor at Stanford.
And he explains in detail how the agricultural revolution changed how we interacted
as humans, from being largely, you know, hunter-gatherers moving around to becoming sedentary,
and how that just rewired our minds in many ways
and how that has basically resulted in some of the good and bad
of the world we live in today.
And I just think social media
and the kind of digitization of the world
are kind of like a modern form of the agricultural revolution,
something that's just really transforming the pathways of human connection.
It's transforming the ways people engage,
it's transforming the way people move and talk to each other.
And I think we are not even aware of like 5% of the actual consequences that this is going to have on our future.
What is U.S. politics going to look like 20 years from now, when this generation is just saturated by this type of content?
It's terrifying to think about that because it's so unpredictable, but the small amount of data we already have now does not show a pretty picture.
And I do feel, you know, based on the meetings when we brought the survivors into the different offices in Silicon Valley, based on a lot of just the communities we work with around the world, from high-level politicians to indigenous communities across Africa and South Asia and so forth, I do believe that there is hope in creating a better world, but I think that hope is quickly being diminished by the social media
environment that we have today. And it's on all of us for everything that we care about to fight
back against this and stop it before it's too late. Yeah, I couldn't agree more when I think about
the impact on the next generation. You know, everything I just heard from the psychology professor
about the quality of students' thinking and their cognition, and the conversations with the U.S.
senators saying they're answering to more and more crazy and extreme constituencies. There's just
less high-quality conversation, less nuanced thinking, and that is debasing whether democracy
has a future. It's a form of government that is based on strong, sovereign minds, powerful
minds, critical minds, and conversational, open-minded minds. And, you know, I think about a broader
version of your Correct the Record campaign, because I think that we almost need, if you think
about how this gets adjudicated, you and I and everybody listening to this, no one wants this future.
I mean, I don't think anyone listens to this and says, you know what, Tristan and Fadi, I actually disagree.
I think we should just keep plowing right straight along doing exactly what we're doing.
I mean, that's what gives me kind of optimism is when everybody sees this, we all realize that no one wants this.
And yet the question is, how do you reverse out of what is, in my mind, a 10-year-long hypnotic trance of artificial polarization, of artificial loneliness and conditioning of children's minds in a certain way?
I almost feel like we need a kind of truth and reconciliation commission
to sort of say, you know, back to the kind of correct-the-record thing, it's also correcting
humanity.
We have to go back and say, we the technology companies, in the same way that we would notify
you about a fake news story and say this wasn't real, we have to notify you about a false
consciousness that we've created that also wasn't real.
And here's specifically the effects that that has created.
It's created some false loneliness, some false addiction, some false polarization, false
extremism, a false, you know, loss of reputation for mainstream media, because we actually had this
role in turning mainstream media into clickbait and exaggeration. And I'm not trying to say that,
you know, evil people in tech did this, you know, intentionally. I'm saying we allowed a bad
business model to take over that none of us really want now that we see it. And we need to be able to,
though, not just correct the record, but I think kind of have this correcting humanity, correcting
the false consciousness with this, you know, back notification to all of humanity. I mean, I can
imagine the day I wake up in the future where the close to 3 billion people collectively
using Facebook, YouTube, Twitter, et cetera, get not just a notification that says I read this false
news article, but I've been inside of and marinating in a false consciousness that technology
created through a business model. And, you know, I hope that policymakers listening to this
will think about that. We all want this to change.
And so a path that would be a solution that's big enough
that would correct enough of this
would involve, I think, really naming the perimeter and surface area
of the false consciousness that has taken place
and reversing as much of it as we can.
Because we want a future where sovereign minds and democracy actually works.
We want a future where children aren't hollowed out and dumbed down
and made more lonely and bullied more often,
but instead are more free, more developed,
to have more intergenerational relationships and wisdom available to them.
And technology could do all of those things if we actually flip it around in a radical enough way.
And that's, I think, you know, with both of our work, it's like, how do we, you know, have campaigns that don't just try to stop the bleeding as it gushes out in more and more places, a death by a thousand cuts?
But how do we actually kind of transform the solution to just mass healing from what's taken place?
And I'm just curious if you have any thoughts on that or other things that you wish
policymakers or others would do to help us kind of get back to a stable and sane place again.
First, I have to say that I am like deeply moved by that vision, Tristan.
You know, it gave me goosebumps as you spoke about a Truth and Reconciliation Commission
and everyone getting a notification saying, you know, wake up from this false consciousness.
And that's a beautiful vision. And I think it's also realistic. I think it's something
that we can achieve together. So thank you for sharing that. And on the question of
policymakers, definitely we spoke a lot about our advocacy and Avaaz's work targeting
the platforms. But the truth is, while we hope that they act, we give the fact that they actually
do act a less than 50% chance. And that's why a big portion of our advocacy is now targeting
policymakers and decision makers to create smart deliberative regulations that allow for more transparency
that can hold the platforms accountable for the type of content that they are amplifying
and really create an actual living, breathing type of truth and reconciliation committee
that starts fixing these problems, one policy at a time. And I think we speak about this
at a pivotal moment, probably a tipping point
where we have a small window of opportunity.
And that's connected to the U.S. 2020 elections.
At Avaaz, we're a bipartisan organization,
but we are working and investing a lot of our time
to ensure that these elections are decided by democratic means,
not by disinformation, not by interference,
by safe means where people go to the polls armed with facts,
not armed with anger and outrage and fake news.
And that is an important step because if that happens,
if democracy actually wins in these elections,
then it will open the door for creating the type of policies,
whether that means instituting a transparency parameter
that allows for detoxing the algorithm
or making it necessary that platforms
provide a correction of the record for viral disinformation content.
There are certain countries we feel are moving in that direction.
The EU as a region is now considering how to regulate disinformation very seriously,
and we are hopeful that they'll move towards these solutions.
But I think when we look at the elections now as they're unfolding in the U.S.,
and because I think a lot of our audience are policymakers in the U.S.,
we found that the top 100 fake news stories,
viral fake news stories that have been independently fact-checked by, you know, AP, PolitiFact and others, had reached, and this is from November, 155 million views on Facebook.
So that's more than the registered voting population in the U.S., and that's 1.5 times more than the reach of the top fake news stories that were being spread six months ahead of the 2016 elections.
So if you just look at those numbers, again, just look at the data.
It looks like even these elections, with all the talk about fighting disinformation,
with all the promises of taking down fake bots and so forth,
we're looking at a tsunami that's about to hit the US
and we're beginning to see the first waves of it already.
And so if we do want to create this vision, if we do want to fix this problem
for the future,
every policymaker listening to this,
every platform executive, needs to begin using their
microphone, using their power, using the power that was given to them by the voters who put them
in Congress or in the Senate, to say, we want to defend the fabric of the United States, we want
people to go to the polls armed with facts, and the platforms need to defend that. Because
right now we're seeing a shift in the complete opposite direction. And if that happens, you know,
these platforms, although the EU and other countries may move to regulate, these platforms are based
in the USA. And if the U.S.
does not act to regulate them, it will be very hard
to hold them accountable. But at least now, to ensure a path to smart regulation,
policymakers need to begin speaking up and saying enough is enough. And I don't know what you think
about this. I mean, I'd love to hear your kind of thoughts on what you think needs to be done
to push for more smart policy, particularly around the US. Yeah. You know, it sort of feels like
the global climate change of culture.
We often say with climate change, it's an enormous problem that would involve thousands
of companies and industries and, you know, changing everything from the cement that they make to
agricultural methods becoming regenerative, et cetera, all this stuff.
And we'd have to have hundreds of governments help enact policy like carbon taxes
to try to transition that entire ecosystem to something that's not causing an ecological
crisis. And that can be really debilitating. And we can say, you know, much like climate change,
this problem of sort of the mass downgrading and degrading of humans through this extractive
business model of automating and manipulating human attention with AI at scale through these
major platforms can feel really hopeless. But unlike climate change, only about a handful of,
I don't know, 100 to 200 people at a handful of companies, maybe 10
companies, regulated by one government in one country, the United States, could actually completely
turn this around if we had the political will. Like, if you think about how tractable this is to
change these technology products compared to other issues where you have to change, again,
hundreds of industries, hundreds of countries, you know, changing and regulating, this is
immensely tractable. The only thing that's missing is the kind of political will and motivation
that says enough is enough that we can no longer have this business model that profits off
of the kind of long-tail automation of user-generated content with automated algorithms
choosing what goes in front of which eyeballs and cannot distinguish between good faith
versus nefarious actors. And so, you know, where I'm optimistic is if the United States or
specifically California as a state legislature could actually act. And I think that, you know,
pressure from all sides is helpful. The UK has actually got some stuff that's moving with
the Ofcom online harms report. In the European Union, the new president of the
European Commission, Ursula von der Leyen, is laying out a new EU digital strategy, which we'd like
to see move to address and name some of these issues. But, you know, really, why not do it right
here where the companies are located and where we have the most power and influence? And I think what
we have to do is create a vision that everybody loses in this world if we keep letting it
go on. It's not that one political side wins and the other side loses and, aha, there we go,
we can dominate and then carry out the future. It's actually that this win-lose game being
played over the minds of everyone and with the most powerful country on Earth here in the United
States with nuclear weapons is actually debasing the viability of that country. And I would say that
the U.S. government is already a puppet or shadow government of the forces of polarization and
disinformation that have already been sown by these companies. And what we really need is this
kind of, you know, I think a policy that, again, meets the scale of the problem, something like
not just correcting the record, but correcting humanity. And having that kind of mass truth
and reconciliation for what has happened. Again, not to angrily blame evil people at tech companies
because that's not the case, but to actually say none of us wanted this to happen with this
business model, and no one profits or wins from the next generation of children being harmed
in the ways that they're harmed by bullying, depression, teen suicide increases, all of those
kinds of things. So, you know, I think the future of our lives and of humanity really depends
on actually a not even very complicated set of actions, but just simply having the political
will that everybody, you know, loses if we don't act. And I think when you paint a picture of
Omni-Lose-Lose, that everybody loses by continuing on with this business model, that's the
kind of representation we all need to see is down the road if we don't change. It's not a matter
of, you know, Fadi wanting it one way and other people wanting it the other way. It's about the
viability of human civilization. And I hope that through, you know, your amazing advocacy and people
should really check out the petitions and campaigns that you're running, is it avaaz.org?
That, you know, people back and support the work that you're doing and that we continue working
together to make this change happen because each one of us has, you know, a role in making it happen.
And I just couldn't be more grateful for the fact that there's not that many people out there
working on this. There's not that many people who have spoken up about it.
And there's certainly almost no one that I know who's actually done the specific work you've done
by bringing the people closest to the pain, closest to the power.
And so I just want to thank you so much for coming on the podcast
and for sharing all of your insights and knowledge with us today.
I want to also thank you and the team at the Center for Humane Technology,
honestly, for a lot of the advice and a lot of the key talking points.
But I also want to highlight that a lot of the terms that have been introduced to this conversation,
such as freedom of speech is not freedom of reach.
are things that you and your team help develop and spread.
And that's already shifting this conversation in the right direction.
So we'll definitely continue working together until this problem is solved.
And I'm excited to see the future that we create where the technology that we have adds to humanity and doesn't subtract from it.
Me too. And thank you so much, Fadi. It means a lot.
Your Undivided Attention is produced by the Center for Humane Technology.
Our executive producer is Dan Kedmi, and our associate producer is Natalie Jones.
Noor Al-Samarrai and Mara Curtis Nelson helped with the fact-checking.
Original music and sound design by Ryan and Hays Holladay,
and special thanks to the whole Center for Humane Technology team for making this podcast possible.
A very special thanks to the generous lead supporters of our work at the Center for Humane Technology,
including the Omidyar Network, the Gerald Schwartz and Heather Reisman Foundation,
the Patrick J. McGovern Foundation, Evolve Foundation, Craig Newmark Philanthropies,
and Knight Foundation, among many others. Huge thanks from all of us.
