Front Burner - Can an ad boycott fix Facebook’s hate speech problem?
Episode Date: July 6, 2020
Over 800 companies, including Microsoft, Lululemon, Pfizer and Canada's five biggest banks are pulling their ads from Facebook this month. They're just a few of the companies responding to the Stop Hate for Profit boycott, led by civil rights groups who want white supremacist content and misleading climate and vaccine information off the platform. Today on Front Burner, we talk to McGill's Beaverbrook Chair in Media, Ethics and Communications and Big Tech podcast co-host Taylor Owen on whether a threat to the tech giant's bottom line is the right incentive to deal with hate speech on the platform.
Transcript
This is a CBC Podcast.
Hi, I'm Josh Bloch.
It may not be obvious when you log in,
but some of your favorite social media platforms are in the middle of big changes.
There's been a push for them to fight hateful content online.
And everyone from Facebook to Twitch is rolling out new rules or bans.
We're adopting some new policies to prohibit a wider category of hateful content and ads.
Reddit shut down more than 2,000 forums, including its biggest Donald Trump community.
It's effectively shutting down the largest Trump supporter forum on the internet.
YouTube shut out several creators known for white supremacist content,
like former KKK leader David Duke.
And over 800 companies are demanding that Facebook go even further.
Their business model's flawed and they failed to take the steps needed to remedy it.
So I think now more than ever they're endangering human health
and weakening our democratic systems.
This is Ryan Gellert, general manager of Patagonia.
It's one of the brands responding
to a call from civil rights groups to boycott Facebook ads this month. Canada's five biggest
banks, Microsoft, Ford, and others are pressuring Facebook to crack down on hatred and misinformation.
So tomorrow, organizers of the boycott will meet with Facebook executives to talk about just that.
But can pressure from advertisers really keep a giant like
Facebook in check? That's today on Front Burner.
I'm here with Taylor Owen. He's the Beaverbrook Chair in Media, Ethics and Communications at
McGill University, and he's the co-host of the Big Tech podcast. Hi, Taylor.
Hi.
It's great to have you here.
So this month, there's this rapidly growing number of companies pulling their ads from
Facebook.
It's huge companies, including Adidas, Pfizer, and Lululemon.
Brands such as Coke, Honda, Verizon, and Starbucks.
In addition to Hershey's, in addition to Unilever.
Reebok, Clif Bar, Chobani.
And I could go on and on and on.
What are they hoping will happen?
Well, I mean, I think the tactic is to go after what the core of the Facebook business model is,
which is advertising.
We have to remember that Facebook is, at its core, one of the world's largest advertising companies, and 98% of their revenue comes from the sale of digital ads.
For these companies, who increasingly see a PR risk in being affiliated with some of the content that's circulating on Facebook and want to change the company's behavior, this on its surface seems like a fairly effective way of doing it.
You know, it's remarkable, though, the sheer number of platforms making policy changes and bans in this moment. It kind of feels unprecedented. I mean, you have YouTube banning white supremacists Richard Spencer and David Duke, Reddit shut down the popular subreddit The Donald, and then the live streaming platform Twitch temporarily banned Donald Trump for hateful conduct. Why in your mind is all this happening right now?
So I think the now is the critical
aspect of that. Why now? Why not after the UN found that the government of Myanmar used Facebook
to incite genocide?
Senator, what's happening in Myanmar is a terrible tragedy, and we need to do more.
We all agree with that.
Okay.
But UN investigators have blamed you, blamed Facebook, for playing a role in their genocide.
Or why not after the Duterte regime cracked down on their free press using the platform?
Freedom of the press is the foundation of every single right you have as a Filipino citizen.
Rappler uncovered online disinformation networks pushing pro-government messages
and threats against Ressa and other journalists.
You are a fake news outlet.
And I think the reason is it's happening now because the debate is happening in the U.S.
These are American companies where the bulk of their revenue at the moment still comes
from the United States.
And so in the United States right now, we have this confluence of Donald Trump using the platform to spread certainly harmful and arguably hateful speech.
No justice, no peace! No racist police!
We have a Black Lives Matter movement that is calling attention in an unprecedented way to the dangers of this kind of speech.
Demonstrations have erupted across the globe calling for justice and police reform.
Social media has erupted too, but not all of what you are seeing is true.
And then we have the coronavirus, where in a very visceral way, people are seeing the costs of false information circulating in our public sphere.
What do you mean?
Well, with coronavirus, we desperately need everybody to agree on some basic facts and
to change their behavior in a common way.
And when we see false information and conspiracy theories and anti-vax content.
On live TV, Trump asked a top official to look at whether UV rays or disinfectant can be inserted into bodies as COVID-19 treatments.
The denial of science has led to the United States having the worst epidemic.
I happen to be taking it. Hydroxychloroquine. Right now, yeah. Because I think it's good. I've heard a lot of good stories.
Coming from even the most senior levels in our
political system, spreading lies about a public health emergency on a platform in which people
get their information and their news about the world, I think we're seeing the impact of that,
certainly in the United States right now, where you're just not getting strong buy-in to these public health
measures, in part because people just don't believe the basic facts about this virus.
And it's a moment where I think people are starting to demand, and certainly advertisers are,
more responsibility from companies over what circulates on their platforms.
You know, Facebook really stood out around the Black Lives Matter protests,
around George Floyd's death, when Donald Trump tweeted,
when the looting starts, the shooting starts.
How would you know that phrase and not know its racially charged history?
Well, I've heard that phrase for a long time.
I don't know where it came from, where it originated. I view that phrase as... In 1967, the Miami police chief...
Well, I don't know. I've also heard from many other places, but...
Twitter added a disclaimer to the tweet, and it said that it broke rules about glorifying violence.
But Facebook, on the other hand, left that comment on social media without any disclaimer.
That seems to have got a lot of attention. I mean, walk me through Facebook's
approach to moderating its content when it comes to things like the tweets from Donald Trump.
The way they have contorted their rules and policies to allow for Trump to continue doing
what he does is one layer. The broader issue, though, is that Facebook just has a fairly restrained view
of how speech should be moderated on their platform. So particularly when it comes to
paid content and paid content by political actors, it's their view that the public should be able to
judge the speech of political actors.
I am committed to making sure that Facebook remains a place where people can use their voice
to discuss important issues, because I really believe that we make more progress together
when we can hear each other's perspectives.
And that it shouldn't be up to them to limit that speech.
And Zuckerberg has said that Facebook shouldn't be an arbiter of truth.
Twitter decided for the first time ever to fact check one of President Trump's tweets.
I wondered if you thought that Twitter may have made the wrong decision here.
We have a different policy, I think, than Twitter on this.
You know, I just believe strongly that Facebook shouldn't be the arbiter of truth of everything that people say online.
That is the position. The challenge with that, of course, is that they moderate speech all the time.
They have content moderators around the world applying their terms of service agreement and
the rules of Facebook to over 100 billion messages a day. So they are without a doubt moderating speech. Their algorithms
decide what we see, what voices get amplified, what content ends up in our newsfeed, what goes
viral. These are all acts of editing speech. However, when it gets to paid speech, particularly
paid speech of powerful politicians, they have a much less hands-on approach.
And, you know, we often hear that seeing speech from politicians is in the public interest. And in the same way that news outlets will often report what a politician says, we think it's important that people should generally be able to see it for themselves on our platforms too.
And I think some of the implications of that are particularly coming into public view now in the United States.
In a way, I think people in other countries always knew it was a problem. Again, if you lived in the
Philippines and you experienced what Duterte did using those same tools of Facebook, you probably
would have been calling for this a lot sooner. And certainly you would have felt the impact of this kind of speech paired with these kinds
of tools much sooner.
There's Maria Ressa, who founded the Rappler news website, which has been at the forefront of
fighting propaganda spread through social media.
I went to Facebook in August of 2016 and told them, we are where you're going to be.
We're the cautionary tale.
We're the canary in the coal mine.
But it's happening in the United States now, and that is getting the attention of both
broad populations and now at the moment, global advertisers.
So it's so interesting.
We're seeing the market step in to try and reform Facebook by voting with ad money.
That's right at the target of this Stop Hate for Profit campaign that's been led by a number of groups.
It was started by the campaign Stop Hate for Profit.
It lists the content it wants gone from Facebook and Instagram too.
But how effective do you think a boycott like this is likely to be?
So I think this is the core of this debate at the moment. My feeling is that the effect of this boycott will be limited. And it's partly because, as each one of these incidents arose, and this is not the first controversy over speech on Facebook, time and time again the market has sent a very clear signal, which is that there is a short-term cost, with a stock price dip and perhaps some decline in advertising revenue.
But very quickly, both the stock price and the ad revenue come roaring back.
It sounds like Mark Zuckerberg agrees with this, right?
He said just the other day that he expects this
revenue to come back quite soon. Right now, if you want to place a micro-targeted ad anywhere
in the world, Facebook and Google offer the most sophisticated products, and over 50% of all digital ads in the world go to one of those two companies.
So if you're an advertiser and you find this particular type of product
useful, you have no other options. And I just don't think we can expect the market to correct
what is ultimately a market failure. And the market failure is that we have largely a duopoly in advertising.
We have monopolies over the accumulation of data about our lives, where a very small number of
companies own that data and can benefit from it with these kinds of advertising services they can
provide. And unless we get at that and some of the harms affiliated with that,
then I think we're kind of missing the bigger picture here.
I do want to point out that Facebook says that, you know, it does put billions of dollars into both people and tech to moderate the content that's there, and that it has a sophisticated AI that's picking up 90% of the hate speech before it even appears online.
Zuckerberg also said that they are going to change some policies. To prohibit claims that people from a specific race or specific ethnicity or national origin
or religious affiliation or caste, any claim that they are a threat to the physical safety
or health or survival of anyone else.
And we're also changing our policies to better protect immigrants, migrants, refugees,
and asylum seekers. And that like Twitter, it's now going to tag newsworthy posts,
you know, that break the rules, tweets like the one that Donald Trump put out
there. We make a decision to leave up content that would otherwise violate our policies,
because we consider that the public interest value outweighs the risk of that content.
So we will start soon labeling some of the content that we leave up, because it is deemed
newsworthy. So now people are gonna be able to know when that designation has been made.
Do you think that these measures go some distance in terms of fixing the problem?
I think undoubtedly Facebook is doing things now
that they would have denied the necessity for or even the possibility of doing.
But what they're saying is actually a
little bit different. It's that 90% of the hate speech that ultimately is identified is identified
by AI. What we don't know is how much actually in total circulates but isn't found. And that's
important because the scale of the speech and the messages that are circulating
on these platforms is almost beyond our comprehension.
I mean, Nick Clegg, just in his piece talking about their responses last week.
Nick Clegg is the VP of Global Affairs at Facebook?
Yeah, exactly.
And he said that there were 100 billion messages circulating on Facebook every day.
Around 115 billion messages are sent on our platforms on an average day.
And only a tiny fraction of that is hate speech.
And some small percentage of those are identified either by AI or by their human content moderators.
But we do not know how many circulate unseen.
Right.
And we know that a lot of the harms that happen sit in private groups, or on WhatsApp, which is literally encrypted, so we can't even see what's happening.
And so I just think we need to really think about how the design of this technology leads to a scale of problem that is very detached from the types of solutions
we're talking about.
The civil rights groups behind the boycott are meeting with Facebook tomorrow.
If they really want to reform Facebook, what should they be doing? What should they be asking for at that meeting, and doing more broadly?
My feeling is that the attention should be focused not on small changes that might happen in the short term over the moderation of the platform. I think there will probably be more money committed to content moderation. I think we might see a little bit of movement on moderation of political ads in the United States coming up to this election. But my feeling is this kind of movement should be more directed at governments and at citizens, to support government intervention.
When we have a market failure that is causing harms in our society, that is when we expect governments to regulate.
And I think that's the moment we're now in.
I think we need to take Zuckerberg at his word when he says he's open to more regulation
and focus our attention on what that regulation should look like.
Not regulation that necessarily serves the interests of private
companies, but regulation that serves the interests of our democratic societies. And that's where a
big movement and a big public campaign could most effectively, I think, focus its attention.
And to what extent have governments been willing or reluctant to try and create or enforce that kind of regulation
online? So we've seen a real change in the last couple of years of governments stepping into this
space. But we also need to really get a handle on what kind of speech, as a democratic society, we want to allow in the digital public sphere. And these are really hard
questions for governments because they strike right at the core of a tension between our need
and desire for free speech in a democratic society and our responsibility to protect
individuals from the harms of that free speech.
And every government comes at this in a slightly different way due to cultural and historical experiences.
So one of the first countries we've seen really move aggressively on moderating content and hate speech online has been Germany.
...has passed one of the strictest online hate speech laws worldwide.
Here in Germany, big social media companies are asked to remove hate speech within 24 hours.
Because of their unique historical experience.
And so they literally banned Nazi speech on platforms and created very significant fines
for any piece of speech that circulated after it had been flagged.
The Justice Minister Heiko Maas, who introduced the legislation,
said it will prevent calls to murder, hate speech and Holocaust denial,
which he described as attacks on other people's freedom of expression.
But opposition is mounting.
Now, that had lots of challenges to it,
and other countries are now learning from what worked and what didn't.
What I do believe very strongly, though, is that in a democracy, for those of us who are
lucky enough to live in a democracy, these kinds of difficult conversations
really have to be had by institutions with democratic legitimacy and accountability.
Global companies sitting outside of our jurisdiction,
trying to apply norms of speech to everybody on the planet on a billion pieces of content a day,
do not fall into either of those. So even though this debate is hard for democratic governments,
and they're reluctant to step into this debate, I think it's their responsibility to take ownership of it.
Well, and it strikes me that last week we spoke with Filipino journalist Maria Ressa about how the president of the Philippines, Rodrigo Duterte, was using social media to stoke hatred
against his critics.
We also challenged impunity in a propaganda war, in the propaganda war, the use of social media,
Facebook in particular in the Philippines, because that's an American company. We continue,
despite the government's efforts to harass, to intimidate us, we continue moving forward.
And it seems like his government would not be one that you would really want
regulating the social media that he's
using to attack his critics.
Absolutely. And this gets at one of the best arguments that platforms
can make for a light touch on moderating speech, which is that for many of the users of their tools, they offer a much more democratic form of speech than the
governments of those citizens would allow. And in those countries, it is probably the case that
the free speech allowed on the platform vastly outweighs the harms that are caused by that speech,
because it is fundamentally democratizing.
So this gets at one of the structural problems in the way we've designed these technologies and our public debate: we have a company, in this case Facebook, whose rules apply
to billions of users in vastly different political, cultural, and economic contexts.
And so if you try and come up with one set of rules for all of those people,
you are going to get this real discrepancy between who is harmed and who is protected by the nature of that speech.
And I don't know a way around that other than saying that those of us who do live in democracies, who can have these conversations in a democratic way and impose our laws and our norms and our expectations around speech on the companies that determine our speech, have to do so.
Taylor, thank you so much for speaking with me today.
My pleasure.
Some news before we go today.
Canada's ethics commissioner is now investigating Prime Minister Justin Trudeau's decision
to have the WE Charity manage a $900 million student grant program.
I do not want you to let anyone get away with telling you that you're the leaders of tomorrow.
Because you're already leaders today.
WE is best known for WE Days, these youth empowerment events that happen in stadiums all around the world.
There was criticism from the moment the partnership was announced,
from the charitable sector, the NDP, and the conservatives,
in part because of the Trudeau family's long-standing connection with the non-profit.
He spoke at many WE Days before he became prime minister.
Last Friday, WE Charity and the federal government announced that the partnership was cancelled.
And hours later, this investigation was launched.
More on this on tomorrow's show.
That's all for today. I'm Josh Bloch. Thanks for listening to Front Burner.