Front Burner - Twitter trolls target Canadian pipeline, immigration debates
Episode Date: February 15, 2019
CBC/Radio-Canada journalists crunch the data on more than 9 million troll tweets and reveal foreign campaigns to influence Canadians' opinions. Retweets focused on issues like pipelines and immigration. Jeff Yates joins us to explain what he learned. Elizabeth Dubois from the University of Ottawa paints the wider picture of how troll activity is changing.
Transcript
My name is Graham Isidor.
I have a progressive eye disease called keratoconus.
Knowing I'm losing my vision has been hard,
but explaining it to other people has been harder.
Lately, I've been trying to talk about it.
Short Sighted is an attempt to explain what vision loss feels like
by exploring how it sounds.
By sharing my story, we get into all the things you don't see
about hidden disabilities.
Short Sighted, from CBC's Personally, available now.
This is a CBC Podcast.
I said, do you know who did it?
And he said, we have a very good idea who the responsible person was.
Uncover, bomb on board.
Investigating the biggest unsolved mass murder in Canada,
CP Flight 21.
Get the Uncover podcast for free on Apple Podcasts
and Google Podcasts.
Available now.
Hello, I'm Jamie Poisson.
After the 2016 presidential election in the United States, it was clear that Twitter had a problem.
We aren't proud of how that free and open exchange has been weaponized.
Troll armies, propaganda through bots and human coordination.
Worse, a relatively small number of bad faith actors were able to game Twitter to have an outsized impact.
Bots and trolls, based in places like Russia, played a role in shaping the political conversation
and were accused of influencing the result.
The company has since banned tens of thousands of accounts.
And it's gone a step further, releasing archives of tweets from a lot of these dead accounts.
That information shows that trolls,
bots, and political manipulation
aren't just a problem in the United States.
As a new report reveals,
trolls from foreign countries
are working hard to amplify rage
on polarizing issues.
CBC/Radio-Canada in Montreal
analyzed nearly 10 million tweets.
They've concluded that Twitter trolls linked to suspected foreign influence
stoked controversy over pipelines and immigration here in Canada.
Today I'm talking to Jeff Yates, one of the reporters on this story,
about who was behind this troll campaign and why.
And then I'm speaking to Elizabeth Dubois, an expert on this stuff,
about how we should deal with the problem.
This is Front Burner.
Jeff, hello.
Hi, thanks for having me on.
Thanks so much for being here.
So 10 million tweets, that's a lot of tweets.
Can we drill down on the ones that directly targeted Canadians?
How many are we talking about here?
So we're talking about a little over 21,000 tweets directly targeting Canadians.
So we looked at these 9.6 million tweets.
And so we compiled a database of tweets that were using certain keywords pertaining to Canada. So, you know, we looked at tweets that mentioned Canada, tweets that mentioned certain Canadian issues or politicians or even cities or popular hashtags like, you know, OnPoli or QcPoli, stuff like that.
So we're pretty confident that these tweets were at least, you know,
amplifying messages that were either for or by Canadians.
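To make that filtering step concrete, here is a minimal sketch of the kind of keyword filter Jeff describes, written in Python with pandas. The file name, the tweet_text column name, and the keyword list are illustrative assumptions for this example, not details confirmed by the CBC/Radio-Canada analysis or by Twitter's released archives.

```python
# Minimal sketch of filtering a troll-tweet archive down to Canada-related tweets.
# Assumptions: the archive has been downloaded as a CSV with a "tweet_text" column,
# and this keyword list is illustrative, not the one the journalists actually used.
import re

import pandas as pd

CANADA_KEYWORDS = [
    "canada", "canadian", "trudeau", "ottawa", "toronto", "montreal",
    "trans mountain", "kinder morgan", "#cdnpoli", "#onpoli", "#qcpoli",
]

def find_canadian_tweets(csv_path: str) -> pd.DataFrame:
    """Return the subset of tweets whose text mentions any Canada-related keyword."""
    tweets = pd.read_csv(csv_path, usecols=["tweet_text"], dtype=str)
    text = tweets["tweet_text"].fillna("").str.lower()
    # Build one pattern from the lowercase keywords; re.escape keeps "#" and spaces literal.
    pattern = "|".join(re.escape(keyword) for keyword in CANADA_KEYWORDS)
    return tweets[text.str.contains(pattern, regex=True)]

# Example usage (file name is hypothetical):
# canadian = find_canadian_tweets("troll_tweets.csv")
# print(len(canadian), "tweets mention Canada-related keywords")
```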
And what are these tweets saying? What are the messages?
So they're basically retweeting Canadian activists.
You know, the eighth most retweeted account in our database
was a Greenpeace activist who's critical of pipelines.
Prime Minister Trudeau, it's time to reverse course on Kinder Morgan.
You promised climate leadership.
You promised indigenous reconciliation.
And by endorsing Trans Mountain, you violate both.
But they were also sharing, you know, news articles.
They were sharing opinions of, you know,
Canadian citizens that are against pipelines. So, you know, we're not talking about fake news here,
we're talking about amplifying messages and opinions that were already there in the Canadian
Twittersphere.
So is the idea here that Twitter identified these 9.6, almost 10 million tweets as tweets that originated from trolls, either
trolls who are people or, you know, bots that are performing the role of trolls. And they published
all of these tweets. And, you know, from this huge database, you have been able to cull, you have been able to identify 21,000 Canadian tweets.
Yeah, exactly. Exactly. I think the, you know, why Twitter is doing this is because they were
criticized in the past, you know, when they would remove fake accounts or potentially malicious
accounts, they would just delete them. And then, you know, journalists or researchers couldn't look
at what they were doing. So this is kind of them trying to be a bit more transparent
and allow us to look at what they were trying to do, basically.
The company says it's now releasing all this content
to enable further independent academic research and investigations.
And I know that you mentioned the pipeline tweet,
specifically retweeting activists who are anti-pipeline.
But can you give me some examples of some of the other Canadian-centric tweets that you found in this massive stockpile of data?
Yeah, so we saw a lot of tweets about immigration issues, you know, stoking, you know, amplifying very negative messages about immigration, refugees, Muslims,
a lot of conspiracy theories as well.
We saw a huge spike of activities around major news events.
So, for example, the Quebec mosque shooting two years ago.
There are many questions remaining.
We don't have identities of any of the victims,
or again, what might have led to what police are calling this horrible mass murder.
And, you know, there was a huge spike of activity of bots retweeting accounts talking about this event, mostly in a negative light. There were some conspiracy theories in there, you know, basically putting into question the official story of how things went down, claiming that there was a second shooter, for instance.
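As a rough illustration of how such a spike would show up in this kind of data, here is a short sketch that counts tweets per day around a news event. The tweet_time column name and CSV layout are assumptions made for the example, not details given in the episode.

```python
# Rough sketch: count tweets per day so a burst of activity around a news event stands out.
# Assumption: the archive CSV has a parseable "tweet_time" timestamp column.
import pandas as pd

def daily_counts(csv_path: str) -> pd.Series:
    """Number of tweets per calendar day."""
    tweets = pd.read_csv(csv_path, usecols=["tweet_time"], parse_dates=["tweet_time"])
    return tweets.set_index("tweet_time").resample("D").size()

# Example usage (file name and window are hypothetical):
# counts = daily_counts("troll_tweets.csv")
# print(counts.loc["2017-01-25":"2017-02-05"])  # window around the January 29, 2017 mosque shooting
```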
And what do we know about where these bots or trolls were coming from?
Yeah, so Twitter published databases of suspected influence operations in four countries.
So we looked at Iran, Russia, Venezuela, and Bangladesh. So Twitter says that these accounts seem to be run out of these countries. Once again, you know, they can't say for sure who is running this. They can't say for sure that it's actually a government doing it. But they do suspect that it looks like an influence campaign. And at the very least, it's not authentic behavior.
So it's not actual people behind, you know, it's not personal Twitter accounts.
It looks like it has a certain objective in mind.
Again, I'll pick up on this pipeline tweet that it looks like it originated in Iran.
Why would Iran want to intervene in a pipeline debate
in Canada?
That's an interesting question. It's hard to determine because, you know, we can't look
at the actual account. So we can't see who they were following, who was following them. We only
have like basically the tweets and certain statistics about those tweets. So, you know,
I talked to a Middle East expert here in Montreal who observes Middle Eastern politics.
He was saying that, you know, Iran does have a certain interest in seeing Canadian pipeline projects fail because, you know, Iran is facing down sanctions.
So they can't sell oil to Europe or to North America.
They have to go towards the Asian market.
And, of course, the pipeline projects in Canada could help Canada sell more oil to Asia.
So it might be a way to protect their market stake in Asia.
This might have been the beginning of a larger operation.
So they might have been retweeting activist messages, trying to ingratiate themselves in this community for an ulterior purpose that we don't know.
So, you know, this was taken down, so we'll never know what it could have done in the
future.
But that might also be a part of that.
You know, we saw stuff like that during the 2016 presidential election.
You know, Russian agents did this.
So they would pose as activists for Black Lives Matter or for gun rights or stuff
like that. And they would basically build audiences in these activist circles and then use that to
propagate certain messages to discourage people to vote and stuff like that.
Deputy Attorney General Rod Rosenstein this afternoon announced charges against 13 Russian
nationals and three Russian companies. The defendants posed as
politically and socially active Americans, advocating for and against particular candidates.
They established social media pages and groups to communicate with unwitting Americans.
And I should say, these 9.6 million tweets that you've analyzed, the 21,000 Canadian tweets, what years do they span?
So some of them went back pretty far. I think the first one was something like 2011, and they were active until November of last year.
So that is really incredible to hear, because I think for a lot of people, myself included, it feels like this is a relatively new issue. But to hear that this was happening all the way back to 2011, it just really gives me pause.
Yeah, well, when you look at the operations that were done for the 2016 presidential election in the US, a lot of these things were put in place years before the election. It was a long game. And this might be what they were trying to do with these accounts, retweeting Canadians.
It might be a long game.
You know, building communities online takes time.
And, you know, not all of these efforts will work.
So, you know, a lot of these Twitter accounts had very few followers.
But, you know, maybe it's just that they're playing a statistical game,
you know, so you create a thousand accounts,
maybe one of them will become really popular
and then you can use that.
But, you know, this has been going on for years.
We'll be back in a second.
Canada's largest library of audiobooks, including exclusive content curated by and for Canadians.
Experience books in a whole new way, where stories are brought to life by powerful performances from
renowned actors and narrators. With the free Audible app, you can listen anytime, anywhere,
whether you're at home, in the car, or out on a jog. The first 30 days of the Audible membership are free, including a free book. Go to www.audible.ca to learn more.
Elizabeth Dubois is with me now.
Hi, Elizabeth.
Hello.
Thanks so much for being here.
My pleasure.
Elizabeth, you teach at the University of Ottawa and you study digital media, influence and politics.
And I have a lot of questions today.
We know that foreign influence campaigns have been a problem in other elections.
And that's part of why Twitter released this data and our colleagues crunched it.
And what I want to talk to you about is how we deal with this problem today. But first, I'm curious to know whether you even think there's a big problem in
Canada compared to what's happened in other countries in the world, like the United States
and France, for example. What we might assess as a very covert effort in 2016 in the United States is a very overt effort as well as covert in Germany and France.
Yeah, that's the big question, isn't it? I mean, in terms of comparing to other places,
it's clear that we haven't had the kind of interference yet that these other countries
have faced. That doesn't mean we're not going to get it in the 2019 election. And it doesn't mean that
there aren't, you know, people and groups at work right now. The tricky bit is when you are trying
to innovatively use technology to get your political messages across, you often try and do
that under the radar because you don't want the people who have the opposite message to you using your same tactics, right?
Like you've got the strategic advantage if you keep it quiet.
And so it's really hard for us to actually even know what's happening.
And I know that you also studied this issue during the 2015 election. Did you see these bots or trolls sort of messing around fault lines like pipelines, immigration, the Quebec mosque shooting, for example?
Yeah, so we definitely saw evidence of some
bots being active in the 2015 election. We didn't do a deep dive into the specific topics that they
were looking at. But certainly, immigration is one that we saw frequently being discussed
online, and bots were part of that conversation. When my colleagues were crunching all of this
data from Twitter, they found that a lot of these trolls or bots that were also trolls
originated from a few countries: Iran, Venezuela, and Russia. And I think we have
a pretty good sense why Russia wants to intervene in elections. You know, my colleague Jeff was
talking about how Iran might also be interested in intervening in a pipeline debate in Canada
because of oil issues there. Why Venezuela? That's a good question. And it's not one that I have investigated myself.
I think when you look, though, around the world at where some of the most unrest is politically,
and you think about where, in particular, populist movements have been strongest,
Venezuela is definitely one of those ones. And the kinds of tactics that are being used in these disinformation campaigns
are not necessarily linked to populist campaigns,
but so far we've seen that correlation show up.
And so that is one potential reason among what I would imagine
would be a really wide variety. And I just can't speak to that definitively.
The Twitter data my colleagues crunched, it goes back, you know, some of these tweets are as old as 2011. So is the strategy shifting?
Yeah, I think that that's a really important point. The way people use Twitter in
2011 is not the same way people use Twitter today. The way Twitter identified what was
against community standards and what wasn't in 2011 is different from today. The way that the
back end of Twitter identified what is reasonable automation and what is spam is different.
We've seen positive results from our work.
We're identifying and challenging 8 to 10 million suspicious accounts every week.
And we're thwarting over half a million accounts from logging in to Twitter every single day.
And so because all of those contextual factors have changed, so have the approaches of the people who are trying to send these political messages out.
Those approaches have changed too.
They maybe were pretty simple attacks when it was, I am going to find the person talking about this thing that I don't like and I'm going to send them lots of hate until they leave, right? That's a pretty non-sophisticated way of using Twitter. In 2011, maybe that happened.
And today, maybe we would see a much more sophisticated, coordinated attack, a network
of different accounts or users who are all engaging in a way that makes it harder to catch any one of them,
but the end result might still be the same,
that the person on the receiving end of the attacks is pushed away.
Interesting.
You know, Twitter, we're talking about Twitter a lot today,
but it represents a fraction of the available digital audience.
Facebook obviously dwarfs it.
So can you also give me a sense of what's happening elsewhere? Is the same stuff happening on Facebook, for example?
This is where we come to the really big research problem. The reason we talk about Twitter a lot
in this context is because Twitter's data has typically been more accessible and there's been
a bit more transparency around what they're doing
and how they're going about it in terms of dealing with trolling and disinformation. Not that there's
been a ton of transparency, but it's been a little bit higher than in the case of Facebook, for
example. And so we just really don't know much about what's going on on Facebook. We know that Facebook themselves have identified coordinated attacks and coordinated malicious actors.
Our teams have found and shut down thousands of fake accounts that could be attempting to influence elections in many other countries, including recently in the French elections.
Definitely there's evidence even from within Facebook that some of this is happening, but we don't really have access to Canadian-specific data to be able to go and do that deep dive to know what the specific contours of it are in our case.
And what's Facebook's reasoning for not releasing this data to you, for example?
Well, there's a bunch of different things.
One is that, at the end of the day, they have a bottom line. They are responsible to their shareholders and not to Canadians and the good of Canadian democracy.
Part of it is if you make more transparent the processes you use to identify bad actors, it gives bad actors more of an opportunity to figure out how to game
that system.
I had not thought about it from that perspective.
Right, and so that's obviously not an ideal situation.
Before I let you go, I want to get a sense here on what we can do about this.
The federal government has just announced a strategy to deal with election meddling.
We have been watching and learning from the experience of others, making our own assessments, and have developed a plan that will protect Canada's election.
So they're setting up a panel of people to warn us if the election is being undermined.
There's a plan to bring together a bunch of our intelligence services.
There's a bunch of money, $7 million on digital news literacy. They're also calling on social media
companies to do more.
We see that the social media companies, while they're starting to take some
responsibility, still have a ways to go. And I think for their consumers and their users, they
want to know that when they're interacting on those platforms, that they can have confidence
in the interactions that they're having.
This all seems pretty top level to me.
Do you get a sense we're able to combat this kind of activity with this sort of plan?
I think it's a really good step. I think there's a massive problem when we think about the way tech companies have been addressed in that announcement that Minister Gould made.
They were saying, you know, essentially, we're going to ask the tech companies to comply. We're
going to hope that they put Canada's democracy first. But there's no regulatory reason that
they absolutely need to in most circumstances. And so I'm worried about that.
Well, I think that's a good place to leave it.
Elizabeth, thank you so much.
My pleasure.
Thanks for having me.
That's it for today.
Front Burner is a daily podcast from CBC News and CBC Podcasts.
It's produced by Chris Berube, Elaine Chao, Shannon Higgins, and Stephen Howard.
With help from Aisha Barmania and special thanks to Sylvia Thompson, Marie Claudet, Suzanne Dufresne, and Kim Garrity.
If you ever think to yourself, hey, this show sounded really good today, that's Derek VanderWijk.
He does our sound design and technical work.
Our music is by Joseph Shabason of Boombox Sound.
The executive producer of Front Burner is Nick McCabe-Locos.
And I'm Jamie Poisson. See you Monday.
For more CBC Podcasts, go to cbc.ca slash podcasts.
It's 2011 and the Arab Spring is raging.
A lesbian activist in Syria starts a blog.
She names it Gay Girl in Damascus.
Am I crazy? Maybe.
As her profile grows, so does the danger. The object of the email was, please read this while sitting down.
It's like a genie came out of the bottle and you can't put it back.
Gay Girl Gone. Available now.