The Agenda with Steve Paikin (Audio) - Are Canadian Voters Vulnerable to Online Disinformation?
Episode Date: April 24, 2025. In a high-stakes federal election, accurate campaign information is essential, and the line between what's real and what isn't is blurry. The Canadian Digital Media Research Network is collecting data on disinformation and publishing its findings weekly to determine how vulnerable Canadians are during this election cycle. Taylor Owen, principal investigator for the Media Ecosystem Observatory, joins The Agenda to discuss. See omnystudio.com/listener for privacy information.
Transcript
Renew your 2.0 TVO with more thought-provoking documentaries, insightful current affairs coverage, and fun programs and learning experiences for kids.
Regular contributions from people like you help us make a difference in the lives of Ontarians of all ages.
Visit tvo.me slash 2025 donate to renew your support or make a first-time donation
and continue to discover your 2.0 TVO.
He was like a father figure to me.
Unfortunately, found myself in a very vulnerable position.
This is a story about a psychiatrist in Toronto accused
of abusing two of his patients, which he denies.
It's also a story about a system that is supposed to protect patients.
From TVO Podcasts, I'm Krisha Collier.
And this is The Oath.
Subscribe today, wherever you listen.
In a federal election as high stakes as this one, information about the issues, the parties and their leaders is essential and the line between what's
real and what isn't is blurry. Just how vulnerable are Canadian voters to the
disinformation and deep fakes they're exposed to online? Taylor Owen is with
the Canadian Digital Media Research Network, which publishes its findings
on election disinformation every week.
He's also the Beaverbrook Chair
in Media Ethics and Communications,
and the founding director
of the Centre for Media, Technology and Democracy
at McGill University,
and he joins us now from Montreal with more.
Taylor, good to see you again.
How you doing?
Likewise, great to see you too.
Excellent.
What is the Canadian Digital Media Research Network?
Let's start there.
So this is a network we have been developing over the last number of years to collectively,
as a research community in Canada, study the information ecosystem.
We began it in 2019, during that federal election, when there was a lot of concern, after what happened in the US in 2016 around the Trump election, that our information
ecosystem might be vulnerable to malicious actors who might want to undermine our democracy.
So for the last five years, we've been developing this network.
We collect information about our information ecosystem as a
whole, and we study it particularly during an election, a moment of
real intensity. We try to study malicious actors that might want to
affect and undermine our democracy, and we try to get a sense of what a healthy ecosystem looks like.
And the project specifically that you're working on now is what?
So right now we are intensely studying the Canadian election.
On a daily basis, we are looking, together with the public, to identify potential incidents
that undermine the credibility and the reliability of information we're getting during the election. We study how that information is flowing through the
ecosystem, across the seven social media platforms that we track, and we put
surveys out into the field to see if the information that's flowing through the
system is affecting people's behavior in a potentially negative or
misleading way in the lead-up to the election.
Why do you feel there's a need for this right now?
Well, we have believed for a long time in exactly what
was one of the main conclusions of the Foreign
Interference Commission that we've been through for the last
year and a half in Canada.
Justice Hogue concluded that, of all the things she was looking at in the foreign
interference space, the biggest threat to our democracy was the integrity of
information in our online information ecosystem.
And we believe that strongly.
We think that we've built a digital ecosystem that is largely governed by a few companies
and has real vulnerabilities in it.
While it provides us with a tremendous amount of positive information and reliable information,
it also has some core structural vulnerabilities that allow it to be manipulated by potentially malicious actors.
And how do you gather all the examples
of this maliciousness?
So during an election, we have a fairly detailed protocol
that we try and follow, where the public can send us
information about potentially harmful content they're seeing.
Journalists, obviously, report on
potentially misleading or false content in the ecosystem. When we
receive those incidents, we evaluate them. We receive dozens a day; as
I'm sure you're very aware, there's a lot of bad content on the internet, so people
have tons of examples to send us. We evaluate them based on a number of criteria: how far they spread, how much potential impact they might have,
and how the ecosystem is responding to that particular threat.
So, for example, something might be spreading and a ton of
journalists are covering it and fact-checking it very quickly, and the
ecosystem is responding in a healthy way. But sometimes this
information can be much more pernicious: it sits below the
surface, spreads far, and could really impact the election. When it's that kind
of content, we then study it in much more detail.
We put a survey into the field to see if large numbers
of people are being convinced by it one way or another,
if it's changing our behavior.
And we do a much more detailed analysis
of how it's flowing across these seven platforms,
where it might've come from, who is amplifying it,
and we issue a much more detailed report on it.
Well, let's do an example of that right now.
I'm going to ask our director Sheldon Osmond to bring a picture up here.
And you have classified this picture as an ongoing incident
throughout the federal election campaign because it is
AI-generated fake news.
And for those listening on the podcast
who can't see this right now,
I'm just going to describe it: a screenshot
from Facebook showing, frankly, a fake CBC ad.
It's a picture of Mark Carney,
and it leads to a fake CBC article.
It has since been taken down,
and in fact, even in the headline,
they have misspelled the word announcement.
But this is all phony. How common is this type of photo?
It's not just photos. There's actually a widespread campaign at the moment
to spread AI-generated video of the leading candidates in the election on Meta platforms.
So it's mostly happening on Facebook and Instagram.
The videos are created by AI, and they are largely trying to drive
Canadians to websites that host cryptocurrency scams.
So it's motivated by a financial goal, but the way they're doing it is
by capitalizing on the political moment we're in now and using its main
characters, Poilievre and Carney, as the templates for this
campaign. And what's really interesting about this one, and
really worrying, frankly, is that it's
enabled by a real confluence of events that have converged on the
Meta platform. This is the first election, remember, where Meta has banned news
links, so there's no actual links to journalism on Meta, so there's a void that this
is filling. We have a new unique capacity where
AI can both create content and exist in the world on its own
using language models.
So it can just be trained to do a broad set of things,
and then this content can be automatically created.
And we have a very different posture of Meta at the moment,
where they are taking down
some of their content moderation procedures and teams that they've had in place for the
previous two elections.
So this confluence of things, I think, has led to a real vulnerability in two of our
main platforms, Facebook and Instagram.
And we've seen a real spread of this kind of AI generated fake content targeting both major parties.
This sure sounds, as I think one of my grandparents back in the day would have said,
bass-ackwards: apparently Facebook is going to block out legitimate journalism and
real news, but allow the deep fakes and the AI-generated BS to get through.
Anyway, that's a little aside because I want to show another example here
because, okay, Sheldon, you want to bring this next one up here?
The CBC has investigated these images here.
We've got two shots again of Mark Carney.
On the left you see a screenshot of a fake CBC ad from Facebook
that included a deep fake video of Mark Carney,
compared with a still shot from the real video of his Liberal leadership speech from January 16th; that's the one on the right.
Now, these images are really similar; I mean, he's wearing the same thing. So who is this kind of
thing targeted at, and who is likely to be taken in by this kind of stuff?
I mean, as you say, it's incredibly difficult on first look to know if these things are real or not.
And that's the nature of the technology at the moment, is it's become almost indistinguishable.
And we can't actually place that responsibility on each individual user to know if something is real or not.
It's just it's too close.
That particular one, again, is people trying to financially benefit from the political moment we're in right now. A lot of attention is being paid
to news about the election. There's a real appetite in the public for information about
the election, and we're not getting access to journalism. So there's a void to be filled
on these two platforms, Facebook and Instagram, and people
who want to manipulate us, in this case to get us to fall for cryptocurrency
scams, have taken advantage of that opportunity.
But I think it reveals a bigger vulnerability, as I was saying before: we've built
a system, and in this case the platforms themselves have set policies, that have created a
vulnerability that any malicious actor, whether they want us to click on a cryptocurrency
link or want to change our behavior during an election, can take advantage of.
Taylor, do you know who's actually doing this?
No.
It's incredibly difficult to know the sources of these campaigns. This is the challenge
we face with foreign interference as well: the nature of anonymity and the ability
to mask one's location online makes it incredibly difficult. So what we do at the observatory
is just look at the outputs of the ecosystem; we see what's public. Intelligence
services can do something different: they can look at sources and intent, and
they have access to intelligence themselves. So we leave it to governments
to decipher origin and intent, and we just look at outcomes.
Do we have any reason to suspect this stuff is coming from offshore as opposed to homegrown?
It's almost certain that the cryptocurrency scams are coming from offshore. I mean, it's very unlikely that those are domestic, but it's also very hard to know.
Gotcha. Is there one side of the ideological spectrum that you suspect is more, as opposed
to less, responsible for this stuff?
Yeah, it's interesting. In what we've seen this election, no.
This particular campaign you're talking about, the AI-generated scam content that's
politically targeted, seems to be ecumenical in who it's targeting.
It looks like what's happening is the language model, the AI that's creating the content, is just looking at Canadian
news. So, Carney gives a speech one morning, that shows up in a news feed, the AI reads
it and adjusts the content slightly in order to drive engagement and links to
those websites. So, I don't think that's being targeted.
More broadly, though, it's very dependent
on the particular circumstance and the motive.
So looking back a number of years,
we have cases where foreign governments have targeted
one party or the other at different moments,
either to undercut a party who they
see as against their interests, working against their interests,
or in support of a candidate or a party or a leader
who they see as aligned with their interests.
So it's a very opportunistic playing field.
Well, okay, let me try to fill in the blanks here a bit.
So we're talking Russia, China, America, what are we talking?
So Russia, China, India, and the United States are our main antagonists in the information
ecosystem right now. Amazingly, of those four countries, Canadians are most worried
about the United States. 68% of Canadians say the United States government is the biggest foreign-interference threat
to Canadian democracy, more than Russia, China, or
India, which is astonishing in my view, but tells you how far and how quickly
this ecosystem can adapt and change.
That's a fascinating reflection on public opinion, but does what the public thinks actually mirror what you,
in your professional opinion, think is the country we ought to be most worried about?
Leading into this election, we shared that concern, because we have seen an incredibly antagonistic federal government
in the US and we have seen leading actors in the American ecosystem, like Elon Musk,
directly interfere or engage with false information in the politics of other countries.
In the months leading up to the Canadian election, Musk had engaged actively in British politics and the German election, spreading a host of
false information to inject both himself and his platform, importantly, into the politics of those
countries. So if you combine that with the US president actively seeking to undermine our very
sovereignty in the lead
up to the election, we were very concerned that that's what was going to happen.
That we'd get a flood of false content coming from leading American influencers, politicians,
media personalities into the Canadian election, trying to undermine either the credibility
of our election or to sway it one way or another.
Interestingly, that didn't happen.
So far anyway, that's been the dog that hasn't barked in the Canadian election.
And I think there's potential reasons, theories for that.
The main one, though, is that Trump has stopped talking about the 51st state.
Well, his press secretary hasn't, though. She mentioned it again last week.
No, no, under a question from one of our Canadian journalists.
Yes.
But the rhetoric leading up to the election was entirely different than what we've seen.
And I mean, the rhetoric from the White House changed soon after Prime
Minister Carney's call with Trump, but also after the premier of Alberta had been telling
US influencers and podcasters that this wasn't helping the Conservative
Party in Canada. So one thing or the other might have influenced this, but the
impact is that the big fish in the digital ecosystem, in this case Trump and Musk and JD Vance
and some of the big podcasters, just stopped talking about the 51st state and the takeover of Canada.
And when the big fish in this ecosystem stop talking about something, the discourse dissipates. We know this from the highly algorithmically filtered and centralized
feeds we now consume: on TikTok, on Instagram, on X, these are highly centralized feeds,
and they take a signal from the big players in them. So when Musk talks about something on X, it isn't just the content he produces that we see;
the entire ecosystem contorts itself around those messages.
And when he and Trump and the other big players stopped talking about the 51st state,
that discourse dissipated in Canada.
And that major vector of potential vulnerability that we were concerned about,
the Canadian government was concerned about, just didn't really play out, which is a positive
thing for our election.
Last couple of minutes here, Taylor, let's see if we can focus on a couple of more things.
Who, in your view, is most vulnerable to this kind of disinformation?
I actually don't think we can separate out segments of our society as more or less vulnerable here.
I think we've built a set of technologies, the social media platforms and the feeds
through which we get content, plus the ability for automated accounts to create content that's targeted very specifically at our biases and our beliefs and can be run at scale, and a set of policies that have left both largely ungoverned.
So those three dynamics are coming together to create a very sophisticated capacity to influence
our behavior.
I think we are all vulnerable to it.
I don't think you can tell the difference between those two videos.
I don't think I can.
I don't think that has anything to do with our political knowledge, our education.
I think it's a function that we are humans with biases and subjectivities, and these tools are designed to take advantage of those.
So to me, the solution is not us becoming more digitally
literate and questioning every piece of content
that comes across our feed, although we should be
doing that to a certain degree.
It's that we need to get at the vulnerabilities
in the very structure of this system, through better governance, more accountability, and more
transparency in the ecosystem itself.
And just finally, how confident are you that we will in fact be able to do that?
Well, I'm positive that we know the tools to do it. We have the policies at the ready that have been tried in other countries and that we know
work to bring accountability and transparency and safety and reliability to these ecosystems.
But so far in Canada, we have not done it.
So I'm hoping the next government, whoever it may be, takes this issue seriously, frankly.
Again, to where we started: as Justice Marie-Josée Hogue said, the biggest threat to Canadian democracy is the vulnerability and reliability of our
digital ecosystem. And I hope the next government takes that seriously.
You've given us a lot to think about, so thanks so much. McGill University's Taylor Owen,
from the Canadian Digital Media Research Network, joining us on TVO tonight. Thanks, Taylor.
Thanks so much. Thanks a lot.