The Agenda with Steve Paikin (Audio) - Is Big Tech politicizing social media?
Episode Date: February 4, 2025. Meta is ending its third-party fact-checking program in the U.S. and allowing "more speech" on Facebook and Instagram. It says it will move towards a community notes model similar to "X," as Mark Zuckerberg becomes more aligned with the Trump administration. The Agenda invites a panel of experts to break down what this means for us and the future of social media. See omnystudio.com/listener for privacy information.
Transcript
Meta recently announced plans to end its third-party fact-checking program in the US in favor
of a community notes model similar to X.
While the company argues it is moving towards more speech on its Facebook and Instagram
platforms, critics see it as cozying up to the Trump administration.
What does this move mean for users of the world's biggest social network?
Has big tech now overly politicized social media?
Let's ask.
In Montreal, Quebec, Taylor Owen,
he's the Beaverbrook Chair in Media Ethics
and Communications and founding director of
the Center for Media Technology and Democracy
at McGill University.
In London, Ontario, Carmi Levy, tech analyst and journalist.
And here in our studio, Brie McEwen,
Associate Professor in the Institute of Communication,
Culture, Information and Technology at UTM, the University of Toronto Mississauga.
And Erin Kelly, CEO and co-founder of Advanced Symbolics, Inc.
And it's good to have you two here in our studio and to our friends in Points Beyond.
Thanks for joining us on TVO tonight.
Carmi, I'm going to put you to work right away. It was on January 7th that Meta announced
that they were going to end fact checking on their platforms.
What, in effect, does that actually mean?
I mean, initially it doesn't mean a whole lot,
certainly not in Canada because this applies in the US only,
but it will have global impact.
What it means is that the company is making official
what we've long suspected: that they want to get out of the fact-checking and content moderation business, that they
don't want to be put on blast by Donald Trump in his second administration as they were
during the first, and that they're telegraphing that. They're essentially repositioning the company
to align with the political winds that are now blowing in Washington. It's pretty clear that this is part of a broader trend,
that big tech in total has been moving in that direction,
long seen as the opponents of Donald Trump.
Now they've been making pilgrimage to Mar-a-Lago,
and now the White House essentially meeting
with the president, making sure that they are
on the same side, and that over the next four years they can expect favorable political treatment, perhaps
supportive legislation for their business. So that's kind of what we can
expect. Really, if you're the CEO of a company, that's what you do. Your
primary accountability is fiduciary. You adapt your policies based on the context
of your current business. The politics are changing. The business has to change as well.
We did reach out to Meta for comment and here is what they said.
Sheldon, you want to bring this up and I'll read this out loud for those listening on podcast.
We are beginning with rolling out community notes in the US
and will continue to improve it over the course of the year before expansion to other countries.
Building a community will take time. There are no changes in other countries at this time.
Okay, Taylor, what should we infer from all that?
I think it's true.
They never really wanted to be in the content moderation
or fact checking business to begin with.
And I think it's worth looking back at why
a lot of these programs were rolled out to begin with, which was a reaction to some real challenges on the platform that we saw particularly in the 2016
U.S. election, where the political winds at that moment were going in the opposite direction.
The political winds were insisting that these platforms take their role as overseeing our
public sphere, our digital public sphere more responsibly.
And in response, Meta at the time hired apparently 20 to 30,000 fact checkers, or content moderators
I should say, and started training AI to do some of this work for them.
But I think in retrospect, really that was reacting to a different political moment.
And we're clearly in one where taking down those guardrails is not only something they
might want to do, and I think probably always wanted to do, and now they're sort of being
enabled to by the political moment, but also something that's going to have another big
effect on how we all experience our digital lives.
Brie, what was your reaction upon hearing
the end of so-called fact checking
on Instagram and Facebook?
Well, it's only the end of fact checking in the US, right?
They've been very clear that in the European Union,
they will not end fact checking.
And that's because they have to comply
with the European Union laws.
So the question for Canada will be,
do we end up like the European Union
and sort of with a digital services act or are we sort of the US model and whatever they want to appease Donald Trump with is what we end up getting.
So fact checking though has been a challenge for companies.
And I would say the content moderation team at Meta and the fact checking team have been different.
They've had independent fact checkers.
But when people get fact-checked,
their response often is not, oh, I
should look into this information
and see what quality it is.
It's, I'm being censored.
Fact checking isn't censorship.
It's actually adding more speech.
But people generally feel that way.
We don't really know how community notes have played out on X, because
we don't have independent access to data from X anymore.
But we do know that in smaller communities, community moderation can work much better
than a fact-checking system, right, adhering to the social norms.
But Meta's problem is that they are not a small community.
They are a large network with a lot of diverse viewpoints
within everyone's individual newsfeed.
And so they haven't really explained
how their community notes system is
going to handle the different context that they're working in.
Let me get you to follow up on that, Erin, the community
notes business.
Because millions of people may be on Twitter, but at the end of the day, they're actually
not a huge part of the population.
And maybe people don't know what community notes are.
So what is that?
So community notes is: somebody posts something,
and then you can write a note, like a little tag that's
appended to the post. So let's say,
what I've seen it most frequently used for is
an e-commerce outlet saying,
hey, buy my doggy blanket, right?
And it'll be shipped to you in two days.
And people say, actually, I ordered from this company
and I didn't get it for three months,
or when I did, it was poor quality.
So it's comments on e-commerce
or something that somebody posts.
It's not the same as responding to the post. It's a tag to the post,
sort of like a little note that you can read.
And are they usually helpful?
I think they're helpful, but it's not really an answer to mis- and disinformation.
So if I can sort of add to what everybody else is saying, I don't think this is a political decision by Facebook. It's a business
decision, as others have said. And honestly, I don't even think that
Facebook has been doing a good job on fact checking. I mean, you can argue that.
And I don't understand why we're leaving it to the networks to check themselves.
We don't do that in television. We don't do that with advertising; we have Ad Standards Canada, which is a third party that looks to make sure that
people are upholding ad standards. Yes, it's funded by business, we can get into all of
that. We have the CRTC for radio and telecommunications, which has famously decided it's not going
to regulate the internet. I think this is the government abdicating its responsibility
to monitor these networks.
The technology exists, you don't have to be in the platform to monitor it. In fact,
technically you can monitor it better from outside the platform. There's all
sorts of reasons for that because the algorithms are trained in a certain way
and it's very hard to have competing algorithms on the same network. We won't
get into a technical discussion but it can be better done outside with a
different algorithm. So I don't understand why
Elections Canada is not, for example, making sure that we don't have mis- and disinformation during elections.
That's their job. And as we move to a federated model, which we've already done.
So Mastodon is federated; I can put up my own server, which Mark Zuckerberg or nobody else has any control over.
I moderate that server, and I can bring all my crazy friends onto that server, and we can do all sorts of things.
And it's a direct connection.
So the government has to start monitoring this itself.
Let me pick up on that with Carmi, because if I wanted to be a smart aleck, which I'm
not, well, except every now and then, I might conclude by saying apparently we still care about facts
in Canada and the European Union,
but they don't in the United States, which is why they're not going to fact check anymore.
Now, would that be a wrong inference to draw?
No, I don't think it is, because let's face it, it's pretty clear the platforms,
which are largely American-based with a global footprint, don't want to be in the business.
Who would want to hire 20 or 30,000 content moderators?
Who would want to build the infrastructure, create the algorithms, build the AIs that will monitor this content in real time,
build a data center that can actually handle that kind of compute capability?
And so, no, they don't want to be in the business and
there isn't legislation in place in the US that compels them to do so,
nor is there in Canada.
So, I mean, I think it applies on both sides of the border.
It just doesn't fit the business model of big social.
And if the government is going to give them a free pass,
hey, let's get out of this and save ourselves the expense.
Oh, and by the way, look good in the eyes
of the current president, then hey, let's go.
And so I think that's really where we're at now.
Here in Canada, I wouldn't trust the government
to deploy any kind of technology.
We all remember Phoenix Pay, we all remember COVID Alert and all the apps that the government has tried and failed to
deploy at Cadillac or Porsche level prices without actually delivering what they promised.
And so technology is not something that Ottawa does well.
And I think certainly, on this scale, to ask them to use algorithms to monitor social media
platforms and discern misinformation from proper,
legitimate information, and then act on that, I think that's way too tall an order for a
government that can't even figure out how to pay its own people. So as much as I'd like to see that
happen, I really don't see that happening in our lifetime without some kind of major investment
in technology capacity at a government level.
Erin?
So I think there are different ways that we can do the monitoring.
I think that's our first problem.
I think getting into a situation where we're looking at everything everybody says,
I mean that makes you China or Russia, right?
It makes you a surveillance society.
It's very 1984.
Yes, but there's other ways and I think that's where we have to have a discussion about
how else we could detect mis- and disinformation.
So as an example, we know from research that mis- and disinformation actually travels through
the network differently.
So what do I mean by that?
If you were to create a community of alt-right, alt-left, many different communities, and
then you have a general population sample.
If misinformation comes out, or egregious information comes out,
it will travel more quickly in those other networks than in the general
population, because the general population will say, this is nonsense, right?
And so when you see something traveling differently, then you flag it. That's
going to reduce your costs a lot more than reading everything everybody says.
And we don't want to be reading everything everybody says. So I think
there are cost-effective ways. I wouldn't say that we should set it up like Phoenix.
I think it's a third-party agency. Ad Standards is not part of the government.
It's a separate organization that does monitoring on behalf of, you know,
following the legislation. So there are different models to do it. There are
different methodologies, and I think some are better than others.
But we're not even having that discussion.
We're just leaving it to people to police themselves.
Nobody in the country polices themselves, not even the police.
At the end of the day then, Brie, is there anything more to this
than Meta trying to suck up to Donald Trump?
It is a lot of that.
But I think there are so many threads going on here, and
for TV's sake I'm going to pull out a few of them.
One is that Meta doesn't like being in the content moderation business, but if you get
out of the content moderation business, you're still moderating what content is available.
And so they have always been in this position of reflecting the societies that they exist in
and having to think through how their algorithms influence
the way people see the world around them.
So when they change from something like fact checking
to community notes, they do have an influence on how
we see the world around us.
But I think that this particular moment,
the changeover in leadership at Meta,
the six-minute video that Mark Zuckerberg did,
and the changes to this particular strategy
shows us that Meta, out of all of the large social media
companies, is in the awkward position
of having taken the president of the United States offline at the end of his last term.
They did do that.
And I think the tell in their statement is how they say, how are we going to tell
people they can't say something online when someone can say it on the floor of Congress?
And they're telling us that in the American government, people are saying things that
are not true.
We don't want to be in the position of having to be
the one standing up going, that's not true,
because that's a politically difficult position for them,
that is a difficult business position for them
as they deal with regulation.
And so instead of doing that, they're switching
to this community note system so that they don't have
to fact check anyone.
And then they can say, if a community note comes out, oh, we didn't say that.
The community decided.
And so Meta is able to put this little bit of remove between themselves and US politicians
who may not be telling the truth or wrapping things in a kernel of misinformation in a
way that will allow them to say,
it wasn't us, we're just delivering the speech that people want.
Let me switch from Facebook to TikTok here, and Taylor, I'll get you in on this one.
Trump came to TikTok's rescue, which is a very odd thing,
because of course he's been talking a lot of smack about China lately.
So what's in it for him here?
What's going on?
Well, a few things seem to change his position.
I mean, one was a very large campaign contribution by someone who has a real
financial stake in the future of TikTok, after which, a day later, he
reversed his position on the ban.
So let's be clear about that.
I'm sure that was a complete coincidence. A total coincidence.
But in addition to that, I think he saw the political power he had on
TikTok through the election. He saw a political constituency in a younger
demographic using TikTok. And now with his proximity to the US tech sector,
I think he sees a real financial
and geopolitical opportunity for Americans
to take ownership in this technology
that previously was largely controlled by China.
So there's a bunch of things that play here.
And at the end of the day, the question of
whether or not we should ban TikTok
was a real conflation of very different constituencies. People who had concerns
about the mental health of kids, people who were worried about device addiction, people
worried about national security, people worried about data privacy. They were all sort of aligned
to a certain degree for a moment on banning TikTok. But those are very different political constituencies that, in the end, I think, just couldn't hang
together here.
Carmi, what's your prognosis? I know he's the most powerful man in the world,
but can even the most powerful man in the world force the Chinese
owners of TikTok to sell half their company to somebody in America?
Well, it's interesting, because there has been movement on that front.
Originally, the Chinese government said, no way, no how.
They will not allow the company to be sold.
They would not allow their proprietary technology,
the algorithm to fall into anyone else's hands,
and they wouldn't even consider the conversation.
Whereas now, we're seeing reports
that the Chinese government, the Communist Party,
they are having internal conversations about who they would like to have
buy the company, if it comes to that, and of course Elon Musk's name has risen to the top of that list.
And so Donald Trump has already, to a certain extent, achieved that end. He's kind of moved the Chinese government's position
to the point that they would consider either a complete or a partial sale without the algorithm, allowing whoever buys it, on the US side, to craft a new algorithm and incorporate it into the platform.
And so I think we've already seen movement there, but we also need to remind ourselves
here that this isn't about ideology.
This is all about getting out of content moderation because it costs them money.
Meta doesn't care about politics.
All it cares about is keeping us engaged on the platform and serving up as many ads as possible against that engagement. And ultimately, that is their
business. And if they can do it more cost-effectively by not getting involved in all
this messiness in the geopolitical space, this gives them a very convenient way out
of it. And at the same time, it lets them be buddy-buddy with the current administration.
Let me go back to Taylor on this. Do you think that's the play at the end of the day? That this was Trump's
way of delivering half of TikTok to a guy who gave him $250 million during the
campaign, namely Musk?
I mean, it's not impossible at all.
And it's certainly a reflection of a new alignment between US tech power
and US political power.
I mean, this is pretty unprecedented,
what we're now seeing, which is, and it's not just on TikTok.
It's going to mean a very aggressive pushback
against any country or region, in the case of the EU,
that tries to regulate American tech.
I think we'll see the real deploying
of American political and geopolitical power to push back against anything
internationally that's seen as not being in alignment or in the interests of US tech.
And that's a new moment.
And I think in Canada, too, we need to come to grips with that.
We're in a very different political place with this new constituency that's sitting
around him in the White House.
Let me do one more follow-up with you, Taylor, because I know you've got a hard out,
you've got to get going on to something else, so we're grateful for your time here.
But tell us, you say Canada's got to get a handle on this. Do we look like we are?
I mean, on the digital file, it sure would have been better if we had passed our Online Harms Act,
which could have dealt with a lot of the problems we're talking about in a very efficient way, before the election
of Donald Trump.
I think our internet would be far safer, and we'd be in a much more secure
position, in alignment with Europe on our regulatory policy, than we are now.
I think it's going to be very difficult now for any new digital governance to come into
play that isn't seen as being aligned with U.S. tech and political interests that want to push
back against our own ability to regulate our public sphere. And I think that's the
question we're going to be faced with here over the next three and a half, almost four years.
They're not oligarchs, they're broligarchs. Let's get it right here, okay?
That's the new expression. Taylor Owen from McGill University, thanks for
joining us. You better scoot if you're going to get
to your next thing on time.
Thanks for joining us on TVO tonight.
OK, well, let's pick up there.
Brie, do you want to pick up on that, this notion that, you know,
holy smokes, governments and tech giants,
I mean, this is going to be a perpetual struggle from now
until whenever.
Everybody's trying to get the upper hand.
How's it looking to you?
Well, you know, Mark Zuckerberg, the rumor
is, really did not like being called an oligarch.
But I think that that might be where we are.
And when I think of the Canadian context,
we're already looking at Meta not really abiding
by Canadian laws, right?
So we said, hey, you have to pay for news content
to be able to run it on your newsfeed,
which having high quality news sources on the newsfeed
is a way to help with misinformation
and the overall information environment.
And Meta said, you know what?
No news links for you, Canada.
I was trying to have a conversation
with someone in the States the other day,
and I said, oh, actually, here's this news story, and I realized I
couldn't post it. I couldn't have the evidence for that argument. And so if we
think about that being a source of Canadians getting information, we really
have a not great situation within our meta news feeds and the content that
we're getting. And so that is the question for Canada.
Can we make laws that these large companies care about
on our own, or will they just say, OK, well,
we'll just take our ball and go home, in which case
we may need to think about aligning ourselves
with the European Union or aligning ourselves
with the US, whichever way that plays out
and whichever way we think is appropriate for Canadians,
because we might not have the standing to just do it on our own.
That's a good question. How do you see it?
So first of all, I don't think Meta is a news platform.
We say it's a news platform, but it's an advertising platform.
That's what it is, okay? And its algorithms are there to find audiences for your content.
It's great if you're a small business,
you've opened a Mexican restaurant and it makes sure
that all the people who like Mexican food,
who live in your community see it and it's fantastic
that way, that's when it works really well.
If you're an anti-vaxxer, it finds all the people
who are anti-vaxxers and shows them your anti-vaxxing
theories and that's how it works.
It's an advertising platform that seeks out the audience
for your content.
So it's not a news platform,
it's not a fact-checking platform; it was never designed to be one.
We don't ask advertisers to be balanced
in their advertising.
Nobody says-
But there are lines they can't cross.
Exactly, and who keeps those lines? Ad Standards.
Not the advertisers.
If the advertisers had their way,
they would just say,
my sugar cereal is the healthiest thing you can eat.
And they're not balanced; ads aren't balanced.
We don't say, my cereal will make you fat and rot your teeth, but it's delicious. I mean, we just say it's great tasting.
But there is something called the Ad Council that's supposed to keep an eye on them.
There is, and my point of view is we need a council
similar to that.
I think we can have our sovereignty. What Mark Zuckerberg doesn't want is to be in the fact-checking
business. He doesn't mind if the government, if Elections Canada, says, you know what, there's
misinformation here telling people to go to this polling station when they shouldn't,
it's over here. He's not going to have a problem with that. He just doesn't want to pay 30,000
people to listen to every conversation
so that he can catch it when somebody says something wrong about the election or about vaccines
or what have you. And I want to put out there that they do do some fact checking. We haven't
seen any animal torture videos on social media and it's not because people don't post them.
It's because we all agree that that is disgusting.
It gets flagged immediately.
It's when you get into this gray zone,
and I will add that let's take it outside of social media
for a moment.
Remember the university presidents last year
who had to resign because of anti-Semitism?
In the States.
And they gave very lawyerly answers,
which is, I don't know what I have the jurisdiction to do
here.
Can I kick this student out of university because he's protesting the Gaza War? I
don't know, right? And a lot of people don't know. We had a trucker protest in
Ottawa a few years ago. I'm from Ottawa. Three weeks of horn honking, and
people saying get these people out of there. Oh, well they have the right to
protest. Is this a protest? I mean they have the right to stand on Parliament
Hill. Do they have the right to honk their horns all night? So we don't
know as a society what people's rights are and where your rights infringe
on my rights. We really don't know that yet and it's not just in technology.
A couple of minutes to go here, and, Carmi, I want to... let me give you the
responsibility of explaining to us the introduction of AI characters on Facebook
and the significance thereof.
What's that about?
Because Facebook is not growing like it used to.
It is aging out.
Young users don't use it.
Its demographics are significantly older
than on any other platform.
And so they have to find a way to keep
the size of the network growing.
They have to find a way to maximize engagement,
and you're certainly not going to attract more humans
to your platform now, so you create AI accounts
that look and feel just like a human account
to drive engagement and give advertisers a reason
to pay the rates that they're paying,
and convince them that, yeah, we are going to get you
in front of a very large audience,
even if some of that audience is not necessarily human.
That's what this is all about: it's a dying platform.
We are reaching the beginning of the end of Facebook
and other platforms like it.
And they're finding all sorts of creative ways
to make sure that they can keep the money spigot on
for as long as they possibly can
before the whole thing runs out of gas.
In your view, is that kosher to do that?
Not even remotely. Morally, ethically, I mean, certainly not.
There is no law that says it's illegal
because the laws are way behind the bleeding edge of technology. But morally and ethically,
if I see content or any kind of asset that's synthetic and it's not labeled as such,
and I'm led to believe that I'm interacting with a human, that's dirty pool. And so the company will
say, well, we've put a label there. But if it's tiny and in the corner and I don't really notice
it and I'm being duped, I think we should have
a moral and ethical conversation around that.
And ultimately, what is it doing to us?
Why are we even engaging with platforms like this
in the first place, if this is the kind of game
they're playing to make sure that we stay on them
as long as possible?
It's not really where I want to go.
And quite frankly, when I first joined Facebook,
it isn't what I signed up for.
Last 30 to you, Erin.
And that's not new.
China has been putting synthetic users on Facebook for a long time, and not just Facebook,
all the social media platforms, clicking on ads, because companies
pay per click, so running up their tabs while they're not getting any sales.
We've been seeing this for a long time and all the platforms know it's been happening.
So unfortunately, anybody can put that on there and game these networks.
And it really is buyer beware for the companies who are paying per click to make sure they
have a system.
And again, it's monitoring and don't expect them to monitor themselves to make sure that
there's integrity here.
I don't think the networks can do it themselves.
That's my last 30 seconds.
There we go.
My thanks to Carmi Levy in London, Ontario for joining us, Carmi.
Always great to have you on our program.
Brie McEwen from UTM, Erin Kelly, Advanced Symbolics Inc.
Thanks so much, everybody, for being on TVO tonight.