Ideas - Platforms, Power and Democracy: Understanding the Influence of Social Media
Episode Date: August 21, 2024
Research around social media was already hard to do. Now it's even harder. Researchers describe how Big Tech and right-wing lawsuits block efforts to hold social media giants accountable. This episode originally aired on Feb. 1, 2024.
Transcript
This is a CBC Podcast.
Is there a principal reason why I should delete my social media?
And if so, what is it?
There are two.
One of them is for your own good and the other is for society's good.
Welcome to Ideas. I'm Nahlah Ayed.
The damage social media is doing globally to public discourse and to democracies is outweighing the benefits of it.
In the years since Mark Zuckerberg launched Facebook in a Harvard dorm room in 2004,
we've been hearing more and more about the dangers of social media.
For decades, there's been that Surgeon General's warning on packs of cigarettes.
But this morning, for the first time, a new warning about something else,
social media and what it means for kids' mental health.
Of course, these revelations rarely come from social media companies themselves.
More often than not, they're the product of rigorous work with researchers combing through mountains of data.
But that mountain has become even harder to climb.
In 2022, X, the platform formerly known as Twitter,
cut off free data access for researchers.
Meta, TikTok and Snapchat
have also implemented restrictive policies.
And it's not just big tech
that's become hostile to researchers.
The past year has also seen
a political assault on research,
with some politicians,
like Republican Congressman Jim Jordan, alleging that those doing this kind of work are actually part of
the censorship industrial complex. I want to thank our witnesses for appearing before us today
and helping us to continue our work in exposing the censorship industrial complex, this marriage
of big government, big tech,
and as we found out with some of our work, big academia.
In the U.S., dozens of social media researchers
are facing investigations from House Republicans,
and some are facing lawsuits from conservative legal groups.
One of those researchers is Renee DiResta.
When state actors are undertaking these operations,
it's not because they are just doing something for fun, right?
They're aiming to achieve a power aim, right?
And so when we do work that disrupts that,
a lot of the response is actually
to try to discredit the researcher.
Renee DiResta is the research manager
at the Stanford Internet Observatory,
and one of the panelists featured
in a conversation I moderated
at a conference
in Montreal called Attention.
It makes it really hard for us to really understand what it is that's happening with respect to
platforms, power, and democracy, when the people who were supposed to be doing the verifiably
accurate and deep research about it are being attacked at every turn.
Michael Wagner is a professor at the University of Wisconsin-Madison School of Journalism and
Mass Communication. Like Renee, he's also being investigated by House Republicans
for his work on disinformation. They were joined by a third panelist.
Our job is really very fundamentally about telling the truth and
about finding the truth. And finding the truth in places where people in power don't want you to be
looking. Joan Donovan is one of the world's foremost experts on disinformation and is an
assistant professor of journalism and emerging media studies at Boston University. Immediately before her current post, Joan had worked at Harvard,
where she conducted research that was critical of Facebook.
But shortly after a half-a-billion-dollar donation to Harvard
from a foundation run by Mark Zuckerberg and his wife Priscilla Chan,
Joan's research project was terminated.
Harvard tried to destroy my career.
I believe it was just the decision of the dean to terminate me
because I was making trouble for the donors.
I sat down with Renee, Michael, and Joan to talk about what they're doing
to both study and resist the influence of big tech.
I'd like to begin this conversation by just getting a quick opening salvo from each of you. In this day and age, of course, I think most people, and certainly
I would imagine everybody here, understands that there is a problem with social platforms in general. One description we like to talk about
on the show is that social media is neither social nor media. But I want to hear in your words
where you stand. What's the most urgent problem that requires attention? And Joan,
I'd like to start with you. Hey, everybody. What a good-looking group.
I'm really enjoying the gender balance right now,
so it's nice to see you all.
Yeah, I mean, we could rank these issues in many different ways.
I mean, whenever we think about Internet technologies and innovation,
we tend to tell a very positive story of tech platforms that have connected many to many,
have given voice to the voiceless, have given rise to new civil rights movements across the globe.
And yes, there is some of that, but technology is a paradoxical innovation
in the sense that with the good comes the bad. And over the last
more than a decade of my research into internet technologies, we've gone from seeing
social movements being the first to adapt to social media to now states and governments and
frankly military operations being carried out on those same platforms.
The word platform is very intentional here because they didn't want you to think about
social media as a product. They didn't want you to think about it like a gun or like tobacco
or like pharmaceuticals. See what I'm getting at, right? All of these other big industries
that make tons of money are regulated in different ways to protect who? The consumer, protect national
security. So we've entered this moment where not just social media, but all of these digital
technologies have gotten away with it. And they've gotten away with it primarily because of some U.S.
laws, which suggest that platform companies are merely information conduits, and they're not
businesses like other businesses. Uber drivers are not workers like other drivers.
And in that moment, I think the most pressing thing that researchers can do is bring the cloud down to earth to demystify technology, to treat it like a consumer end product and say,
yeah, we're going to build technologies with endless scrolling that give teenagers the excitement of getting 400 notifications a day, and not worry about how distracting that
might be while they're at work. We don't think about the health effects of these platforms. We
don't think about the regulatory capture that these companies have done to prevent any regulation
from coming to these products. And in some ways, we buy into the myth. We buy into the hype,
because we ourselves benefit from being able to stalk our exes.
And so I'll stop there, but it's a problem of regulation,
it's a problem of lobbying, it's a problem of the companies getting away with it,
primarily because we as a society
misunderstand the product itself as magic,
rather than understanding it like we understand other
harmful products. Renee, what would you say? So I think my succinct answer would be unaccountable
private power, right? That's the challenge. So I think one of the key gaps right now, and the reason
that it remains such an unaccountable private power, is that there's very little visibility
into a lot of the harmful dynamics on the platforms. Some of the ones Joan alluded to,
I think the area that I've been most captivated by
in the work that we've put out lately
has actually really been around child exploitation content.
And so as we have looked at questions even related to
how do you detect and disrupt networks devoted to that,
it is demonstrably illegal, of course,
but also you have the questions of how do platforms
do that detection work themselves? Is there oversight that is ensuring that they do it?
Who is looking on the outside to provide an oversight function when there has been very
little actual government oversight, particularly within the United States? So I think that question
of how do we gain visibility and then how do we think about accountability requires a very multi-stakeholder type response.
Thank you. Mike?
These are both really good answers.
And so I'm going to do something a little stranger, I think.
So I'll talk about this in kind of two ways.
One of which relates to how we understand how humans behave on social media.
And we tend to think of it as people doing things on social media.
But that is not how most people live their lives.
They live their lives in an overall information ecology
where we talk to human beings face-to-face about things that matter to us.
We watch news on television.
We read news on papers.
We scroll on our phones mindlessly over
and over and over. And I've done that at different points during the day, even when listening to all
the cool things that were happening up on stage. But one thing is that we tend to say, is social
media causing this problem? Is social media to blame for this or that? It's how our use of social
media interacts with all of the other things that we get information from, that we learn from,
that we agree with, that we disagree with. And we spend too little time thinking about that.
The number one source for human beings in democracies in North America to learn about
what's happening in the world is not social media, it's local television news. And that's
really hard to study because it's in the air and it's gone. And so it's way harder to study than
it is to study social media.
So the behavior answer, I think, is we kind of misunderstand social media's power when we think about what it does.
The other problem, I think, is that one thing that researchers
are supposed to be able to do in this space is act in a way as referees.
What can we verifiably tell you is true or not
about the way different platforms operate
and how they influence what people believe to be true, what they want, how they behave, how they
organize, all those kinds of things. And in the current environment, the things that both Joan
and Renee were talking about have become deeply politicized and people like us and many people
in the audience who are doing this kind of work
are now finding themselves the subject of attention rather than the subject of engaging
with people's research. And it makes it really hard for us to really understand what it is that's
happening with respect to platforms, power, and democracy, when the people who were supposed to be doing the verifiably accurate and deep research about it
are being attacked at every turn.
If the idea is what can you verifiably tell us about what these platforms do to our lives
or what happens on these platforms, what is the effect of the politicization?
What does that prevent you from being able to do?
And Renee, maybe I'll start with you.
So there's a few things that that does.
First, in the research that we do, particularly, you know, we were part of the Twitter Moderation
Research Consortium very, very early on, which was a researcher relationship that Twitter
put together.
Most of the work that I did in the context of the relationship with Twitter looked at state
actor takedowns. We looked at influence operations from Egypt and Saudi Arabia targeting Iran and
Iran targeting them back. We looked at the US Pentagon, right? I mean, we did a fairly
comprehensive project using data provided by Twitter to highlight what the United States
government was doing, right? And so this is work that only happens when there is
an open channel of communication. And when you create the perception that an
open channel of communication is some sort of collusion or some sort of
nefarious, you know, cabal to prevent people from speaking freely, and that
narrative propagates not only within the United States, but overseas as well, you create an opportunity by which that relationship becomes chilled. And when state
actors are undertaking these operations, it's not because they are just doing something for fun,
right? They're aiming to achieve a power aim, right? They're aiming to either maintain or
obtain political power. And so when we do work that disrupts that, a lot of the response
is actually to try to discredit the researcher. I can't tell you how many times Sputnik and RT
had written about us over the years, right? We had a whole little wall of press clippings,
the Washington mandarins and disinformation spooks, I think, you know. But then sometimes
it also got very, very, very, very serious, right? There have been multiple occasions where people who have family in China or in India
receive visits from governments, right?
So there are very real costs associated with this.
And so it is in the interest of power to prevent those kinds of relationships
from being transparent and effective.
And that chilling effect has extended into the United States now as well.
Joan, can you address that question of just the chill that that creates
on the kinds of research that you think is necessary
into the platforms and what they do?
Yeah, I mean, I've had a very particular experience
with power and influence, and there's a few things that go on.
You know, when you're, in my case,
a non-tenured academic,
I think you're not tenured, right?
And you are.
Yep.
Yeah, and so tenure is this process
whereby, you know, good for you.
I'm like barely an academic.
That's what's wrong, right?
Yeah, that deserves a round of applause.
That's right.
But as an academic, you don't actually envision needing to use tenure.
You're a math professor.
Math doesn't seem to change that much, right?
But nevertheless, what it does is it protects you from outsiders encroaching on your work
and trying to influence you in different ways by putting lawsuits up against you, which we've seen a few in our day.
But the kind of power that I've experienced is more so the soft power aspects.
It's not a good thing for a senator to call your boss about research that you're doing, right?
No matter who your boss
is and no matter who the senator is. We're speaking hypothetically. Hypothetically. Or
say someone from a company, you know, joins the board of your school and maligns your reputation
at that advisory meeting, hypothetically. You know, the point is that there are all of these soft power moments
involving people in power, people you are investigating and looking into, whose benefits decrease
the more truth you uncover. And I was just talking with a fellow friend and journalist,
and we were talking about like, what is the purpose of a university these
days? Because a university is meant to share the light with the world. Our job is really very
fundamentally about telling the truth and about finding the truth. And finding the truth in places
where people in power don't want you to be looking. And this matters in our field, because our field
confronts those who are
benefiting from media manipulation and disinformation campaigns: platform companies profit,
politicians get what they need out of it, right? And then the general public, there's nobody left
to protect them because the politicians are in on it, the companies are making bank, and then as you go deeper and uncover and uncover more and study for years,
what you end up finding, unfortunately, is that these companies actually knew all along this was happening.
And your research becomes a validator and says, well, we do see this too.
And as a result of that, you take your research to Congress.
I don't know how many times we've both been on the Hill, and they agree with you, and then they do nothing, right?
Which is why, you know, again, I will applaud Canada for sticking to it about this local news
nonsense. It's important to make sure that these companies act in the best interests of the people,
not necessarily the politicians.
But if you do this job well, if you do research on platforms
and you do research about democracy, you're going to find bad actors everywhere.
But that sweet spot of your research being used by policymakers to do something about it, or your journalistic efforts
being used to do something about it, is much harder when the politicians benefit themselves.
Many researchers argue that they need access to data in order to know exactly how these social media companies operate.
It's not just an academic exercise.
It's to know empirically what impact they're having on civil society and democracy.
But just how to get appropriate access is another question entirely.
Some argue that any form of collaboration between the researchers and social media giants automatically taints the research.
But others say that to do this work effectively, you need data.
And to get data, you need to collaborate with the platforms.
One project that Michael Wagner was involved with is a testament to just how thorny these kinds of collaborations can be.
In 2020, he was asked to independently monitor a series of studies on Facebook's role in the
2020 US election, studies that would see Facebook's own analysts collaborating with outside researchers.
Michael, I think it'd be useful for all of us to hear a little bit of the study that you recently were involved in,
because it kind of gives us an idea of how is it possible to study these platforms.
There are different models, and I'd like to discuss the one that you were involved in with Facebook.
You were the rapporteur of the project for three years.
Can you talk about what set that study apart from others looking at platforms?
Sure. So a lot of times researchers study social media platforms and have access to data via the
platform's terms of service. So if you wanted to study Facebook, you could see what people post
on public-facing pages and do research about them, as an example. Or you could collaborate
with a platform and do work and not consent participants,
which has happened at platforms in the past where people didn't know they were a part of the study
and their moods were manipulated or their likelihood of voting was manipulated and those
sorts of things. Or you could engage in a different kind of partnership. And so the project,
I'm studying a research project. So it is a meta take on Meta,
which is the joke I'll just probably have to tell for the rest of my life.
So after the Cambridge Analytica scandal,
some researchers internal to Meta were worried that anything they produced
would not be trusted after problems with another data sharing arrangement
called Social Science One
took forever to get data to scholars and then gave the wrong data to scholars when it was
originally done. But anyway, the project I studied was one where researchers at Facebook
made an argument that said we should be studying the election and we're not going to have
credibility doing it on our own. The 2020 election. The 2020 U.S. election, the U.S. presidential election. And so they worked with researchers,
academics at American institutions around the country. And these folks were going to
collaborate with the Facebook researchers to study Facebook's impact on the 2020 presidential
election. And so the researchers on the outside
academics put up a set of guardrails. We're not going to take money, they said, so that they
could feel as if they were independent from Facebook. The consequence of that trade-off was
that Facebook normally will pay people to become a short-term researcher and then give them access
to the physical data because you have to be an employee of Facebook to have access to private data.
So now there are researchers who are independent
but also don't get the data.
So trade-off number one is we don't actually get to touch the data.
Another guardrail was the outside academics said,
we get to choose the questions we ask
and we get to choose how the results are interpreted.
Facebook agrees to this in principle
but has ways they can stop those
questions from being asked by not revealing all the kind of data they have or letting you know
whether you can get something or not. One thing I heard from a few different folks who worked at
Facebook was if researchers don't ask in the exact precise way, they'll be told we don't have that,
even though the Facebook researcher might know
what the person means. So there's a ton more to say, but I would say they're writing a total of
17 papers, four of which have been published. They study the 2020 election. It's 2023, almost 2024.
So it's taken a really long time. The first papers that came out largely told a story of Facebook's not so bad after all.
Those were the projects that the Facebook researchers prioritized wanting to do the
work on because they were the only ones who had access to the data. And so the big splash was,
this isn't such a big problem. Facebook isn't exacerbating polarization or causing huge
problems with misinformation, those sorts of things. There are more papers to come, which may make Facebook and Instagram look worse.
But the big splash was driven by Facebook. And I don't think that's a model for industry academy
collaboration. I'm wondering, Renee and Joan, whether you have thoughts on this model of
collaborating as a way for the academy to get access to some of the data it needs, to come up with some of the
answers that the rest of us want to get about how these platforms work. Renee? So we do different
types of research. And so when we look at something like state actor campaigns, one of the interesting
dynamics and the reason that we have been collaborative with platforms is that if you're
going to say that you believe that something is a state actor action, you want to have signal where
it is then corroborated
by actors with visibility into metadata,
where they can say these are where the people are logging in from,
these pages are actually connected behind the scenes to these pages,
the metadata and the things that we can't see as outsiders,
but they can, then provides us with an extra degree of signal.
And where this becomes really interesting
is that then you have some visibility into what's happening on platforms where there is no researcher relationship.
Joan? Yeah, I guess my question would be like, are you out here asking, you know, dear Fox,
did you rob the hen house? Right? I don't trust that data given to a network of researchers hasn't thoroughly been vetted by Facebook and cherry-picked.
I won't co-sign that.
And frankly, the failure of Social Science One
was a success for Facebook
because what they did was they actively tied up,
I believe the number was $14 million worth of funding
that didn't get deployed over the year and a half the project existed.
And researchers hired researchers. They built tools to analyze the data. The data never came.
And then when the data did come, a researcher in Italy went through and audited it to see if
it was all there. And they had actually only provided half of the data that they said they would provide.
The tactic of Facebook to, you know, deny, deflect, and delay is about slowing our research field down.
It's about making sure that we as researchers feel the panic and pain of not having the data and not being sure enough about our conclusions
to go forward with our research products.
And I stand firmly on that
because over the years,
what I've experienced is
if you don't play ball with Facebook or Twitter
and you're not interested in partnering with them,
they will talk so much smack about you as a researcher.
They'll call you uncooperative. She's difficult. She doesn't collaborate. But my job isn't to make your
company's product better, right? And that's something that's become very perverse in our
field, that we are hyper-focused on improving the recommendation system of YouTube. No.
Can we talk about, to the extent that you can, talk about that experience, your experience
with social media and with a specific company and their interest or lack of interest in your
research? To the extent that you can. I mean, loose lips sink lawsuits, right? That's where I'm
at. Which is to say that everybody's going to get a chance to read in depth my experience of
leaving the big H and my experience with tech companies momentarily.
A few days after we spoke,
Joan did say more about her experience.
This field is being run by tech oligarchs who believe that academic research
should be a wing of their PR.
In an interview with CNN,
she said she felt that Meta,
Facebook's parent company,
had tried to interfere with her research, and maintained that Harvard valued their donors
more than her academic freedom. I mean, it's gutting. Here I am at Harvard believing that
they would protect the sanctity of the truth and that they were understanding that this work was going to ruffle some feathers.
But what I didn't imagine was that I would need protection from Harvard itself.
After leaving Harvard, Joan Donovan joined Boston University,
where she's an assistant professor in journalism and emerging media studies.
She joined me on stage at McGill University in Montreal with Renee DiResta, who's the research manager at the Stanford Internet Observatory, and Michael Wagner of the University of Wisconsin-Madison School of Journalism and Mass Communication.
You're listening to Ideas,
we're a podcast and a broadcast, heard on CBC Radio 1 in Canada,
on US Public Radio,
across North America on Sirius XM,
in Australia on ABC Radio National,
and around the world at cbc.ca slash ideas.
Find us on the CBC Listen app and wherever you get your podcasts.
I'm Nahlah Ayed.
Revelations about the harmful effects of social media have been emerging for more than a decade now.
Researchers say these platforms are making us lonelier, more polarized, and are undermining our democracies.
Findings like these aren't good for big tech's bottom line.
So maybe it's not totally surprising to learn that there's a growing body of evidence showing that social media companies are trying to constrain the researchers who study them.
A paper by researchers at Harvard and the University of Toronto found that most tenure-track professors
in computer science had received some kind of
funding from the technology industry. But even when there's no money changing hands,
there are other thorny questions about how to do this research effectively and ethically.
That was the central issue I explored with a panel of three researchers
at the Attention Conference held at McGill University in November 2023.
Renee DiResta from Stanford University, Michael Wagner from the University of Wisconsin,
and Joan Donovan of Boston University,
who argues that if the social media platforms are the ones controlling the data that's provided to researchers,
then the data can't be trusted.
What is the alternative to working with these companies in wanting to conduct your own research?
I know you do this already. What's the alternate version of that?
Well, it's the kind of sociology and anthropology and work that journalists have done for years, which is,
if you want to know how people interact with a platform, you get a representative sample,
and you study how they interact with the platform. You know, you don't data scavenge, right? Our
field has become data scavengers. We take what we can get. We have no idea if it's a holistic representative sample of anything,
and we assume it's good data.
But our discipline has been overrun with convenience sampling of Twitter data sets because
they were easier to get than most other data sets.
And I really worry too, this is the last point I'll make about this, which is that tech isn't just the biggest lobby
on the Hill in DC.
It's also one of the growing,
if not the biggest source of funding for academia
and this field in particular.
And we have to wonder about what that means
about setting the research agenda,
deciding that your question wasn't asked in the right way.
So, oh no, we don't have that, right?
It's, I mean, I sound like an absolute lunatic.
I get it.
Actually, you know, if I may interject,
I remember when I was going to university 100 years ago,
the issue was about, you know, pharmaceutical companies funding your research.
I used to be in genetics and that was the question, like, can you really be an independent
researcher if you've got, I don't want to name names, but a company coming in and paying for
your research? Is that a, I mean, is that a current concern? I mean, there are all kinds of people
funding research, whether they're collaborating in the research or not. Does that not worry you as an academic,
all of you. Renee, I'm looking at you. We don't take tech money for this reason, right?
No, but the bigger question. But I'll go with that, actually, because it is a very interesting
question. Because, you know, I remember when I was like a mom activist talking about vaccines
and stuff, the only people who were interested in funding us were pharma companies, actually,
right? But then once you do that, you're a pharma shill. And so you find yourself in the unique situation of, am I
going to be falsely accused of being a pharma shill with no money, or do I become a pharma shill,
right? And that's where I'll go with this also, which is we actually never did take pharma money,
that was the answer. And then honestly, the group dissolved because it had no funding,
and people can't just work for free, particularly moms with little kids. But in this particular capacity,
we don't take tech money, but we do take foundation money or donor money.
And then again, there's the question of,
does it actually impact the research?
Meaning, does the money come with strings?
Or does it optically impact the research?
Meaning, are the people who are angry about your findings
going to discredit you by virtue of the transitive property
of bad people, which is like your donor did X,
ergo you took their money, ergo you too are irrevocably tainted.
And so it is one of these things where I have noticed
in my years, and I'm very new to academia,
I would like the critiques to focus on the work,
and they often do not focus on the work,
but they focus on the people and the donors.
And that has been my experience.
Yeah, but the problem, precisely, is the work: as a scientist,
one of the core values of science is replication.
And when you're getting bespoke data
that no one else can access or replicate,
then you're not doing science.
But Twitter would actually release those data sets publicly. So we would get access to what
was called unhashed data, meaning we could see the usernames. We could see certain things before
they would be made private. And the reason for that was that once we could see the usernames,
we could look for those usernames on other platforms. After that, the platform would
obscure the username with just a hashed identifier, right?
So you could still trace it through the dataset,
but you couldn't go find it elsewhere.
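The pseudonymization scheme described here, replacing each username with a stable hashed identifier so accounts stay linkable within the dataset but can't be looked up on other platforms, can be sketched roughly as follows. This is a hypothetical illustration, not Twitter's actual implementation; the salt, field names, and record layout are all assumptions:

```python
import hashlib

# Hypothetical sketch of the kind of pseudonymization described above:
# replace each raw username with a salted hash so records for the same
# account remain linkable within the released dataset, but the handle
# can't be searched for on other platforms. Not Twitter's actual scheme.
SALT = "dataset-release-salt"  # assumed; a real release would keep this secret

def hash_username(username: str) -> str:
    """Map a username to a stable, opaque identifier."""
    return hashlib.sha256((SALT + username).encode("utf-8")).hexdigest()[:16]

def pseudonymize(records: list[dict]) -> list[dict]:
    """Replace the 'username' field with an opaque 'user_hash' in each record."""
    out = []
    for rec in records:
        rec = dict(rec)  # copy so the original data is untouched
        rec["user_hash"] = hash_username(rec.pop("username"))
        out.append(rec)
    return out

tweets = [
    {"username": "@example_account", "text": "first post"},
    {"username": "@example_account", "text": "second post"},
]
released = pseudonymize(tweets)
# Both records carry the same opaque identifier, so they can still be
# traced through the dataset without exposing the handle itself.
assert released[0]["user_hash"] == released[1]["user_hash"]
assert "username" not in released[0]
```

The trade-off the panelists describe falls out of this design: the hash preserves within-dataset analysis, but deliberately breaks the cross-platform tracing that the earlier unhashed access allowed.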
You know, why did they choose to do it that way?
Funny enough, as we have seen in the Twitter files,
there are decisions that are being made
related to where is that trade-off between privacy
and inadvertently exposing something
where an account might be misattributed.
Or a hostile government goes and downloads
the readily available data set,
and then something goes wrong as a result of that, right?
And so there are a lot of trade-offs.
And, you know, again, there's these questions
all the way down at some point.
So, as members of the public who keep hearing that social media and platforms in general cause society harm,
that they might be polarizing,
that they might be affecting our political landscapes...
How much confidence can we have that the right questions are being asked
about how these platforms affect our society
and affect this shrill conversation that we find ourselves in often?
I think social media is lighter fluid in
some ways, right? But it's also the case that humans have done awful things and have been
deeply divided since there have been humans. Like there was a four years long civil war in the
United States that wasn't started by Facebook, right? You know, there are partisan entrenchments
that lead to violence or other anti-democratic behaviors,
and we have those.
The affordances of these platforms
make some of these things easier.
They make some of them easier to spot.
They make some of them easier to coordinate.
And I think studying platforms in relation to each other,
not just let's study Facebook, let's study Instagram,
let's study TikTok, but let's look at how people actually use these things and how the affordances
of one might lead to what happens on another is one path forward to help us try to understand
how to think about solving some of these larger problems. Yeah, and I think also it plays into
this jurisdictional issue. So for instance, when,
I don't know if you all had this moment up here,
but there was this moment when US politicians were very concerned about white genocide in Africa.
And it was just, it seemed like it came out of nowhere.
And what we looked into is,
well, where is this topic popular?
And it was very popular on Gab,
which was a well-known right-wing platform that led to
some terrorism. The users were sharing a video on YouTube. So when we approached YouTube about the
video and said, hey, you know, it's got 500,000 views or so, but most of those views are coming from
this white right-wing, sometimes extremist site.
And they said, well, that's not our problem, right?
And that's part of the walled garden approach
to all of these platforms
is you can put your awful content up on YouTube
and depending upon if it's making a viral splash or not,
they're gonna decide
if they're gonna apply their rules to it or not.
And what we're facing, and I think Renee is right to say there is something going on with the children and that the kind of content that young people are being exposed to, it's not like when
we were kids and you happened to find a Hustler ripped apart in the woods, right? Pornography is part of their
daily diet online, right? That kind of hyper-sexualization, social comparison, this is
happening over and over and over again. And the design of Instagram itself is made to make
teenagers feel inadequate so that they keep staying on the platform. And there are 42 states
in the U.S. now where attorneys general are suing Meta, because they knew their product design was
harmful to teenagers and they chose to do nothing. And it's not, I'm not making the argument from the 90s that we need to put stickers on CDs, right?
I'm making the argument that this is the product.
This is the product.
And the product could be different.
They experiment, they make the product different.
I mean, it's the biggest tell
if Facebook can get rid of the news in Canada,
why can't they get rid of hate?
Why can't they get rid of the racism?
There's all kinds of things that happen on that website
that once you start to dig into it,
you realize there's no such thing as the dark web.
It's just Facebook.
In one line, why can't they get rid of all that?
They could.
They choose not to because that's what gets you to go back.
Renee, you raised the child exploitation
at the very beginning.
I just, and you ran through
a whole slew of questions
after you raised the issue.
And I'm wondering how close
do you think as an academic
we are to answering those questions?
I mean, are we even part way
to answering the questions
about how it affects children
and how, beyond the anecdotal?
So I think, you know,
there are different types of research.
There's a couple of different varieties
represented up here.
There's people who are looking at,
you know, exposure to pro-anorexia content
or suicide, self-harm,
a lot of things like that.
A lot of our work is focused on more of the actual kind of illegal child content. And
one of the interesting dynamics around that has been that it really does
require human investigators. And you have to resource teams to go and do it. Once you have
some signal, right, into how a network is operating,
what it's doing, why it's doing it, what words it's using, what signals, what means for exchanging
funds, then what you find is that if you're just using basic classifiers that are like looking for
known bad keywords, what they don't recognize is that in an adversarial environment, those keywords
are going to change. But these are the sorts of things where you have to choose to invest the time
and people and teams into looking for things.
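The limitation described here, that a static keyword classifier fails once the vocabulary shifts under adversarial pressure, can be illustrated with a minimal sketch. The keywords and the coded substitute below are invented placeholders, not real coded vocabulary:

```python
# Minimal sketch of why a static keyword blocklist breaks down in an
# adversarial setting: once actors learn what the list catches, they swap
# in coded substitutes the classifier has never seen. All terms here are
# invented placeholders.
BLOCKLIST = {"badword", "forbidden-term"}

def keyword_classifier(text: str) -> bool:
    """Flag a post if it contains any known bad keyword."""
    tokens = text.lower().split()
    return any(tok in BLOCKLIST for tok in tokens)

# Day one: the classifier catches the known vocabulary.
assert keyword_classifier("selling badword here") is True

# After adaptation: the same activity under a new coded spelling sails
# through, which is why the speakers say human investigators have to keep
# re-learning the network's signals rather than relying on fixed lists.
assert keyword_classifier("selling b4dw0rd here") is False
```

This is why, as described above, the signal has to come first from human investigation of how a network actually operates; the classifier only ever encodes yesterday's vocabulary.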
And unfortunately, what happens
is sometimes they just don't.
And so whether that's a staffing cut
or dropping a ball
or responding to something like,
you know, again,
a perception that trust and safety work is the enemy or information integrity and propaganda studies
are just, you know, ways to suppress disfavored points of view.
If they can kind of point to that and say,
we're not going to invest those research teams
into doing that work internally,
then what you see is the kind of stuff that we see on the outside, which should never actually, like it should be stopped immediately internally.
It should never have hit the point where it did. Yeah. And just to, we're running out of time,
but I do want to tell you this, that in leading up to this panel, I've been thinking,
trying to think of any other actor in our societies
that has had so much influence on our societies,
that has been so impervious to public scrutiny.
Perhaps government is the first thing that comes to mind.
But beyond that, I wonder where you think regulation fits into the answer,
where access to information, so you guys can do your jobs, is concerned. Where does the regulation fit in?
I mean, I'm not on the side of the fence that wants more data access. I
actually believe that, as academics and stewards of the public trust, we should be advocating for platform companies to erase data, and not batch it and
tag it and provide it to academics. And I say this knowing that it impoverishes a field of people who
are coming up with very interesting methods. But I also know where that field of data analytics
comes from. It doesn't come from us as academics. It comes from
marketing. It comes from police. It comes from the surveillance state. It doesn't come from
academics being like, I want to know how many people clicked on this cat eating pizza video
and shared it with their uncles, right? So one of the things that I think government should do
is take a strong stance on privacy
and also take a strong stance on monetization of data,
which is to say that if a company decides to monetize data,
then maybe they shouldn't be also in the business of social media
because these are our private networks.
We give over our private pictures of our family because we believe that there is some kind of wizard behind
the curtain ensuring that that kind of data isn't shared beyond who we intend to share it with. But I
also think that government could step in and require and mandate transparent auditing of the quality of
social media platforms. So I do believe in the aggregate we could require some kind of data
transparency, but I am very wary of any program or any mandated data collection that would also
enhance the surveillance capabilities of the state
or the surveillance capabilities of the state qua academics acting as the proxy.
Michael and then Rene. I do think that data access is important. I think balancing access
with questions of privacy is equally important. But I'll also say that I think one thing regulators could do is help protect researchers at the companies
so that they can reveal things that they know to other researchers or to regulators or others.
Can you just expand on that? What would that look like?
So one thing I learned and published about in my
research on this collaboration between Facebook and outside researchers was that
the Facebook researchers care about social science, they care about getting things right,
they want to be seen as serious researchers, and they know that they're not going to be trusted.
And they have a responsibility, many of them feel,
to also protect their company and a culture within their company,
and so they're not going to just give things away
to outside academics who want to come and ask for things.
But some would probably be more helpful
if there was some protection provided to them
if they were to be able to say,
you know, there are things, we did this study, right,
on platform already. And
so here's what we learned. Here's how we did it. You want to come do it again. Why don't we tell
you what we've learned and build from that, rather than the outside academics coming in and saying,
well, we don't want to trust any of that. We can't trust any of that. So it just, it slows things
down. It doesn't let us build on knowledge, which is what, you know, scholars are supposed to be
able to do.
But it is all entrenched and intertwined in these issues of a lack of trust, many well-earned, that cause problems.
Renee?
One of the challenges that we have in the United States right now, candidly, is that Congress actually just is too gridlocked to regulate.
And so nothing has been accomplished at a federal level in the United States.
And so we get piecemeal regulations. You can see this if you go to try to remove your information from a data broker. Particularly,
you know, coming from California, we have certain rights that we can request under
California law. I think maybe Virginia, perhaps Colorado, there's like maybe five
states that are listed where you can actually opt out, and then you have a
cause of action if they put you back up, right, if they delete you for three months and then throw you back up there.
Right now, we just have this very kind of piecemeal, you know, set of things that are happening.
And the kind of old, you know, laboratories of democracy hope in the United States was that things would pass at the state level and inspire federal action.
And we're a bit gridlocked on that front.
Yeah.
Just as a final thing from me, I'm curious if any of you want to take this question: what is the
public's role in this conversation? Where do all of us fit into the conversation
about what you guys can or cannot do, with whom?
Yeah, I mean, I think first, most people do not
wake up, sit up in bed, clap their hands together, and say, how can I hold my government accountable
today? Like, that is just not how most of us live.
Most of us don't have time to do that.
Most of us don't have the knowledge to do that across a wide range of issues.
And part of what's happening now is that there are so many platforms and there is so much
news.
And we see this in our data time and time again.
Like there's just a steady decline in people's willingness to seek information and a steady
increase in news fatigue.
People are just tired. It's overwhelming. It's cacophonous. It's very easy to conclude a plague
on both their houses in a two-party system or something like that. And we need folks to be able
to participate a little bit more and be willing to engage in a little bit of costly talk and say,
yeah, our side was wrong on this one. I think those things are behaviors citizens could engage in that can help with the margins. But participating is
probably the most important thing. Yeah, I think when it comes to the public,
the one way not to get hoaxed very easily is to buy a newspaper, right? Straight up,
just start reading the news. You know, the one thing I love about the Irish is they didn't want
to read the news every day, so we invented the Sunday paper for them, I should say for us,
because, you know, we wanted all our news, but in one day, that's it, right? And so I think that
there has to be ways in which people are interacting with public interest information,
and if your business is providing information to the public, then you have to
provide a conduit to reliable fact-based reporting. And what's also interesting about it is that
I think the public often doesn't know what it's got till it's gone, right? You know, they pave
paradise, they put in a parking lot. Like, this might be where we end up, because it's not the case that those posts disappear and nothing
else fills the void, right? It gets filled with other kinds of, you know, novel and outrageous
contents. And so I think, you know, the public has a responsibility to push the government to behave
in a way that doesn't just look at the effects of lobbying, but looks at the general needs of a public, especially
during a moment like the pandemic, where we really did need international news on the daily for us to
live our lives. We are setting ourselves up for an even worse outcome during the next pandemic or endemic or
massive international crisis if we don't build those bonds and fund news. And so I think the
public has a toll to pay in some ways. But I think part of it is a lot of people feel like, because
they pay taxes, they've done their part.
And maybe that's true.
Maybe the government needs to open up more funding opportunities for media, especially media that is going to be facts driven.
Great. Thank you.
I'd like to open it up to anyone who would like to ask a question.
I think there's a microphone back there.
So just, yeah.
Thank you so much.
One audience member wanted to know how the conversation we just heard
fits into our broader understanding of social media's impact on our democracy.
Yeah, I would say that's almost all I think about. I'm super concerned with, can we achieve, at first, the bare minimum requirements of a democracy?
Right? So, an election is freely and fairly held, votes are counted, the winner takes office, right,
peacefully, right? Like, that's a minimum bar, right? And then the next maybe set of bars are:
are the choices people make, with the limited information they have, relatively consistent with the choices they would make if they had full information?
So how much good little bits of information are they getting when they're making decisions?
And are there penalties for lying?
And by penalties for lying, I don't mean, you know, do we censor somebody?
I mean, are they voted out of office, for an example, if they're shown to be an incompetent or dishonest lawmaker?
And so thinking about how the way that individual engagement
in the overall information ecology,
which is highly now tied to social media,
not just through our use of it, but through link sharing,
which I think Joan had referenced before as a thing,
there's all these different kinds of ways that information
can influence
our ability to make the bare minimum
fundamental choices, let alone
all of the other things we might be really interested in
trying to figure out of what might
serve to depolarize
folks, or at the very least result
in people having more accurate perceptions
about their political opponents, and believing
that their political opponents ought to
have an avenue to participate in the system, right? Democracy is for losers, by which I mean,
when you lose an election, there's another one, right? If you lose, you have another chance.
And that's the thing we need to really be keeping our eye on: is the next chance as free and fair
as the prior chance? I mean, I guess I would just say that my newer
research is very concerned with state actors on platforms, sort of a return to the old,
you know, which Renee's work has been very influential in. But looking at it from the
perspective of we've gone through this moment where platform companies were trying and didn't get enough
pats on the back and now have rolled back all of their integrity initiatives or trust
and safety initiatives, which has left open a moment like what we saw with the Ukrainian war and then now most recently with Israel and Palestine and Hamas, is that
state actors are more active than ever on these platforms, and we don't know the degree to which
these platform companies are even interacting with or trying to mitigate what's happening there. And that affects all of us. That kind of
misinformation is warfare. If you can confuse or dilute or polarize other populations, you can,
in many ways, win the information war. You know, I was just as surprised as the rest of you that I
woke up and learned that Ukraine had been besieged and
that Russia was aggressively attacking. I was also really surprised to see Western support for
Ukraine. And I think social media, for as much as I rag on it, I also think it has this potential
in a very real way to humanize populations that we otherwise
would not feel solidarities with, and I would like to see more of that, more of a
global culture online. But where we're headed with a lot of the research that
happens in our field leads people to conclude that maybe we need digital
borders, and maybe we need to shut down the internet. Maybe the internet only
should be in each country, and we shouldn't have this free flow of information across countries, because that
would be the way to protect against cyber troops and foreign invasion through our information
ecosystem. And so I think it's really important that we have all kinds of different researchers
looking into and navigating those strategies around national security so that the public can flourish.
You know, the cost of information is so low,
but the price we pay in terms of a democracy is very high.
Joan Donovan is an assistant professor of journalism
and emerging media studies at Boston University.
René DiResta is the research manager at the Stanford Internet Observatory.
And Michael Wagner is a professor at the University of Wisconsin-Madison School of Journalism and Mass Communication.
The panel took place at the first ever Attention Conference in Montreal.
This episode was produced by Greg Kelly with additional help from Mitchell Stewart.
Special thanks to the Canadian Digital Media Research Network and the Media Ecosystem Observatory.
Lisa Ayuso is the web producer for Ideas.
Technical production, Danielle Duval.
Nikola Lukšić is the Senior Producer.
The Executive Producer of Ideas is Greg Kelly, and I'm Nala Ayyad. For more CBC Podcasts, go to cbc.ca slash podcasts.