Your Undivided Attention - TikTok’s Transparency Problem
Episode Date: March 2, 2023

A few months ago on Your Undivided Attention, we released a Spotlight episode on TikTok's national security risks. Since then, we've learned more about the dangers of the China-owned company: we've seen evidence of TikTok spying on US journalists, and proof of hidden state media accounts used to influence US elections. We've seen Congress ban TikTok on most government-issued devices, and more than half of US states have done the same, along with dozens of US universities that are banning TikTok access from university wifi networks. More people in Western governments and media are saying that they used to believe TikTok was an overblown threat. As we've seen more evidence of national security risks play out, there's even talk of banning TikTok itself in certain countries. But is that the best solution? If we opt for a ban, how do we, as open societies, fight accusations of authoritarianism?

On this episode of Your Undivided Attention, we do a deep dive into these questions with Marc Faddoul. He's the co-director of Tracking Exposed, a nonprofit investigating the influence of social media algorithms on our lives. His work has shown how TikTok tweaks its algorithm to maximize partisan engagement in specific national elections, and how it bans international news in countries like Russia that are fighting propaganda battles inside their own borders. In other words, we don't all get the same TikTok, because different geopolitical interests might guide which TikTok you see. That is a kind of soft power that TikTok operates on a global scale, and it doesn't get talked about often enough. We hope this episode leaves you with a lot to think about in terms of what the risks of TikTok are, how it's operating geopolitically, and what we can do about it.

RECOMMENDED MEDIA

Tracking Exposed Special Report: TikTok Content Restriction in Russia
How has the Russian invasion of Ukraine affected the content that TikTok users see in Russia? [Part 1 of Tracking Exposed series]

Tracking Exposed Special Report: Content Restrictions on TikTok in Russia Following the Ukrainian War
How are TikTok's policy decisions affecting pro-war and anti-war content in Russia? [Part 2 of Tracking Exposed series]

Tracking Exposed Special Report: French Elections 2022
The visibility of French candidates on TikTok and YouTube search engines

The Democratic Surround by Fred Turner
A dazzling cultural history that demonstrates how American intellectuals, artists, and designers from the 1930s-1960s imagined new kinds of collective events intended to promote a powerful experience of American democracy in action

RECOMMENDED YUA EPISODES

When Media Was for You and Me with Fred Turner
Addressing the TikTok Threat
A Fresh Take on Tech in China with Rui Ma and Duncan Clark

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Transcript
A few months ago on this show, we did a short episode on TikTok's national security risks.
And since then, we've gotten even more information about the dangers of the China-owned company.
We've seen evidence of TikTok spying on U.S. journalists.
We've seen evidence of TikTok hidden state media accounts to influence U.S. elections.
We've seen Congress recently ban TikTok on most government-issued devices.
At least 31 states have done the same thing, along with dozens of U.S. universities who are banning TikTok access from university Wi-Fi
networks. And we've seen FBI Director Christopher Wray noting that by Chinese law,
ByteDance, which is the company that owns TikTok, is obligated to honor the wishes of the Chinese
government. Under Chinese law, Chinese companies are required to essentially, and I'm going
to shorthand here, basically do whatever the Chinese government wants them to in terms of sharing
information or serving as a tool of the Chinese government. We also called early for the need
to ban TikTok, not as a total solution to the problems of the attention economy that
we've discussed on this show, but as a first and important step to deal with an honest threat.
And we were accused of being xenophobic or participating in a new red scare against Chinese
apps, but we're already seeing more people in the government and media saying they used to
believe that it was sort of an overblown threat. But as we've seen more evidence of these
national security risks playing out, we have to take these questions seriously. But how do we as
open societies who might, in this case, ban TikTok, fight accusations of authoritarianism?
Are we becoming no better than China?
Today on Your Undivided Attention,
we're going to do a deep dive on that question with Marc Faddoul.
He's the co-director of Tracking Exposed,
which is a non-profit investigating the influence
of social media algorithms in our lives.
And his work has shown how TikTok tweaks its algorithm
to maximize partisan engagement in specific national elections
and how it bans international news
in countries like Russia that are fighting propaganda battle
inside their own borders.
In other words, we don't all get the same TikTok.
because there are different geopolitical interests
that might guide which TikTok we see.
And just a point of clarification before we get started,
I often refer to the influence of the Chinese Communist Party in this episode.
And obviously, ByteDance is an independent company,
but there are laws in China that obligate its companies
to actually follow the whims of the Chinese Communist Party.
So when I talk about Chinese influence in this conversation,
that's what I'm referring to.
And with that, here we go.
Marc, welcome to Your Undivided Attention.
Hi, Tristan. Thank you for having me here.
I've been in many conversations with people in Washington, D.C., and others that are very concerned about TikTok.
And there really is this question in my mind of what to do about it.
Now, we make all these claims about the potential power for the Chinese Communist Party
to be able to have an influence over turning the dials up or down.
And that's an incredible amount of soft power, to be able to control not hard power,
like kinetic power, military power, but the soft power, the moral consensus, the values,
what people think is right or wrong in the world.
And before we get into that, I want to just actually make sure we're setting the stage
for listeners about what makes TikTok different or unique in terms of its design compared
to other social platforms.
Because people might say, you know, isn't Instagram basically just the same kind of product
as TikTok?
So could you explain some of the design differences?
Yeah, of course.
I mean, TikTok has specificities in terms of its technical design, but also, you know,
its usability features. And so first, it is an app where the algorithm is particularly
prominent. The content is designed to be consumed almost exclusively based on algorithmic
recommendations, and so the user will only see one video at a time, and so there is full
attention on a single piece of content, unlike on other platforms, for example. Also, the
videos tend to be very short, which encourages fast swiping, almost like you would on Tinder.
And this, on top of the massive user data harvesting,
gives a lot of high-quality training data
regarding what the user likes or does not like.
And this data is extremely precious
to then train the algorithm
to identify your unique, specific interests
at an unprecedented speed and accuracy.
So where, for example, in 10 minutes of usage,
YouTube might get one data point
on a video that you liked or disliked,
TikTok will get 100,
because you will swipe through 100 videos in 10 minutes.
And so basically, the whole platform is designed for and around its recommendation algorithm.
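To make that signal-rate difference concrete, here is a minimal back-of-the-envelope sketch in Python; the session length and per-video times are illustrative numbers taken from this conversation, not measured platform data.

```python
# A rough sketch of implicit-feedback rates, using the illustrative numbers
# from the conversation (one ~10-minute video vs. ~6-second swipes).

def signals_per_session(session_minutes: float, seconds_per_item: float) -> int:
    """Every item seen yields at least one implicit signal: watch time,
    a rewatch, a like, or a fast swipe-away."""
    return int(session_minutes * 60 / seconds_per_item)

print(signals_per_session(10, 600))  # click-driven platform: ~1 signal
print(signals_per_session(10, 6))    # swipe-driven feed: ~100 signals
```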
Then another point which makes TikTok different from other platforms is that it's a platform
where the direct network of friends of the user has little relevance.
So the app doesn't serve content based on what your friends are watching.
It will serve you anything that will make you engage.
So this also means that anyone can become viral.
So virality can be very fast to acquire and also short-lived sometimes,
which can be also challenging for content creators if they want to make a living out of it.
And the last important difference is its marketing positioning.
It is really designed for a younger audience.
TikTok originally said it was just a dancing or entertainment app.
And this is also a dangerous positioning because they say so to avoid scrutiny.
But in fact, it's much wider than this, and nearly any topic is covered by a niche community on the platform.
So you said a couple important things that I just want to make sure listeners get,
which is on Instagram or Twitter or Facebook,
I have to choose which friends I'm following.
I have to choose which accounts I'm going to get content from.
So I might go to Instagram, I follow 100 accounts,
and then those 100 accounts are what are used to populate my feed.
Where on TikTok, it's different.
It's sourcing from the global supply of billions of videos
that might have been uploaded today.
And then of those, what are the maximally addictive
or entertaining videos.
And you said another thing, which is that with the surface area of signals it can
train from to figure out what is the most addicting, it is picking up so many more of those
signals in a single session.
And it's important, I think, when people think about TikTok's success, that one of the
reasons it's likely out-competed the other apps is because it's gathering so much training
data so quickly and is able to get better and better at predicting and anticipating what
might be the best thing to keep you watching much more so than Instagram.
Absolutely.
So given all these kinds of insane shenanigans and the ridiculousness that we'd ever allow this to happen, Marc, are you actually on TikTok? How do you relate to these things yourself?
I am, but obviously I'm there only for work.
Right.
Just kidding.
But something that definitely struck me when I joined the app is how fast it was able to detect my interest in the TikTok algorithm itself, which is obviously one of the things I came in being interested in.
But I think it's a pretty niche interest.
And within an hour of using the app,
I was starting to be targeted with specific content creators
talking about how the algorithm is promoting them
or demoting them when they talk about specific topics.
And this was very fascinating to me
how the algorithm was able to detect this interest
without taking any explicit input on it.
It was just maybe because...
You never searched for the word algorithm
or recommendations,
TikTok recommendations,
on TikTok. No, it's probably just that one of the first 1,000 videos the algorithm showed me
had this specific topic. And obviously I probably not only watched this video, but watched it
maybe twice, or maybe liked it. And then suddenly I got another one. And that's all it takes
to detect a niche interest. Yeah, I mean, this is funny because it takes just such a small
amount of information for it to know exactly who you are. But what you're saying is it could predict
this very niche interest in the algorithm itself. And it's kind of fun
and ironic that the algorithm is showing you things about itself in this very meta way.
Exactly. Very meta.
And I should clarify that I am not on TikTok. I only downloaded it in advance of this interview
and saw that other people who had me in their contacts got a prompt saying, hey, this person
followed you on TikTok, don't you want to follow them back? Which, by the way, for listeners
is just to remind them, this is one of the easiest persuasive psychology techniques is the sense
of social reciprocity. Oh, so-and-so followed you. Now they're sitting there waiting,
being like, oh, why haven't you followed me back?
These are the kinds of things that you can play with.
And it has nothing to do even with algorithms.
It's one of the things that I think is overlooked
is that even if we had perfect controls on the algorithm,
I could still change the structure of the design of TikTok,
the social pressure design, the way that notifications appear.
And I would say that TikTok would be a national security risk
even if it was just able to alter the design of itself.
And so I want to sort of just emphasize to listeners
the degrees of freedom that I have as a persuasive technology designer
to influence your population with just very subtle controls
so long as I have your entire population on my system.
Yeah, no, I mean, one of the other subtle dials that they have,
and that is very specific to the design of TikTok, is the music.
It's a huge part of what content goes viral
and what content gets shared on TikTok.
And people have a very strong emotional attachment to music.
It brings back memories, it triggers emotions.
And so that's another way in which persuasive
design can be tampered with, specifically on TikTok.
So let's get into what happens in the event of war.
How could TikTok do whatever it wants in sort of shaping public perceptions?
You did a whole report on the Russian invasion of Ukraine and how that played out over TikTok in 2022.
Do you want to talk about that report?
So basically when the war in Ukraine unfolded, we started monitoring what content was being
recommended to users in Ukraine and in Russia on the platform.
And we noticed that overnight, TikTok had created basically a separate version of TikTok just
for Russia. So it had put in place an upload ban, meaning that people could not post new
content at all, and in particular they could not post content regarding the war that was
unfolding in front of them. And also, all international content was made inaccessible.
And so they made this change without announcing it
and only acknowledged it after the publication of a report exposing it.
So first of all, it showed how the platform was acting in an opaque way regarding its policies.
But also the consequence of these policies is that they created basically a bubble of Russian-only content.
And what it meant for the ordinary Russian user is that they couldn't see that the whole world was basically against the war
and that it was stifling the scale of resistance.
And this is particularly critical
because at the very beginning of the war,
TikTok was really seen as a threat for the Kremlin,
because the app is widely used in Russia,
and the overwhelming majority of war-related content
was critical of the invasion at the beginning.
But the policy change implemented by TikTok
completely inverted this tendency.
And so after the upload ban was put in place,
we noticed that there was also a loophole
that still allowed certain accounts to continue posting from Russia,
and that's what we exposed in this second report.
And it turned out that the overwhelming majority of content
that would still go through despite the upload ban
were pro-Kremlin content.
And so the result was that for Russian users
who were still seeing war-related content on TikTok,
it would suddenly seem as if everyone was supporting the conflict.
And last, in our third report,
we exposed that some of the content
which TikTok claimed to be banned
was in fact still being recommended by the For You algorithm.
And so this was an unprecedented phenomenon
which we called shadow promotion
as opposed to shadow banning,
which is when a piece of content
which appears to be available on the platform
in fact is banned from being algorithmically recommended.
Here shadow promotion is the opposite
where a piece of content which appears to be banned
on the platform is in fact still being promoted by the algorithm.
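To pin down the terminology, here is a tiny sketch of the four possible states of a piece of content; `visible` and `recommended` are hypothetical observations an auditor would collect, not fields from any TikTok API.

```python
# A tiny sketch of the vocabulary: `visible` means the video can be found
# directly (creator page, search); `recommended` means test accounts see it
# in the For You feed. Both are auditor observations, not platform fields.

def classify(visible: bool, recommended: bool) -> str:
    if visible and recommended:
        return "normal"
    if visible:
        return "shadow banned"     # looks available, never recommended
    if recommended:
        return "shadow promoted"   # looks banned, still pushed by the feed
    return "removed"

print(classify(visible=False, recommended=True))  # shadow promoted
```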
So I don't understand what kind of content we're talking about that is banned,
and why is it a good or bad thing that it's still being promoted?
Could you clarify that?
I think the main problem is a matter of opacity
where we don't know what gets promoted or demoted
and we don't know what could be demoted artificially
or intentionally by TikTok in the event of a war.
So this is really the critical piece.
Shadow banning is in fact widely common
across all social media platforms.
It is one of the moderation techniques
that have been put in place
to deal with borderline content.
So when moderators review content that is not really bad enough
to be completely censored or blocked from the platform,
they decide to keep it online, but to limit its reach.
So the main reason why we believe that this international content suddenly became unavailable in Russia
is that a few days earlier, the Kremlin had passed a new fake news law,
which basically forbade any mention of the word war in any media.
And it was basically impossible to manually moderate every individual piece of content.
So TikTok went for a very coarse approach, which was to say,
let's ban international content completely across the whole platform in Russia.
Accepting the demands of the Kremlin when it comes to content moderation
was sort of the only way for TikTok to remain available in Russia,
which is a very important market for them.
So the BBC, for example, you would go on the page of the BBC in Russia,
and there would be no content at all.
But that content might still show up through algorithmic recommendation.
And so this is once again highlighting the need for greater transparency
on what content gets promoted and more generally what content is available on the platform.
I mean, this is relevant for current conversations in Washington, D.C.,
where supposedly there's a deal being brokered where, hey, you know, the U.S. will allow TikTok to
continue operating in the U.S. so long as we get some sense of the ability to audit that things are
what they seem. But what you're saying here is that maybe the U.S. cuts this deal with TikTok.
It looks by all visible appearances that the bad content that maybe the U.S. doesn't want
on the platform, or certain channels, are not visible, but even so, TikTok could still
be recommending all this content, or shadow promoting it, in your words, and we wouldn't have a means
of honestly detecting that unless there's people like you out there in the world and there's only
one or a handful of you trying to scan basically billions of videos in hundreds of countries
and hundreds of languages. How could we possibly know what's really going on? And then really
there's this bigger question of like what amount of transparency is possible because there's just
going to be way more issues of concern than there are human beings to look at all these different
videos. Is there really a possibility for transparency that's meaningful when there just is
vastly more content moving through the system?
Moreover, you also said that it only took one day for TikTok to create its own custom version of TikTok for Russia that had a different recommendation system.
I think this is very important: if you can make that change in one day, imagine China invaded Taiwan tomorrow.
In one day, TikTok could create a custom version of TikTok for the U.S. that said we're going to recommend a totally different set of content that's all about why the U.S. had this colonial background and we shouldn't really get involved in other people's business and here's all the wars that we shouldn't have gotten into.
and here's why we shouldn't get into a war with China about Taiwan
and why Taiwan is always a part of China.
That's how quickly you can change this instrument of soft power.
You can change it and tweak it
to reflect the goals that you have.
Absolutely. I think here,
what you're pointing at again is the issue of the opacity of the system,
and I think there's great hope right now
that regulation will sort of solve this issue
by forcing platforms to open their data
and provide researchers with data access
and interfaces to investigate the systems.
Now, the big question is, can we actually trust these interfaces
that are put in place by the platform?
I think we have a serious integrity question to ask ourselves.
I'll give an example which I think is quite representative to me,
which is Facebook, which had put in place CrowdTangle,
which is basically a platform for researchers and journalists
to access a greater amount of data than a normal user would be able
to. And this was put in place by Facebook willingly, it was not coerced by any legal
framework, and it was widely used for scrutiny.
During the Capitol riot on January 6th, following the 2020 election in the US, a lot of data
went missing from CrowdTangle.
And that was what the platform claimed to be a bug, but it was quite a timely bug, because
just at the moment when platform scrutiny was most important, suddenly
this data access went missing.
And now the question is, to take your example again, Tristan, which is great:
if China were to invade Taiwan,
would we be confident to rely on TikTok's official data access
to audit which content is being promoted or demoted on the topic?
And I believe that we should not.
And that's why the method and the paradigm that we're using at Tracking Exposed
is adversarial audits.
So what we mean by adversarial audits is that
we are collecting the data in a way that is completely independent of the platform.
So we are basically running bot accounts and then scraping the data and automating this
so that we can emulate different users and see how the algorithm reacts in response.
And so I believe that this type of adversarial audit from independent actors like us will
remain necessary, even though the new legislative framework will now require platforms
to put in place official data access and APIs,
which I believe is a great development,
but will not solve the issue completely.
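To make the adversarial-audit loop concrete, here is a minimal, self-contained Python sketch. The `fetch_feed` stub is a hypothetical stand-in for the real bot-account and scraping layer Tracking Exposed describes; it simulates topic exposure with random data so the comparison logic is visible.

```python
# A minimal sketch of an adversarial audit, independent of any platform API.
# In a real audit, fetch_feed would drive an emulated account (region would
# select the proxy / device locale) and scrape the For You feed; here it is
# stubbed with random data purely for illustration.
import random
from collections import Counter

def fetch_feed(region: str, swipes: int) -> list[str]:
    """Stub: return the topic of each video the feed serves a fresh account."""
    topics = ["pro-war", "anti-war", "dance", "international-news"]
    return random.choices(topics, k=swipes)

def audit(region: str, personas: int = 20, swipes: int = 100) -> Counter:
    """Emulate several fresh users in one region and tally what they are shown."""
    exposure = Counter()
    for _ in range(personas):
        exposure.update(fetch_feed(region, swipes))
    return exposure

# Comparing tallies across regions, or before and after a policy change,
# is the kind of signal that surfaced the Russian upload ban.
print(audit("RU"))
```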
Tell us about the French elections, which your group monitored,
and how TikTok has been impacting the French elections and maybe elections more broadly,
and some of the risk areas that show up there.
So the story about the French election is that we started monitoring
what content was being promoted during
the presidential campaign in 2022.
And election-related content was extremely widespread on TikTok
during the whole presidential campaign.
And though the platform's narrative has always been
that TikTok is not a political platform,
we were forced to observe and to measure
that indeed it was highly political.
To put a number, we estimated that there was at least
one billion views of election-related
content in France, which has only 65 million inhabitants. So indeed, TikTok is political,
whether they like it or not. And I believe that they sort of like it, but pretend that they don't
want to be political. Right. And importantly, they want to make people believe, we're just a
dancing platform. Look at these kids that are dancing in these funny videos and it's totally
innocuous. I mean, this is a Trojan horse. It's a TrojanTok, not TikTok. It
looks innocuous, but it's actually changing and shaping the basis of your election.
Absolutely. Just like any other social media platform does, whether they like it or not. And if we take the difference with YouTube, for example, which we also analyzed during the French election, it's that YouTube at least acknowledges that they are playing a role in the dissemination of political information. And so YouTube's strategy is to boost authoritative sources on sensitive issues, including election-related content. So this is not a perfect strategy, but it's
better than nothing, definitely, and at least
they acknowledge their role. On the
other hand, TikTok remains a completely
free market for
political information, which
is not to be confused with a level playing
field. And so indeed,
we know that when an engagement-driven algorithm
is fed with political
speech, it tends to amplify
the most polarizing and divisive
content, because you will
watch it, you will comment on it, whether
you love it or whether you hate it.
And so this is a phenomenon that has
already been discussed many times on this podcast and that has been widely known for years.
But since no safeguards were put in place by TikTok during the election, that is exactly what
happened. And in that case, the most polarizing candidate was Éric Zemmour, who was
really on the far right, with sort of populist and xenophobic arguments. And Zemmour received
by far the most visibility proportionally. At some point, he concentrated more than 30% of the
overall engagement on TikTok, even though he only collected 7% of the votes.
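As a quick arithmetic check on that disproportion, using only the two figures Marc cites:

```python
# Share of TikTok engagement vs. share of the vote,
# using the figures cited in the conversation.
engagement_share = 0.30
vote_share = 0.07
print(f"visibility-to-vote ratio: {engagement_share / vote_share:.1f}x")  # ~4.3x
```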
Well, that's profound. So he was getting 30% of the engagement, but only got 7% of the
votes. So I think that's telling. Just curious, what do you make of the fact that there's
a billion views of French election content when there's only 65 million citizens? Like,
what's going on there? Well, the optimistic answer is that youngsters are very interested in
politics and very engaged, and they'll all go to vote. I think another interpretation
is that in particular
Zemmour, who was generating
a lot of views, was basically
almost used as a meme.
And a lot of viral content on the
platform was political
because it was very polarizing,
and because obviously that's also how you can
shape popular opinion
regarding a candidate:
by making fun of them,
or by selecting snippets of
stuff they have been saying.
So I'm going to
break away from my conversation with Marc for a moment to share a few thoughts.
Now, one thing I don't talk a lot about on this podcast is that behind the scenes, you know,
we do have conversations with people in the U.S. national security apparatus and other countries
who are very concerned about this.
And one thing I've noticed that's not covered in these private national security conversations
is it's not just about whether one country, like the United States, bans TikTok.
Because even if the U.S. were to say ban TikTok right now, which would be a very unlikely,
you know, extreme action from most people's perspective,
It wouldn't stop TikTok from still controlling the moral consensus of what the rest of the world thinks.
What if in the future, the most popular app for social media is a Russian app?
How do we want to deal with apps that come to dominate critical infrastructure that are based in countries of concern?
And we had this precedent set with Huawei, where the U.S. actually stopped the rollout of Huawei 5G infrastructure in many countries throughout Europe.
We already have rules in telecommunications networks.
You know, you wouldn't allow Russia or China to install critical equipment
inside of your networking infrastructure
because you would see that as a critical
infrastructure. And social media
is the 21st century telecommunications
infrastructure. And I think we need to start seeing it that way.
Okay, so generally
at this part of the show, we talk about solutions,
but let's go meta here for a second
because we've talked about the idea that regulators
could compel TikTok to become more transparent
about what's getting amplified by the algorithm
and what information is on the app overall.
I would push back and be like,
just transparency
on a cancer cell
doesn't stop the cancer cell
from being a cancer cell.
I'm looking for solutions here.
I'm just curious
when you're in there with regulators
and you're thinking about this conversation
what do you think the solutions are?
Yeah, I mean, I completely agree
that transparency is just a first step
to sort of expose a problem,
but then the second step should be
to propose better alternatives.
And unfortunately,
these alternatives
do not really exist at this point.
We have some things
that are slightly better, like Mastodon,
but they're not that usable or that widespread.
But I believe we really need to shift the paradigm
to completely different models
which are not driven by profit.
So I think in particular,
critical digital infrastructure,
such as recommender systems,
should become public goods,
or at least not be purely profit-driven.
And so sorry here to use an evergreen example
to make the point,
but I think it's relevant here.
Wikipedia would never be such an authoritative, globally accepted source of truth if it was
a for-profit. And it's also user-driven. And that's really the model that I believe in here
to offer a better alternative. I believe in interoperable platforms where users are empowered
to choose and control their algorithm based on their own interests and not based on the interest
of the platform, which is always going to be an engagement-driven model. So we have built a proof
of concept for such an algorithmic marketplace
for YouTube, which is called
YouChoose.AI. It's still
a work in progress, but we're sort of
trying to show that it's
possible to think of
alternative systems, where the
recommender system is in fact
working for the user and not
against it.
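To illustrate what "choosing your own algorithm" could mean in practice, here is a minimal sketch of a marketplace of swappable rankers; the `Video` fields and both ranking functions are hypothetical illustrations, not YouChoose.AI's actual code.

```python
# A toy algorithmic marketplace: the platform exposes a candidate pool and
# the user, not the platform, picks the function that orders it.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_time: float  # the platform's engagement estimate
    topic_novelty: float         # distance from what the user has already seen

def rank_for_engagement(pool: list[Video]) -> list[Video]:
    return sorted(pool, key=lambda v: v.predicted_watch_time, reverse=True)

def rank_for_exploration(pool: list[Video]) -> list[Video]:
    return sorted(pool, key=lambda v: v.topic_novelty, reverse=True)

MARKETPLACE = {"engagement": rank_for_engagement, "exploration": rank_for_exploration}

def feed(pool: list[Video], user_choice: str) -> list[Video]:
    """Order the same candidate pool with whichever ranker the user selected."""
    return MARKETPLACE[user_choice](pool)
```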
So first, I totally agree with you on
the Wikipedia point, that the reason
we trust Wikipedia is because it operates
in the public interest and not for some private
interest, and it probably couldn't have worked
any other way. One thing I might disagree with you on is on the idea of being able to pick your
own recommendation system. I understand obviously the point of that. I'm not against that.
But I think that one of the problems that we also want to solve here is the breakdown of shared
reality. That if we have systems that are maximizing for just personalized benefit,
I think that there needs to be kind of a portfolio of media. And just like we have kind of a fairness
doctrine for somewhat equal airtime, fair airtime, between different politicians who are making
their points or different ideas making their points on a debate stage about a topic,
I feel like there needs to be a kind of fairness between a shared reality and individual realities.
It's not that we want just a shared reality, some kind of communist top-down, you know,
we all see the same thing.
But if we don't have some basis for knowing what's going on and orienting even personalized
sources of sense making towards synthesis, you know, knowing what's true
between our ideas, rather than just getting confirmation bias on a stack of
recommended stuff that just confirms what my tribe already believes. I think that we need to be
careful when we come up with marketplaces where people can choose their own algorithm, that
those algorithms have a kind of design code, just like we have safety codes for buildings or earthquake
codes or fire codes. I think there's sort of like a design code, which would want to be
democratically run through a process, but part of what it would include is the notion of how does
it enable a shared reality? And is there a way that can be designed with incentives at least that
orient people to better collaborative sense making and better coordination? I'm curious
what you think of that. No, absolutely. I mean, I 100% agree. And I think that the same way that
content moderation has always been necessary on online platforms, we're also going to need some
form of moderation mechanism on an algorithmic marketplace, including, for example, standards
and code of practice on how it should be built, like, requirements in terms of transparency,
that sort of thing. I do think, though, that there is a limit to how much paternalism can be
put inside of the design
of an information system
because people will eventually
seek out the information that they're looking
for. So I think that
what's important here are the default settings.
The default settings
should indeed entice
people to have a
diverse point of view.
Right now, the default
setting is that we are
served mind-numbing content
which is not particularly
diverse but also not particularly
interesting. Yeah, I think that what this triggers in most people, when they hear this,
is, well, who are you to decide what is mind enhancing versus mind numbing? And the problem is,
if we don't decide, we're going to get mind numbing, we're going to get breakdown, we're going
to get decoherence, conflict, civil war type recommendations because those are the ones
that are the most engaging. And so the question is, by what means would we start to feel
comfortable, putting our hand on the steering wheel? And actually, I would refer listeners back to
our episode with Fred Turner, who's a professor at Stanford, and wrote a book called The Democratic
Surround, in which there was a notion of how do we make democratically principled media, meaning media
that increases tolerance, that cultivates in people the virtues of citizenship and the ability
to be epistemically humble, meaning to be humble to what I know, that I don't know everything,
that I can always learn something, and that to be seeking curiosity, to be seeking perspective
expansion. And I would just say, like, imagine a default setting, which I totally
agree with, where that's what we should be caring about: what is the default, and a default
that orients us towards perspective expansion rather than orients us towards separation,
division, certainty. And for any given worldview, there's always a more complex worldview
that has even more considerations that we could be exposed to. And I think it would feel aesthetically
more beautiful and interesting to live in a world where we're constantly humbled by seeing
the world in a more complex way. And those videos online and those posts online, they do exist,
but they're not what's rewarded by today's recommendation algorithms.
One of my concerns getting to the next part of the conversation is how government regulators
are reacting to this. And you're speaking to a lot of EU regulators and governments around the
world about what they should do. What do they understand? What do they not understand? What is
the state of that conversation and what needs to happen? So I think both
in the EU and in the US, regulators now really understand the threat and the risk of computational
propaganda and information war and that it's a really critical geopolitical battlefield,
especially against Russia because they are the most skilled and resourced to conduct that war.
And so they are also starting to develop an infrastructure to face this threat,
but they really understand that they are behind Russia, both in terms of the means invested
and the methods that they are willing to use.
There's also a bit of a moral dilemma, because how do you basically do the same things that you think are morally wrong when your adversary is doing them?
And they're also starting to understand the arbitrary power that is wielded by platforms and the need to have better legislative tools to regulate them.
They also understand that it's not easy to create laws for social media.
There's been this example in Germany, the NetzDG, which was sort of a precursor
of the Digital Services Act, which is the new EU regulation that was just adopted to better
regulate platforms. And we saw how it's difficult to put strong constraints on content
removal, because if you put too many constraints, then the platform will basically remove and
censor everything, which is not a side effect that governments want to see. I do think that
there is one blind spot, both among EU and US regulators, which is that they put a lot of focus
on disinformation and content moderation
and freedom of speech as a consequence.
While there is maybe not enough focus
on the role of the algorithm
and how much reach individual pieces of content can have.
Yeah, my experience of the conversations in DC
is actually that they're mostly focused on where the data is stored.
And if the data were stored on U.S. servers,
then suddenly that could solve all the problems,
but that doesn't solve any of the problems
around amplification and the ability
to make sure that I'm steering who you're hearing from.
So, Mark, I'm going to ask you a blunt question.
There's a lot of countries that are trying to figure out
what to do about the conundrums
that we've been laying out for the last hour or so.
Should we ban TikTok?
Should we force a sale of it?
What should these countries and policymakers
do to respond to this in your view?
Yeah, I mean, it's a tough question.
I think the better approach would be
to have a more systemic approach
and have enough regulation on the whole industry
to force TikTok to behave in a way that is respectful of democracy.
I think in the same way that the US in particular has a lot of security concerns
regarding the fact that TikTok is owned by China,
there are also some sovereignty questions being asked in Europe, for example,
about the fact that the US controls European citizens' data,
which is obviously less of a threat than if it's owned by China,
but the whole rest of the world might ask the same question.
And I would also add that banning TikTok would not prevent
another platform, potentially Chinese-owned or Indian-owned, from emerging,
which would be similarly concerning.
So I agree with you that I think we need a common approach to regulating
and creating guardrails and design codes and building codes for social media
that actually strengthens democracy.
Too often we settle for what would be less toxic for democracy
rather than what would actually be tech plus democracy equals stronger democracy.
And that's the standard that I want to orient as many listeners and technologists
towards. And I agree with you that those should be common and we shouldn't try to single out
TikTok. I think there's a shorter-term issue of TikTok simply not being allowed to grow and become a
greater percentage of vulnerable Western democracies running their cultural infrastructure on a CCP-influenced
company. And I do think there's sort of two steps. My strong recommendation is a strong
ban or a forced sale of TikTok to completely separate the operations. So it has absolutely no
link. We have to fork it completely,
or shut it down. India did this, and I think they sacrificed 200 million Indian TikTok users when India did do a full ban of 60 Chinese apps. And I think that that can happen in the U.S. And then in addition to that, as you said, just like platform transparency is not sufficient, banning TikTok and calling it a day is not sufficient. We also are going to need these better guardrails for how all social media can operate in a way that actually strengthens democratic societies. And I think we need both; that's my opinion.
Sure. Yeah, I mean, I don't think it would be a bad idea either. I think it's a plus on top.
Well, unfortunately, that would mean you'd have less to do every day because you wouldn't have to study all these things that TikTok is recommending.
But maybe that'd be a good thing for your life. Who knows?
That's why I say it's a plus.
Well, let's just close with this, Mark.
So in terms of solutions, is there anything we didn't cover that you want policymakers to know, people working at TikTok to know, or that you want the public to know in terms of addressing these issues?
I think in general the recommendation should be the same, which is that we should be really mindful of and intentional about the content
that we are consuming.
And so we should be aware of the infrastructure
that is underlying where this content comes from.
And in a way, we should be as intentional
with our informational diet as we are with our food diet.
And so we are what we eat, but we are also what we watch.
And I think you're doing a great job at the Center
for Humane Technology in raising awareness
among users regarding these issues.
And I think education is going to be really a critical step
to reach a better state of it all.
Well, you've helped us a lot with that goal right here by spending the last hour with
us. Thank you, Marc, so much for coming on Your Undivided Attention. I hope this leaves listeners
with a lot more to think about in terms of what the risks of TikTok are, how it's operating
geopolitically, and real things that we can do about it. Thank you so much. Thank you, Tristan. It was my pleasure.
Marc Faddoul started his career building algorithms before quickly moving to analyzing their impact on
society. And as part of that work, Tracking Exposed has been putting out reports highlighting
how algorithms are influencing, tracking, and profiling all of us, not just on TikTok, but also
on YouTube and Facebook. And Marc's organization has also built a set of open source tools
so analysts and users can better track how the algorithm is affecting them. We're going to have
links to their reports on Russian TikTok and the French elections in the show notes.
And if you want to go deeper into the themes that we've been exploring in this episode, and
all the themes that we've been exploring on this podcast about how we create more humane
technology, I'd like to invite you to check out our free course, Foundations of Humane Technology
at HumaneTech.com slash course. Your Undivided Attention is produced by the Center for Humane
Technology, a nonprofit organization working to catalyze a humane future. Our senior producer is Julia
Scott. Our associate producer is Kirsten McMurray. Mia Lobell is our consulting producer, mixing on this
episode by Jeff Sudakin. Original music and sound design by Ryan and Hays Holladay, and a special thanks
to the whole Center for Humane Technology team
for making this podcast possible.
A very special thanks to our generous lead supporters,
including the Omidyar Network,
Craig Newmark Philanthropies,
and the Evolve Foundation, among many others.
You can find show notes, transcripts,
and much more at HumaneTech.com.
And if you made it all the way here,
let me give one more thank you to you
for giving us your undivided attention.
