Your Undivided Attention - A Conversation with Facebook Whistleblower Frances Haugen
Episode Date: October 18, 2021

We are now in social media's Big Tobacco moment. And that's largely thanks to the courage of one woman: Frances Haugen. Frances is a specialist in algorithmic product management. She worked at Google, Pinterest, and Yelp before joining Facebook — first as a Product Manager on Civic Misinformation, and then on the Counter-Espionage team. But what she saw at Facebook was that the company consistently and knowingly prioritized profits over public safety. So Frances made the courageous decision to blow the whistle — which resulted in the biggest disclosure in the history of Facebook, and in the history of social media.

In this special interview, co-hosts Tristan and Aza go behind the headlines with Frances herself. We go deeper into the problems she exposed, discuss potential solutions, and explore her motivations — along with why she fundamentally believes change is possible. We also announce an exciting campaign being launched by the Center for Humane Technology — to use this window of opportunity to make Facebook safer.
Transcript
Hey everyone, it's Tristan. Real quick before we dive in, the audio quality in this episode wasn't
quite as good as we'd want it to be, but we wanted to prioritize getting it to you sooner rather
than later, because it's the Facebook whistleblower. Indeed, we got the opportunity to speak
with Frances Haugen herself. And if all that sounds exciting to you and you're an audio producer,
we're actually looking for a senior producer for this show, Your Undivided Attention.
We're especially interested in candidates who are aligned with our mission,
which is why we're sharing this opportunity with you, our listeners.
So please apply or share the role with someone who might.
Visit humanetech.com slash careers.
And with that, here we go.
I joined Facebook because I had a very close friend who helped me relearn to walk
when I got really sick back in 2015, 2016,
who got radicalized on the internet.
And so the issue of misinformation
and the quality of our information environment
was a very personal issue for me.
And once I began working at Facebook,
it became apparent to me over time
that there were conflicts of interest
between what was in the public's good
and the profits of Facebook,
and that Facebook consistently resolved those conflicts
by prioritizing profits over people.
And at some point I realized
the only real path forward,
The only thing that was aligned with the public good was making sure that the public had the
information they needed to make decisions that were good for themselves.
That's Frances Haugen.
She's a specialist in algorithmic product management.
She worked on ranking algorithms at Google, Pinterest, and Yelp, and was even a co-founder
of a popular dating app called Hinge.
She was then recruited to Facebook to be a product manager on civic misinformation and then
worked on counter-espionage.
But what she saw at Facebook was that the company was consistently and knowingly prioritizing
profits over public safety.
And so Frances made the courageous decision to blow the whistle, which resulted in the biggest
disclosure in the history of Facebook and in the history of social media.
We are genuinely now in social media's big tobacco moment.
I'm Tristan Harris.
And I'm Aza Raskin.
And this is Your Undivided Attention, the podcast from the Center for Humane Technology.
What's wild about Frances Haugen's whistleblowing
is that our very conversations about it
are falling prey to precisely the forces
that she's whistleblowing about.
For example, do you have an opinion about Frances?
Did you read that opinion on social media, on Facebook, or on Twitter?
There are stories going viral right now
that she's some kind of operative
or a fake or phony whistleblower,
or that she was secretly colluding with the government
to drive up more censorship of speech online.
But these stories are actually going viral
because of the very forces that Frances is talking about.
But the amazing thing about Frances is
she still believes change is possible.
And that's why she blew the whistle.
In order to help Facebook make the changes
that so many people outside the company
and so many people on the inside
want Facebook to make.
Frances, I am so excited to have you here on your undivided attention.
And I just have to start by asking, how are you?
And when you decided to blow the whistle, did you realize that what you were planning to leak would become the biggest disclosure in the history of the company?
I'm doing okay.
The months between when I left Facebook and when the stories began to come out were much harder than the last week.
Because I accept the consequences of my actions.
I don't think I did anything wrong.
I took a great personal risk
because I cared about what I thought
the consequences were, right?
That I thought kids' lives were at risk,
that Facebook was responsible
for ethnic violence in other countries.
But the thing that motivated me to act
was a fear that, you know,
the ethnic violence in Myanmar
and in Ethiopia was just the beginning.
And it's really scary, right?
Like you look down the road
And, you know, at my low point, which was like New Year's Eve, 2019, 2020, like, I literally had a panic attack on New Year's Eve because it had been so many months of, like, learning more and more and more depressing, scary things and feeling like there wasn't resources inside the company to actually make the level of difference fast enough to, like, respect those lives, right?
And Facebook keeps saying this.
They're like, we invest more in integrity than anyone else in the world.
Well, it's like, yeah, but you also have voluntarily chosen to go into some of the most fragile places in the world.
And to take on even more responsibility, you've chosen to subsidize the Internet for the people in those vulnerable places, but only if they use Facebook.
And so I think there is a certain level of obligation that comes from the idea of if you save someone's life, you're responsible for it, right?
like if Facebook is going to take on the internet experience
for hundreds of millions of people around the world
who because Facebook subsidized their internet
a free and open internet didn't develop in their language
it feels like they have a higher obligation of safety for those people
given that they have made it harder for alternatives to emerge
maybe that's a good place to dive in because you worked on the civic integrity team
which was looking at a lot of what I think you called at risk
countries, or maybe Facebook has that internal terminology.
And I think, you know, we spoke a little bit on this podcast about that.
But my sense is that you were looking at a lot of really dark things that were happening
around the world.
And they were a lot darker than maybe what's happening in the U.S., but I have a sense
that we get a flavor of that, as we've talked about a lot in this podcast.
Do you want to talk a little bit about what are the things that you saw that you were
worried that not enough other people were seeing?
And I'll just say one last thing, which is, you realized that the world didn't understand the information you were looking at, because you were part of what, a team of 20 or something people inside civic integrity who actually knew this? And it had big consequences, and the rest of the world didn't know that. And I just relate to that so much, that there's this sort of internal truth of certain areas of the tech industry that are bound by these problems, and the rest of the world doesn't understand it. I would just love, you know, how do we go ahead and equalize some of that asymmetry, where people don't understand what was happening that you saw?
I think the question of even inside the United States, outside of the United States, like, I joined Facebook because I saw how bad, like, I had lived with the consequences of what misinformation could do in the United States.
And I showed up and I learned almost immediately that what we see in the United States is the sanitized, healthiest version of Facebook.
That, you know, most languages in the world don't have basic integrity systems from Facebook because Facebook has only chosen a handful of languages to write these AI systems that make their system safer.
And I learned that in places in the world where people are freshly coming onto the internet, there's lots of norms that we take for granted as people who live
in a society that's had the internet for 30 years at this point. For example, the idea that
people put fake stuff up on the internet, that idea is not actually a norm in other places to the
same extent that it is here. You had people with master's degrees in India saying, you know,
these are educated people saying, why would someone go to the trouble of putting something fake
on the internet? That sounds like a lot of work. And, you know, once you start realizing that
people's experience of the internet is so different depending where you are, right, that someone
who is becoming literate to use Facebook for the first time in a place like Myanmar, that
the experience of the internet in a situation like that is just very, very different than
what we're seeing today. And we're about to face a real, real big change as, like, a global civilization, in terms of Starlink, so that's SpaceX's, um, internet service provider. As Starlink expands, you know, I bet Facebook gets another
billion users over the next five years. And those people, a lot of them will have had the
internet for the first time. You know, they may become literate to use Facebook. And given the
level of protections, I saw Facebook was giving to people in the most vulnerable places in the
world. I genuinely think that there are a kind of shockingly large number of lives on the line.
I mean, from a business perspective, it also costs resources
to support each one of these other languages.
And there's like a growth rate to,
here's these new countries that are coming online,
here's these new languages that are coming online,
and they're all languages we haven't seen before,
which means that they cost a lot more
than adding a million more users in a language we already have.
So do you want to talk a little about the economies of scale
that kind of emerge from this issue?
The economies of scale issue is really essential
to the safety problem at Facebook.
Let's imagine Facebook is trying to roll out
to new countries, in new languages. They would do exactly what Facebook has done. They started out in English.
They got lots and lots of English-speaking users. They started rolling out to major Western European
languages. They moved out into the next largest markets. And as they continue to grow, they
keep going into these communities that have fewer and fewer language speakers. Programming a new
language for safety, like building out these integrity and safety systems, costs the same amount
in a language that has 10 million speakers as one that's like English that has a billion speakers.
As you roll into these new languages, the average amount of revenue you get gets smaller and smaller
because each one of these new languages has fewer speakers, and on average, the speakers of those
languages are less affluent than the ones that are already on the platform. And so given that
there is a fixed cost for each one of these new languages, it's just not economical for Facebook
to build out the level of safety that currently is available in the United States and
these other places. And people are paying for that with their lives.
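To put rough, purely hypothetical numbers on that economics (Frances doesn't cite exact figures here), the back-of-the-envelope arithmetic looks something like this sketch in Python:

```python
# Illustrative arithmetic only; every figure here is hypothetical, not Facebook's.
SAFETY_COST_PER_LANGUAGE = 10_000_000  # building integrity systems is roughly a fixed cost

def revenue_vs_safety_cost(num_speakers, annual_revenue_per_user):
    """Compare the revenue a language brings in to the fixed cost of making it safe."""
    revenue = num_speakers * annual_revenue_per_user
    return revenue / SAFETY_COST_PER_LANGUAGE

# A huge, affluent market covers the fixed safety cost many times over...
print(revenue_vs_safety_cost(200_000_000, 50))  # 1000.0
# ...while a smaller, less affluent market barely covers it, which is the
# economic pressure behind the under-investment described above.
print(revenue_vs_safety_cost(10_000_000, 2))    # 2.0
```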
What this brings to mind is Sophie Zhang, who is another Facebook whistleblower.
And to quote her, she said, I have found multiple blatant attempts by foreign national governments
to abuse our platform on vast scales to mislead their own citizenry and caused international news on multiple occasions. I have personally made decisions that affected national presidents without oversight and taken action to enforce against so many prominent politicians globally that I've
lost count. And I believe you've talked, Francis, about the triage process. I can't remember whether
you said it was 80 percent or two-thirds of the cases when you were on the espionage teams that
you just had to ignore. So I worked on counter espionage under the threat intelligence org
as the last job I did at Facebook. And our team was small enough that, you know, of the cases we
knew about. We only were able to actively work on about a third of them. About a third of
them, we would occasionally check in on them. And a third of them, they were basically in the ice
box. And like, maybe occasionally someone would take a look, but like they weren't able to be
worked at the level that that third of cases that got support got. And the part that I think is most
concerning is we intentionally did not invest in proactive detection mechanisms because we couldn't
support the cases that we had to start with. And so, like, the metaphor that, you know, I think of for this is we were dealing with, like, the outer skin of an onion, only we didn't know if the size of the onion was a baseball or a beach ball, right?
Because we intentionally did not invest in systems to figure out how many candidate cases there were because we couldn't work on the ones that we already had.
So there's this question of, like, why is that happening?
Like Facebook makes $40 billion in profit a year, right?
That's not the revenues. That's profit.
Surely Facebook could afford, you know, to have 50 people on that team instead of having, you know, under 10, maybe six people on that team.
I think there is a real thing that Facebook has a long tradition of thinking of itself
as a scrappy startup and there is a lack of appropriate baselining for what level of investment
should be happening.
And so Facebook has made a number of statements about how there's hard tradeoffs involved
and like if the answers were easy, like these problems would be solved.
And it's like, actually, there's lots of problems where lots of solutions have been identified by Facebook's own employees. The challenge here is that Facebook doesn't want to invest,
you know, three, four, five times as many people in solving these problems.
And because there is no oversight, we, the public, never get a chance to weigh in on what
level of investment should be happening, even though our safety and our lives are on the line.
It seems like that's kind of the main issue that so often came up in the themes of the documents
that you disclosed is Facebook, so far as I could tell, in the documents, their own researchers
would say, we have actually many solutions to this problem, but then when given the choice
to implement them, not always, but when given the choice to implement them, they would choose
to simply not make that change if it would drop revenue, growth, or engagement. Why do you think
that is? I mean, in so many cases, whether it's teenage mental health or in investing more
in integrity or in reducing virality as opposed to dealing with content moderation or fact-checking,
why do you think it is that they wouldn't opt for what was good for society when they have the resources to do so?
I have a lot of empathy for where Facebook is coming from.
A real challenge and like a responsibility they take on, which I think is great,
is a seriousness about making decisions around being sensitive to the fact that they do have a lot of power.
One of the things that I think is unfortunate is they are uncomfortable enough about the
notion that they have that power, that they avoid making decisions.
So, for example, Twitter has made the call of, you know, you have to click through a link in order to reshare it.
That's not a giant imposition, the idea that, like, if you want to share a link, you need to at least click on the link.
And the reality is that little changes like that that increase the friction of how information spreads actually have a really big impact on the quality of the information that spreads.
At the same time, part of why I think Facebook didn't implement that is, one, it would slightly decrease growth, right? We're talking a 0.1 to 1% of sessions kind of thing, like not a huge impact on growth, but still, like, an impact on growth. And second, they'd have to acknowledge that their systems
were dangerous, right? And Facebook tries to stay away from any kind of opportunity where
people might associate that the company is dangerous. And so I think those are factors that lead
them to not act. It becomes this question of, like, who gets to decide how tradeoffs should be resolved. And right now, the public doesn't get any transparency into that. And so it's
just up to Facebook to decide whether or not they want to prioritize growth at all costs.
That explains why Facebook likes to frame things in such false dichotomies. It's either free speech
or censorship, nothing in between. And I'm really curious, just what other of your favorite
examples of solutions that the brilliant people inside of Facebook came up with that they know
would do something, and then they've chosen not to do.
Here's an example.
Lots of dynamics on Facebook are driven by extreme users.
And by extreme, I don't mean like their positions are extreme.
I mean like the intensity of their usage of the platform is extreme.
So for example, someone who is a 99th percentile user in terms of the number of pieces
of content they produce.
Maybe that's posts or maybe it's comments might produce 10 times as many pieces of content
as a 95th percentile user.
And a 99.99th percentile user might produce 100 times as much content as a 99th percentile user, or at least a 95th percentile user.
And so you can imagine a system or like a change where you just came in and said,
okay, we're going to figure out what the 99th percentile is for a number of comments.
Let's cap the number of comments you can give in a day at that number of comments.
So that might be 30 comments a day.
I don't know.
I don't know the exact number.
But when they went and did that analysis in the case of COVID, even if you ignore the content of the comments, you just say, we're going to cap people at whatever the 99th percentile is right now, that ends up having a huge impact on COVID misinformation because a very, very small number of people are hyper-sharers or they're hyper-commenters.
And there's a lot of power in just saying, hey, let's make space for more people to talk because it turns out on average, people are pretty cool.
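As a rough sketch of the percentile-based cap Frances describes (hypothetical numbers and function names, not Facebook's actual code), the mechanism could look like this:

```python
# Illustrative sketch only: cap daily comments at the current 99th percentile.
import numpy as np

def compute_comment_cap(daily_comment_counts, percentile=99):
    """The cap is whatever comment count the 99th-percentile user produces today."""
    return int(np.percentile(daily_comment_counts, percentile))

def can_post_comment(user_comments_today, cap):
    """Typical users never hit the cap; only hyper-commenters are slowed down."""
    return user_comments_today < cap

# Hypothetical distribution: most users post a few comments, a handful post hundreds.
sample_counts = [0, 1, 2, 3, 5, 8, 12, 30, 250, 400]
cap = compute_comment_cap(sample_counts)
print(cap, can_post_comment(10, cap))  # an average commenter is unaffected
```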
Yeah, Renée DiResta, one of our previous guests, talks about this as the asymmetry of passion. But what you're talking about is the hyper-asymmetry of passion, where you have
a small number of people who are posting, you know, all over the place. I think in some of your
other work, you talked about the invites and that there's certain people also who invite like
many, many, many more people to groups. And that that's also kind of a different issue. Do you want to
talk about some of those other asymmetries? I think of it as like, you know, in complexity theory,
the notion of scale, that there's certain things that are at greater, greater scales than
others, and we could pay attention to the outliers and how do we control some of the extreme
usage that's more dangerous? So the example that you gave there around invites, so I've discussed before the idea that Facebook should have to publish what all of its rate limits are. So a rate limit
is, like let's say we sat down and said, how many people should someone be allowed to invite
to groups, like a group in any given day or any given week? How many people should they be allowed
to invite overall in that same time period? Because the reality is that some people are trying to
weaponize the platform, and most people aren't. You can imagine coming in and saying, okay, you can
invite 1,000 people a week to groups. The current limits are set so high that the documents show that
there was a person they found who had invited 300,000 people to QAnon-related groups. And one of the
things that's kind of scary about that is that Facebook has a feature in place such that if you are
invited to a group, Facebook will inject content from that group into your feed for, I think it's
30 days. And if you engage with any of it, they'll consider that like a ghost follow. So when that
person went and invited 300,000 people to QAnon groups, now all those people start having their feeds flooded with QAnon content. Wait, so this is important. You're saying that if someone
gets invited to a group, they don't even accept the invite. They're not saying, yes, I would like to
join your QAnon group. You're saying, suddenly, by just the invitation alone, their feed gets flooded with QAnon posts. And then if they engage at all, it kind of auto-joins them in some way. Yes, it's this
question of that, you know, Facebook knows that groups are a valuable conduit for people to
connect on Facebook and that sometimes people get invited to groups and they either don't notice
the invitation or, you know, maybe they don't really understand that they have to accept it.
And so Facebook's idea is that instead of waiting for someone to accept a group, that you
might inject content from that group into their feed for a period of time. And if they engage
with that, then that we should assume that they want to continue to receive that content.
This becomes problematic when people get invited to really large groups because, like, let's say you have a group that has half a million members and it produces 500 pieces of content a day.
If you have an algorithm, you know, engagement-based ranking that prioritizes divisive, polarizing, hateful content, and there's 500 posts a day that go into that group, you might end up in a situation where Facebook has to figure out what two or three posts of that 500 should go into your news feed.
And if you know that those biases exist with your engagement-based ranking, it means you're going to keep having this kind of forcing function that gives mass distribution to extreme content.
So when you combine that with the fact that, you know, Facebook will start auto-injecting content if you get invited, it's kind of a perfect storm because it means that someone who's really motivated, you know, they have that asymmetrical passion, can, you know, add huge numbers of people to their group.
every day, and then be able to force a stream of extreme content into their feed.
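Mechanically, the kind of invite rate limit being discussed is small. Here is a minimal sketch, assuming a hypothetical cap of 1,000 invites per week and invented function names (not Facebook's real API):

```python
# Illustrative sketch only: a per-account weekly cap on group invites.
from collections import defaultdict
from datetime import datetime

WEEKLY_INVITE_CAP = 1000  # hypothetical figure from the conversation above
_invites_this_week = defaultdict(int)  # (user_id, iso_year, iso_week) -> invites sent

def try_send_group_invite(user_id, now=None):
    """Allow the invite only if the sender is still under this week's cap."""
    now = now or datetime.utcnow()
    iso_year, iso_week, _ = now.isocalendar()
    key = (user_id, iso_year, iso_week)
    if _invites_this_week[key] >= WEEKLY_INVITE_CAP:
        return False  # one account can no longer invite 300,000 people to groups
    _invites_this_week[key] += 1
    return True
```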
And if I recall correctly, wasn't the reason that Facebook leaned more on Facebook groups
because regular user engagement was going down, that regular users are posting less?
Facebook noticed that people who were members of groups had a higher retention rate on the platform,
and part of that is because they get exposed to more content.
Groups are generally wonderful. People love coming together. And to be clear, I'm not saying
that groups are bad. I'm just saying that the way you would design groups without algorithmic feeds,
without having computers choose what to focus on,
is you would design those groups at a much more human scale, right?
So you'd have things that looked like Discord servers, right?
You know, things where people have a single conversation,
and if it gets too noisy, they start opening smaller rooms
that focus on other topics.
And so I think there's a real advantage to having more of a human-oriented design strategy
instead of having an AI will save us strategy.
Most people are not aware of how Facebook builds the systems
that get to pick out the content that goes into your newsfeed.
So Facebook goes and takes the actions of millions and millions and millions of people.
And they say, okay, so we had this information about what people were interested in the past.
We have information about the content that we could show them. We're going to try to make these predictions and see, like, how accurate were we?
And systems are, quote, trained by looking at those millions and millions of people's actions.
But the reality is that not all of those people on the system interact with Facebook the same amount.
So if the behavior of someone who looks at thousands of posts every day is different than the behavior of someone who looks at, say, 50 a day,
that person, the person who looks at a thousand a day, has 20 times the impact on the algorithm as someone who looks at 50 stories a day. And so what's interesting is Facebook
knows that some of the people who consume the most misinformation, they've gone through
life experiences recently that make them more vulnerable. Maybe they were recently widowed.
Maybe they were recently divorced. Maybe they moved to a new city. Maybe they're getting depressed.
That those people end up influencing the algorithm to an outsized impact compared to the average user.
And so, you know, there's a lot of these weird feedback cycles, where as people get more depressed, they might more compulsively use the platform.
The idea that their actions could then feedback and influence an average user of the platform
is kind of crazy. And you can imagine doing things like coming in and capping how much impact
any given user could contribute to the overall ranking for everyone else. And that might
also help rein in some of these impacts. I'm thinking of it almost like a Gini coefficient, a measure of inequality. We cap the amount of inequality in society.
Well, there's an inequality in how much the more depressed, anxious, angry, et cetera, people are actually influencing my feed. I think, actually, from a personal sovereignty
perspective, the notion that people who are, you know, sorting for that reactivity are
actually disproportionately influencing what I see as a normal user.
That kind of speaks to the ways in which we don't really have this kind of marketplace
of free speech.
Even my own usage is asymmetrically influenced by the people who have shown themselves to be more reactive or have other psychological issues going on.
Do you want to talk a little bit about that?
Yeah, so like I've done studies on who are the people exposed to the most misinformation?
And because people who are socially isolated, so maybe their spouse recently died or they got
divorced or they moved to a new city, because those people don't have as many avenues for social
connection in face-to-face communications, they often lean heavier on Facebook.
A lot of Facebook strategies for dealing with conspiracies, myths, hoaxes are about demoting that content in their feed.
If you consume thousands of pieces of content a day, those demotions stop having an impact, right?
Because you're still going to get down to the stuff that might be harmful.
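One way to picture the idea raised earlier, capping how much impact any one user contributes to the ranking model, is to cap how many of each user's engagement events make it into the training data. A hedged sketch with invented names and thresholds:

```python
# Illustrative sketch only: limit each user's events in the ranking model's
# training data, so hyperactive accounts don't dominate what everyone else sees.
from collections import defaultdict
import random

def cap_events_per_user(events, max_events_per_user=50):
    """events: list of (user_id, event) pairs from interaction logs.
    Keep at most max_events_per_user randomly sampled events per user."""
    by_user = defaultdict(list)
    for user_id, event in events:
        by_user[user_id].append(event)
    capped = []
    for user_id, user_events in by_user.items():
        if len(user_events) > max_events_per_user:
            user_events = random.sample(user_events, max_events_per_user)
        capped.extend((user_id, e) for e in user_events)
    # Someone who looked at 2,000 posts now influences training no more
    # than someone who looked at 50.
    return capped
```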
People sometimes ask, like, why am I so adamant about chronological feeds?
So that means your news feed should be put together using a system that you understand, right?
Let's order it by time, like your email, or maybe order it by time and explain to you any other tweaks that happen.
That system is something we all can understand together.
We can have a conversation about it.
But when you have a system where someone who is coping with their anxiety by consuming 2,000 pieces of Facebook content a day, and becoming more anxious as they read more extreme things on Facebook, do we really want to have that kind of behavior bleed over into people who haven't yet been influenced by Facebook that way? So it's almost like we don't just have a gradient of privilege,
we have a gradient of anxiety where the most anxious people pass on more of their anxiety or things
like that to the other users. And also, you know, the lower you scroll, the worse it gets is kind
of one of the other things that seems to emerge from what you've just shared. One of the things
that really struck me about the change to meaningful social interaction is that, as Francis has said,
it forced political parties to take more extreme views. And on free speech, how can you
have free speech when people's true beliefs are being held hostage to Facebook's need for
virality? Yeah. So I think one of the things that I found very shocking about what's in the
documents is there are multiple examples of people external to Facebook cluing in on patterns
that were seen inside of Facebook. So researchers inside of Facebook saw things like the
more angry the comment threads on a post, the more clicks go out of Facebook back to a publisher.
The publishers were writing in and saying, hey, our most popular content on Facebook is some of the
content we're most ashamed of, right? It's inflammatory. It's divisive. It plays on stereotypes.
Political parties were coming to Facebook and saying, hey, we notice you changed the algorithm.
Like, it used to be that we could share out something like a white paper on our agricultural policy,
and people would still get to read it.
Only now when we do that, all we get is crickets, right?
It doesn't really work anymore.
Because in an engagement-based ranking, those polarizing extreme divisive pieces of content are the ones that win.
I think that's one of these interesting things where, like, I think why I feel so strongly about chronological ranking, you know, order by time,
is that everyone can understand what order by time is.
And even Facebook doesn't really understand how the newsfeed works.
And I just think it's safer for society for us to say, hey, let's have a thing prioritizing our attention that we all understand, instead of a system that not even the experts in the world understand.
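For contrast, here is how small the difference is to state in code: a toy sketch (not either platform's real ranking system) of a chronological feed versus an engagement-ranked one:

```python
# Illustrative sketch only: time-ordered feed vs. engagement-score-ordered feed.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: float             # seconds since epoch
    predicted_engagement: float  # opaque model score used by engagement ranking

def chronological_feed(posts):
    """Order by time, like email: a rule anyone can understand and audit."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_feed(posts):
    """Order by a learned score; as discussed above, this tends to reward
    the most divisive, extreme content."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```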
Of course, for that to work in a game theoretic way, the app TikTok versus Twitter versus Instagram versus Facebook, the one that chooses chronological feed won't get as much engagement as the ones that rank by what's really good at getting your attention.
So if we were to go chronological, that's the kind of thing where, because of a kind of game-theoretic multipolar trap, you would need everyone to go to a chronological feed at the same time.
And what I think that's pointing to is not necessarily that, I mean, you said it yourself.
We can start with chronological feed, but what you're really talking about is that everyone should
understand why the things that are coming to them are coming to them. And it shouldn't be based on an automated system that prioritizes the things that make society
not work.
I mean, the way I think about it now is that Facebook is basically their business model is making
sure you can never have a Thanksgiving dinner where you understand anybody else at the table
because their business model is polarizing society so that everyone gets their own
personalized view of what was most dividing, et cetera. And the goal here can't just be a nicer,
more enjoyable Facebook. It's got to be, well, Facebook is operating the information system or all
these systems are operating the information that goes into an open society. And the open society's
ability to govern is based on synthesis and constructiveness and perspective-seeking and perspective synthesis, and saying, okay, what are we actually going to do about our biggest problems?
And the biggest problem I see in an engagement-based ranking system is that by rewarding more
extreme polarizing population bases or political bases, it means that, as you said,
politicians and political leaders have to cater to a more extreme base, which means that
their unique selling proposition to their constituents is never agreeing with the other side,
which means that democracy grinds to a halt.
And that's what I mean by, you know, Facebook's business model is making sure you can't show up
at the dinner table and Thanksgiving and have a conversation and making sure that you're always
going to lose faith in your democracy. And that those two things are incompatible with democracies
working. And that's the kind of thing that makes people say, hey, I don't even want this democracy
anymore. I want authoritarianism. So either I want China or I want, you know, to elect some kind
of strong man who's just going to smash the glass and break through this thing so we can actually
have a real, real governance that's delivering results at least in some direction as opposed to
constant gridlock. It also means that we're talking about not just Facebook, but a business model
more generally. And as you're pointing out, Tristan, that means the solution can't be applied only to Facebook. It has to be applied to the entire industry at once.
Yeah, I think it's the thing where we're going to have to have government oversight and have
them step in and say, hey, like, Section 230 right now gives immunity for content that is supplied
by users, right? It says, like, if platforms aren't the one creating content, then they're not
responsible for the content that gets created. But platforms are responsible for the choices they make
in designing their algorithms. And I think exempting those algorithmic choices from 230 and forcing
platforms to have to publish enough data that people could hold them accountable, is an interesting
strategy for forcing more platforms to go towards chronological ranking. Because the reality is,
if people can choose between an addiction-based growth-hacked algorithmic engagement-ranking-based
feed or one that is time-based, they're always going to pick the one that's engagement-based
because it is stickier. It does make you consume more content. But at the same time, like,
it also makes people depressed. It also causes eating disorders in kids. You know, there's real
consequences to these systems. And I just think in the end, like if you actually talk to people
and you said, do you want computers to choose what you focus on or do you want to choose what you
focus on, I think from a personal sovereignty perspective, we should all want to have control over
what we focus on, not have computers tell us, especially Facebook's computers.
I mean, the complexity here is that a lot of people pick who they follow or who they add as a
friend based on the AI recommended suggestions about who you should follow and who you should
recommend. So the computer, again, is involved in saying, well, here's some users that are more
extreme voices that we know that if you add them as a friend or you follow them on Twitter,
they're the ones that are going to get you coming back all the time because they say the most
outrageous things. So even if we're picking who our friends are, quote unquote, the menu is not a menu that we picked on our own, but was picked by, again, an engagement-based AI.
It's like the thing just kind of keeps going upstream.
You know, if we could have it be 80% less bad, 90% less bad, I think having, like, I totally
get it.
I'm actually really concerned about the engagement-based recommendations of, like, what
groups you should join or what people you should follow.
Totally get it.
Totally agree.
But like, at least start with the news feed, at least get that under control.
We shouldn't be afraid of making a problem 90 or 95% better because we can't make it 100% better.
One of the things, Frances, that really struck me about your testimony, about your 60 Minutes piece, just the way that you show up, is that you seem to be very motivated by care.
You say things like, I want to heal Facebook.
This doesn't feel like anger per se, although I feel notes of anger, but it feels nuanced,
and it feels almost more like an intervention.
And I'm really curious, how did you come to be this way?
How do you hold care in such high stakes?
There's a lot of research on how do people actually change.
And people very, very rarely change because you're angry at them.
It's just not how humans respond to stimulus.
And I just don't think being angry at people accomplishes a lot.
We've had combativeness.
We've seen polarization in society. A lot more happens by reaching, you know, across the aisle to people, or, like, saying, you know,
this is a collaborative problem that we can work on together. And I think I just have a lot longer
time horizon than a lot of people do, right, that I would rather do the walk with someone
approach than the fight someone approach. And Facebook likes to say, you know, she didn't even
make it two years. One, I would have worked at Facebook longer than two years, except for they
did not let me move to where I wanted to move. But two, Facebook needs to have a lot more
voices involved in the problem solution process. And there were a lot of people who didn't get a chance
to do that kind of collaborative care-based intervention. And I think there's, I think there's an
opportunity here where we've been angry at Facebook for so long. What if we came in and said,
Facebook, like, you're stuck? We keep having the exact same arguments. Like, what if we had
different arguments? You know, what if we brought more people to the table? And one of the
things that's been amazing about giving testimony the last few days is that once you give people in
Congress a lot more options on what are steps forward, we can have a lot more of a diverse
conversation than just, you know, is there good or bad content on the platform? Should we take
more or less of it down? And I would much rather have like a constructive conversation than have
a demonizing conversation. The process of change is a very long and slow one. And when you're
driven by anger, you generally burn out. And people have commented to me about the scale of this disclosure. I don't think I could have pulled this off if I was motivated by anger,
because it took too much effort to pull it off as it is.
So one of the things I find interesting is, you know, we are used to using the phrase morally
bankrupt.
And people keep misinterpreting my statement of moral bankruptcy as meaning that I am saying Facebook
is morally bankrupt.
The reality is I'm trying to take the word bankruptcy, like, in a financial sense, that in our society, sometimes people get in over their heads.
People spend too much money, they take on too much debt, they can't pay it off.
And in our society, we believe that people's lives are more valuable than money.
We have an avenue where, if someone gets in too far over their head, they can admit that they are overwhelmed and that they need help.
And it doesn't mean they get off scot-free, but if they're willing to be honest
and they are willing to ask for help, like we have a mechanism where they can take a reset.
And I really think that Facebook needs our help.
Being angry at Facebook is not going to solve the problem.
Like having Facebook hunker down and being more embattled is not going to solve the problem.
What Facebook needs is it needs people to work with them and to, for us to all together find solutions.
And I think the only way, you know, they're going to be able to recruit the people they need to solve these problems, or to get the will to solve these problems internally, is if they declare moral bankruptcy. They need to come in and say, yeah, we've done some things, some real
serious things. And some of those things we did intentionally and some of the things we did
unintentionally. We made other decisions we thought were good that led us down this path. And as a
result, we need to have a path out. Because all of us live on this planet together, all of us
are going to have social media. Like the idea that getting rid of social media is plausible,
I don't think that's true. But I think there's an opportunity there. And I think there's an
opportunity for Mark Zuckerberg to live a life where he feels at peace, where Facebook
can put the employee, like I never received a piece of clothing, a bag, a gift when I was
at Facebook that had the company logo on it. And the reason for that is that Facebook employees are
endangered when they wear the company logo. And I have this dream that one day Facebook will
work together with the public and we will actually begin to heal some of these problems
and that Facebook employees will get issued, you know, jackets and backpacks and hats that have a logo on it again.
And that's, I think that's possible.
And that's my hope.
That's what I think moral bankruptcy could bring.
Frances, are there any political or spiritual or other leaders from the past that you look up to as you think about this work?
I think part of why people get so angry about Facebook is they feel like,
it's impossible to change Facebook.
You know, like, people, like, you see a lot of the comments that people have made
as people have seen my disclosures, and they sound just exasperated.
They're like, is any of this new, like, nothing ever changes?
And I hear, like, a real sense of powerlessness.
And I really am inspired by people like Gandhi or, like, Nelson Mandela.
I do believe in, like, the power of peaceful resistance, right?
I do believe in the power of pacifism.
But the thing that I find inspiring about both those cases is that those people took on,
seemingly impossible foes, and because they were willing to work slowly and diligently,
like it took them decades, decades and decades of work.
And I think sometimes in tech, we rarely look more than two years down the road.
You know, we say if we can't accomplish something in two years, like it's not worth doing.
And I think there's this question of if we believe that things like engagement-based ranking
are so dangerous that there might be hundreds of thousands or millions of lives on the line, it becomes a thing of choosing strategies that allow us to keep grinding, potentially for a long time, right?
Like if you're powered by anger, you'll burn up into a little crisp.
But if you come in and you say, like, I see that Mark Zuckerberg is suffering, right?
I see that Facebook employees are suffering.
People who work on keeping us safe on Facebook grind themselves into the ground.
You know, we need a different way forward.
We need to help them feel like they can be a collaborative, integrated member of society, not one that feels misunderstood and persecuted.
That sounds like a horrible fate.
And you can grind away on social change for quite a long time
if you're powered by the power of hope.
And I think there's a real opportunity here
where there's a collaborative solutions approach
that involves governments around the world,
involves people all around the world,
where we can make social media that we enjoy
that helps us bring out the best in humanity.
And I think that's possible.
And one of the things I'm most scared of from my disclosures is discouraging people from working at Facebook, because I think working at Facebook is one of the most important jobs in the world.
I will lightly nag them until I get to work there again.
You know, that is my hopeful dream one day, and people say I'm a Pollyanna for that.
But you never know. You never know. I have great optimism.
And it's one of these things where that cycle, I don't see how it breaks unless Facebook declares moral bankruptcy, right?
Facebook needs more people working there to solve these problems.
In order to get more people to work there,
people have to have faith that Facebook is acting in good faith, right?
That Facebook really wants to solve these problems,
that it's willing to make the hard choices that will let them solve these problems.
And so coming out and declaring moral bankruptcy
is the process of saying, I need help.
I'm in and over my head.
But I'm willing to change, and I'm making a commitment to change while you join me.
And I think that that is a path forward.
That is a path to healing and a path to making
social media that we enjoy, that brings out the best in humanity.
I'm curious, Frances, as you think about governance for Facebook to take this action, or all the other actions that are sort of like this, safer for society at the cost of a little bit of profit. What is the institutional governance, the process by which we can get
there? Because there's going to be this example and then there's going to be another 10 examples
and then another 100 going down.
So it's like designing products is a moving process.
What is the moving process for governance
that sort of matches the knowledge required
and the speed required to work on products like this?
We need to have a conversation about
how do we make sure that Facebook isn't the only one grading its own homework?
Facebook has established a pattern
where like even when asked very important direct questions,
questions like, is Facebook safe for our kids?
Facebook has outright lied to official groups like Congress.
We need to have a process of having privacy protected data.
So there's ways of obscuring the data such that people's privacy won't be harmed.
And Facebook has been emphasizing a false choice of that we can either have oversight or we can have privacy.
And that is a false choice.
We need to be working collaboratively with researchers to develop obfuscation techniques that can be privacy sensitive,
but also give meaningful data.
So the very, very, very minimum is we need to have enough data that someone other than
Facebook can choose what questions to ask about Facebook.
The second is we need to have something like a regulatory body that can actually force Facebook
to make changes when those independent researchers identify problems.
Because right now, until the incentives change, nothing is going to change at Facebook.
And Facebook has shown that on its own, it will not change.
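To give one concrete flavor of what privacy-protected data can mean, researchers often release aggregate statistics with calibrated noise added, as in differential privacy. A minimal sketch, assuming simple count queries and not any specific proposal of Frances's:

```python
# Illustrative sketch only: release a noisy aggregate count instead of raw user data.
# This is the basic Laplace mechanism from differential privacy.
import numpy as np

def noisy_count(true_count, epsilon=0.1, sensitivity=1.0):
    """Add Laplace noise scaled to sensitivity/epsilon, so no single user's
    presence or absence meaningfully changes the released number."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: report roughly how many accounts saw a piece of misinformation,
# without exposing any individual account.
print(noisy_count(125_000))
```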
Right.
Right now, for everyone that's on the outside but on Team Help Make Facebook Better, it's like we're a surgeon trying to operate on a patient,
but we can't see inside the patient. Yep, totally. It's like cigarette companies, right? The cigarette
companies, when they were getting pressed about cancer, and just to be clear with people,
tobacco causes cancer in approximately 10% of people who smoke for decades. In the case of
Instagram, the person Facebook provided for Senate testimony, she said 80% of kids are fine on
Instagram. So 20% of kids aren't fine on Instagram. We should care about that. When cigarette companies
came out and said, hey, we're willing to admit that cancer is a bad thing. We've invented these
filtered cigarettes to make smoking safer. We're going to dilute the smoke with air. It's safer.
It causes less cancer. Because scientists could independently confirm whether or not that marketing
message was true, we're able to find out, no, in fact, filtered cigarettes are actually
more dangerous because people smoke substantially more when the experience of smoking is more
pleasant. And so people were breathing deeper in these carcinogens. They were consuming net more
nicotine, net more tobacco over longer periods of time. And so right now, Facebook makes claims
about how hard it's working on things, but we have no idea if any of these problems are getting
better. And from what little we do see from external researchers trying to piece together data,
we can see that misinformation has gone substantially worse on the platform, that it gets
distributed to more people, and that the platform gets ever more concentrated, and that's
dangerous.
That imperils people's lives.
And that I think is my meta-concern when we talk about solutions, which is that the first
derivative of harms increasing, the first and second derivative, so for those who don't remember pre-calculus, we're just talking about the growth rate of things like misinformation, the growth
rate of fake accounts, the growth rate of content that's toxic that we can't identify.
If the growth rate of all those harms is exceeding the speed and growth rate of our solutions,
then we know what the world is going to look like.
And it's not good.
And in my mind, when we talk about what kind of governance is necessary,
the only kind of governance that would be adequate to the situation would not just keep pace with the growth rate of the harms. But, I mean, we want this to be going in the other direction.
We don't want just to move towards 5% less bad Facebook while the bads are growing at 50% per year.
That's not going to get us very far.
We want to move to a world where Facebook or social media, as it's a brain implant in our democracy,
actually makes for a better and stronger democracy.
And what I really wonder about is how can the growth rate of our solutions and the things
that we're implementing to fix these problems, how can that be faster than the trillion-dollar tech
companies that are trying to hide the problem?
To me, this is the trillion, the no-joke trillion-dollar question: the growth rate of the harms
compared to the growth rate of governance.
So the fear is, you know, Facebook can move so much faster than government can move.
And I think the thing that we always have to come back to is if the issue is that we're worried that Facebook can move faster than government can move, then we should absolutely insist for transparency and access to data.
I totally agree with you.
We have an existential threat on our hands, but like we can either throw up our hands and say, oh no, like the singularity is getting away from us or we can fight, right?
And I really believe that we have not lost the opportunity to change.
We have not lost the opportunity to act.
And if that is our fear that we have to move as absolutely as fast as possible, then let's do that.
Let's build that organization.
Let's build the ability to execute at that speed because we have to.
When we think about changing something as big as Facebook or as gargantuan as the entire social media engagement industry, where do we begin?
We talk about platform tweaks, design changes, and government regulations.
But where should we start? What should we focus on?
Systems theorist Donella Meadows had a brilliant answer to that question.
In 1972, she introduced a framework known as the 12 leverage points to intervene in a system.
Her framework basically identifies 12 different leverage points for making change in a complex system
and puts those leverage points on a scale of increasing leverage.
Leverage points lower down on the scale are often easier to push, but have less impact on the system.
Leverage points higher up on the scale are often much harder to push, but have more impact.
There are also feedback loops, which means pushing on a lower leverage point can sometimes
have unpredictably outsized impact.
Bringing it back to Facebook, we need an all-of-the-above strategy.
We need to push on the lower leverage points of small platform tweaks, design changes,
and internal governance, while advocating for longer-term systemic reform through external government regulation
and the push for new business models.
And in a world where Facebook affects almost 3 billion people every day,
even a tiny platform tweak can completely change the world.
It is by strategically pushing on a whole ecology of Donella's 12 leverage points
that we can make systemic change to the complex system
that is Facebook, the social media industry, and our society beyond.
Speaking of other solutions that are very small but can have a pretty big impact, one of the things I heard listening to the Wall Street Journal's Facebook Files podcast, and it was, I think, hidden in the fourth episode, is the reporter talks about a senior Facebook data scientist who goes to the executives and says, hey, our data shows that if you just limit the number of reshares, how many hops it can take, I can share to you, you can share to Tristan, but no further, that that does more for fighting misinformation, disinformation, and toxic content than almost anything that we've done so far,
if not more. Can you expand on that? Can you explain how that works, why it works, and what
the solution is? Yeah. So when people talk about the idea that all that I want is censorship,
I find that really, I find that a mischaracterization of what I want, because I want to have
platform changes. I don't want us to pick winners and losers in the marketplace of ideas.
And the case of limiting reshare lengths basically says, hey, we know that the further down
a reshare chain you get, like let's say it's a reshare of a reshare of a reshare of a reshare,
the content gets worse and worse on average.
And that's because information that is crafted to be viral is designed to play on all of our
vulnerabilities.
And so the further down a reshare chain you get, the higher chance that content is bad content.
Instead of picking which ideas are good and bad, if we just said, hey, you can share things infinitely, but you have to copy and paste at some point.
Like if a reshare chain gets more than, say, two hops long, you have to take a moment and copy and paste.
You have no one stopping you.
You really want to keep spreading that idea, copy and paste.
But let's just not knee-jerk reshare it.
It's kind of like saying you have to click through a link in order to reshare it.
You're not being oppressed by having to click on your link before you reshare it.
But that friction, like having a chance to take that extra breath, actually reduces the amount of toxic content that gets spread.
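Mechanically, the reshare limit is tiny: track how many hops a post has traveled from the original, and after two, turn off one-click resharing. A hedged sketch with hypothetical field and function names:

```python
# Illustrative sketch only: disable one-click resharing beyond two hops.
MAX_ONE_CLICK_HOPS = 2

def make_reshare(original_post):
    """Create a reshare and record how far it now is from the original post."""
    return {
        "content": original_post["content"],
        "reshare_depth": original_post.get("reshare_depth", 0) + 1,
    }

def can_one_click_reshare(post):
    """After two hops the reshare button goes away; people can still copy
    and paste the content if they really want to keep spreading it."""
    return post.get("reshare_depth", 0) < MAX_ONE_CLICK_HOPS
```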
So that seems enormous, right?
Because Facebook spent something like, you know, several billion dollars on this fact-checking and content moderation, you know, combined.
And so if you're saying we could actually do more good by making this sort of one-click change, I mean, the fact that you can make this double reshare and then after that you have to copy and paste, couldn't they implement that immediately? I mean, why would they not do that?
For people who live in the United States, we take for granted the fact that there are some
countries in the world where 35% of all the content people see in the newsfeed is just
reshares, right? That in some places, people write a lot less original content or create a lot
less original content. And so reshares take up a lot more of the news feed in those places.
So anything that decreases the volume of reshares decreases Facebook's profits by little tiny
bits each time. So maybe that change would decrease Facebook's profits by 1%. And Facebook has come out,
like Facebook has trouble admitting that they have problems, but also it's a thing of every dollar
matters to Wall Street. And Facebook, because we haven't stepped in and said, hey, like, this is a
collaborative process. Like, we're going to get congressional oversight involved. Facebook has been left
optimizing for the shareholder interests instead of the public's interests.
It's hard for me because I'm like, this is the thing that they should implement tomorrow. I mean,
basically this is something that is content agnostic, language agnostic, and something that would
simply, I mean, we're only talking about reducing their profits by 1%. And this would make the world a lot
safer because you would just really rank down the kinds of things that are shared. I mean,
I've shared in this podcast before. There's sort of a lore story about Steve Jobs that when someone
showed him one of the upcoming podcast apps for the iPhone, and someone said, hey, let's make it
so that there's a news feed. So you can see the other podcasts that people have listened to. You can see
what your friends have listened to, then you can comment and like on it.
And he immediately responded, no, why would you do that?
If something was so genuinely important and worth listening to, people would say,
I'm going to copy and paste a link to this podcast episode, and I'm going to send it to a friend.
And I just think about his belief in the genuine quality of something, standing out,
that it's not even just that we'd have an instant reshare button to 100 people, but that something would have to rise to the level of being so relevant that I would send a link to a friend. And if I think about the things in my life
or things that I've been sent by friends, you know, the quality is so much higher. If I think
about The Social Dilemma, so many people texted a Netflix link to their friend and said, you have
to watch this. I remember getting texts from everybody and everybody hearing about it through
text. I mean, you know, Frances, your testimony was sent around by text to everyone around
the world because it was just so riveting and so compelling. And I think if we lived with
the standard of what is so genuinely worth sharing. It's almost like the worth sharing movement
as opposed to the, you know, sharing everything, every little minor thought, every little minor
twitch, every little neuron firing off. Having that share to everyone just creates a mass
society of noise. And I feel like that's what we're living in now. It's just a mass society
of noise. And this is a tiny, tiny change that would make an enormous difference in every country,
in every language. And it's not a one-click fix to the world, but it might be one-click safer.
Yeah, I think it's one of these things where this is a change that Facebook could show
that they were serious about decreasing extreme polarizing divisive content very, very easily.
And they could show that they're willing to trade off a very small amount of profits for
the safety of the platform.
And I think it's a wonderful content-neutral, language-neutral solution.
It's not about picking winners or losers.
It's not about censorship. It's about changing the dynamics of the platforms, so that it's less twitchy and reactive, and it allows
us to be thoughtful. And I think we want social media that helps us connect with people we care
about. We want social media we enjoy. And I think that all those things are true about this
change. So let's make this change. Let's make Facebook one-click safer. The Center for Humane Technology is launching a campaign to pressure Facebook to allow a maximum of two levels of sharing per post. Back to Donella Meadows' framework, this lower leverage point is one we can push on
immediately while we advocate for longer-term systemic reforms. You can join our campaign at
one-clicksafer.tech. And if you work at Facebook, we know there are so many of you who want
to help. Share this episode. Advocate internally for making Facebook one-click safer.
Check out one-clicksafer.tech.
Frances Haugen is a specialist in algorithmic product management
and an advocate for public oversight of social media.
She believes the problems with social media are solvable
and that we can design social media that brings out the best in humanity.
It took extraordinary courage and personal risk
for Frances to blow the whistle against a $1 trillion company.
Please support her whistleblower protection
by donating to Whistleblower Aid. Visit gofundme.com slash F slash Facebook dash whistleblower.
Your undivided attention is produced by the Center for Humane Technology,
a non-profit organization working to catalyze a humane future.
Our executive producer is Stephanie Lepp,
and our associate producer is Noor al-Samurai.
Dan Kedmi is our editor-at-large,
original music and sound design by Ryan and Hayes Holiday,
technical support on this episode from Count Elrich,
and a special thanks to the whole Center for Humane Technology team
for making this podcast possible.
You can find show notes, transcripts, and much more at HumaneTech.com.
A very special thanks goes to our generous lead supporters,
including the Omidyar Network, Craig Newmark Philanthropies,
and the Evolve Foundation, among many others.
I'm Tristan Harris, and if you made it all the way here,
let me just give one more thank you to you
for giving us your undivided attention.
Thank you.
