Unlocking Us with Brené Brown - Dr. William Brady on Social Media, Moral Outrage and Polarization
Episode Date: March 27, 2024. This is the second episode in our series on the possibilities and costs of living beyond human scale. In this episode, Brené and William discuss group behavior on social media and how we show up with each other online versus offline. We’ll also learn about the specific types of content that fuel algorithms to amplify moral outrage and how they tie to our search for belonging. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Hi, everyone. I'm Brené Brown, and this is Unlocking Us.
So we have a very interesting conversation today. Everybody in the series, I could talk to them for
hours and hours, but I keep getting the fish eye from folks here like, wrap it up, we're hitting
an hour. We're doing a series, and this is a little
bit different. We're doing a series that's going to cross over between Unlocking Us and Dare to
Lead. And we're going to talk about the challenges, the possibilities, the costs of living beyond
human scale. And when I say living beyond human scale, I mean, we are really socially, biologically, cognitively, spiritually
wired to be in connection, kind of IRL in real life with each other, with our families, with
our friends, with community. And we are living in this environment of social media, artificial intelligence, machine learning. We have access to 24-hour news and
50 different channels, all with different ethos and ethics and financial models. Everybody wants
our time and everyone's saying what they think we want to hear. And it just seems beyond human scale right now. And so
I'm talking to folks who can help us make sense of it and help us kind of, I don't know, get
underneath the machine to figure out who we are and what we're seeing and why it can feel so
overwhelming. And if we can pull back, what are some of the great possibilities? Today,
specifically, I'm talking to Dr. William Brady. from October 9th to October 16th get amazing deals on shoes and boots on sale at 30-40% off
and you can shop new styles
during the Macy's Fab Fall Sale
from October 9th to October 14th
shop oversized knits, warm jackets
and trendy charm necklaces
and get 25-60% off on top brands
when you do
plus get great deals on cozy home accessories
from October 18th to October 27th.
Shop in-store or online at Macy's.com.
Dr. William Brady is an assistant professor of management and organizations at Northwestern University, in the Kellogg School of Management. His work really sits at the intersection of emotion, group behavior, artificial intelligence, and social networks. He really wants to understand, from a behavioral science and computational social science perspective, what's happening with us, especially online, and what's happening with us when we get into groups.
He is published in many, many academic journals, and you possibly have seen his work in the New York Times, the BBC, Wired, the Wall Street Journal. He has made a lot of contributions, and he's very early in his career. He's been selected by the Association for Psychological Science for the Rising Star Award.
He has a BA in psychology and philosophy with distinction from UNC Chapel Hill.
He's got a PhD in social psychology from New York University, and he just completed a postdoctoral
fellowship from the National Science Foundation where he worked at Yale before he got to Northwestern. And we're
really going to talk today about one of his areas of expertise, which is moral outrage and moral
outrage on social media platforms, the differences in how we show up online versus offline, how
social media algorithms amplify moral outrage, why and how we're rewarded for engaging in that kind of behavior online, and what that looks like at the ideological and political extremes on both sides. And we are also going to talk about bots
and how bots are used to troll and divide nations and divide people.
It's a really interesting conversation. I'm glad you're here. Let's dig in.
William Brady, welcome to Unlocking Us.
Thanks so much for having me.
I'd usually say my pleasure, but I'm like, oh, my pleasure and my pain to have you. Your research is, I guess it's a lot of things. It's important,
very relevant, and kind of scary. I feel the same way as a researcher,
although I hope by the end of the conversation I can convince everyone it's not all bad. But
there are things we need to think about. There are things we need to think about,
and I'm not sure that we're incentivized to think. So before we get started,
tell us your story. Where are you from? Where'd you grow up? How'd you end up here?
Yeah, well, it's funny. I grew up in North Carolina in the Bible Belt in the 90s, and
I think that's actually where my interest in studying moral psychology came from. So in other
words, how we come to hold the moral
views that we do and how we interact with other people surrounding those views and the group
identities that we form. Because if you grow up in that context, that's one of the most salient
things. When you meet someone, they ask you, what church do you belong to? And if you're not part of
that group, it also comes with a lot of pros
and cons there. So, I was always interested in how we moralize things. And then growing up in
college when Facebook literally first came out when I was a freshman in college, it struck me
that there's some interesting connection with this new social media context and how we talk about moral issues and
political issues. And actually, my entry point into the research I do now specifically was my
history as an animal rights activist. And I thought, well, social media actually is a good way to
get information out there about this cause that I care about. And at first, I was super excited about it.
But I think as we've seen, there's a lot of pros and cons of political activism on social media.
In some ways, it can really raise awareness of political issues. But then on the other hand,
lots of toxicity is involved in that. And I think we're going to talk about some of my research on why that is today.
I'm going to take you back into your story, if that's okay.
Of course.
Usually this is where the researchers are like, can we get to the data? But I'm like, we can,
but as a qualitative person and ethnographer, let's go back. So it makes a ton of sense. Like, I didn't know that you grew up in the Bible Belt. I mean, as someone who studies social learning as it impacts moral outrage, what was your experience? Were you a cerebral kid?
I guess I'll go a little bit more into my family details. My mother is actually Jewish, and that was kind of interesting. My dad was Catholic.
And so, part of me was always a little, you know, what are the assumptions going into this thing
that we all just take for granted, going to church and these Christian beliefs. And I think
the thing that got me thinking a lot actually were some negative things that I saw associated with
Christianity as it is in the South. And of course, this is not to say there aren't a lot of positives, I should say. Like, the sense of community it gives people is amazing.
But a specific story I remember is
when I first started going to my high school,
there were all these Christian pastors
that would somehow, they were allowed onto our campus
and they would be basically holding up these signs
that were like,
LGBTQ people, of course, they weren't using those terms, should burn in hell,
and like women should not be allowed to get abortions, etc., etc., all the kind of major evangelical talking points.
And it was just interesting because some of it, I would even consider hate speech, and it was just sitting there on our campus.
It was just not questioned whether or not it should be allowed because it was an example of religious freedom. And these
are the kinds of stories where I started thinking about how can social norms develop where we just
don't question that this is something that is defensible or is allowable. And it's just
interesting to me when you grow up in a culture
where these are the norms, you just kind of go along with it. And some people believe it,
some people don't. But at that time anyway, it was something where even leadership at the high
school didn't stop to think, maybe this is offensive to some people in our community.
And I really do think that's a function of social norms and social learning. In other words, what we come to find as common and appropriate in our community.
Were you at a Christian school?
This was a public school in North Carolina. I guess I don't have to say which one,
but it was one of the biggest in the state.
Wow. Okay. So you didn't have the language that you have now, obviously, as a professor and a researcher and very prestigious academic background.
Were you emotionally reacting to that as a high school student or were you thinking about it?
Was it both a cognitive questioning and emotional questioning or were you already a thinker?
Well, I was 16, probably had more hormones going on than I do now. So, mostly it was emotional,
and it did lead to some confrontations because there were a lot of us, 16, 17, 18,
not everyone was fine with what was going on. And so, yeah, there were some confrontations because there were a lot of emotions running high. I mean, you have gay people in our school who are feeling
attacked and they should feel upset. And then you have people who grew up as fundamentalist
Christians who feel outraged at someone else's outrage. So, it was an interesting time. I
remember in school, the kind of response
from the school leadership was, oh, well, everyone needs to get along. And it's like, well, okay,
as a teenager feeling very emotional, why should I try to get along with what is clearly an example
of hate speech? So yeah, at that time, I think I initially had an emotional reaction and then going into
college, learning about moral psychology, learning about ethics and philosophy, that
was my entry point into thinking about this from an intellectual perspective.
But I think actually there's, and it's funny because now I also study the role of emotions
in our moral psychology and our social media behavior. And I think it's important to have those emotional reactions as a basis to understand
what is going on, because it can help you also understand what's going on for the other person
that you might not agree with and why they can say, oh, I don't see what's wrong with this,
because for them, they've learned something different. They've learned a different rule of
what is appropriate, for example. It's really, as someone who studies emotion,
it's really interesting to hear you say emotion as a jumping off point for, I mean, I guess the
affect or emotion underlying moral outrage on both sides is what we have in common. Is that kind of what you're saying?
That's right. And the interesting thing that I find in my research is no matter how much
people disagree with the moral view of another political group or another person,
moral outrage tends to be a universal signal. We can still recognize that that person is upset or they feel offended, even if we completely disagree with the cognitive component. Like,
what are they actually believing and what are their views? So, moral outrage serves as this
universal signal that communicates to people, I'm offended, you're offended, and it actually
allows us to communicate about our social norms.
Wow. Very interesting. And I really appreciate you,
I don't know how often you talk about your high school experience as a platform
for being a scholar now, but I think it's helpful. It's important to me, and I think it's also
a prophetic tale because right now there's a fight in Texas to put pastors in public schools.
I wish we could look back on history and
say, wow, we've come a long way, baby. But as the feminist t-shirt used to say, we haven't come a
long way and don't call me baby. It's like we have not. We're still here. All right. So I want to
talk about two of your articles. I will say it was hard to narrow them down because even going back
to kind of, I think, would you say it was a postdoc at Yale? That's right. Yeah, postdoc at
Yale. Even that research for me has been really interesting. So the two articles I want to talk
about today are, one is kind of a summary of more academic studies, "Social Media Algorithms Have Hijacked Social Learning." And this is really about, I would say, the intersection of moral
outrage and social learning. And then after we talk about that,
I do want to talk about, as we come into the 2024 election, I do want to talk about a second
article. Authorship on the second article was Almog Simchon and then Jay Van Bavel. Okay. And that article is "Troll and Divide: The Language of Online Polarization." So I feel like we have a great segue into "Social Media Algorithms Have Hijacked Social Learning." The subtitle here is we make
sense of the world by observing and mimicking others, but digital platforms throw that process
into turmoil. Can anything be done? So we've talked a little bit about what motivates your
interest in studying moral outrage. I want to start with some definitions because it feels like that's important. How would you define moral outrage? And then how would you define social learning? Before we get to the intersection of where all hell breaks loose between these constructs, let's define each one.
So moral outrage, I think it's best to consider it as three different components.
And this is generally a good way, I think, to consider emotions. So first of all, we can think
about the eliciting conditions. In other words, what triggers moral outrage? So the key characteristic
of what triggers moral outrage is that we detect that there's been a transgression against our sense of right and
wrong. So, it's fundamentally linked to what we talked about earlier, our sense of morality.
And then it comes with a typical experience, which is usually described as a mixture of anger and
disgust. So, consider it like negative, high arousal. And then it comes with also, I think, something specific to moral outrage.
These outcomes that are very relevant to the transgressions against our sense of right and
wrong, we want to punish people. We want to hold them accountable. So, when you put those three
things together, an emotion that is related to anger and disgust, triggered by a breach of our
sense of right and wrong, it usually leads us to punish
or want to hold people accountable. So imagine you're a vegan, you see something about factory
farming, it elicits this feeling of maybe anger or disgust in you, and then you typically want to
hold someone accountable or punish. Those are the key characteristics of
moral outrage. Okay, so I'm already flagged as an emotions researcher by something you said.
I want to check in about something that I've seen. I don't research this area, but I think I've seen
it. You said disgust, and what was the other word you used?
Anger. Anger and disgust. Is it fair to say that I often see contempt?
Yes. Some of these fine-grained distinctions I think you can definitely make. I'm trying to
paint a picture of outrage as this kind of constellation of things that usually we describe as outrage.
And I think contempt is a great example
where what's the difference between outrage and contempt?
It's kind of difficult to say, especially,
I think the key though is like,
if we're in the domain of morality
and you're reacting to some kind of transgression,
I think that's when you get into the moral outrage realm.
And contempt could
be considered a part of that for sure. The reason why I'm asking is when you say
disgust and contempt, I get anxious because it leads me to this question right away. And I'm
trying not to get too inside baseball with emotions, but it leads me right away
to worry if dehumanization is a slippery slope, if moral outrage, if what we feel is disgust,
which can be inherently dehumanizing, and part of it is contempt, which is like,
I'm so much better than you that your opinion is you're not even worth it to me.
Like, what was the example you used, farm?
Some example of some cruelty happening in a factory farm environment.
Factory farm, cruelty.
And then we move from witnessing to experiencing emotionally.
And then it makes sense to me, but I want to check out that I'm following you correctly, that the next phase is punitive.
Right. So I think it doesn't have to lead to that, but it often does, or at least it motivates us to think in those ways.
And to your point about dehumanization, I think it definitely can be associated with dehumanization.
And in fact, both work from my group and a couple others have shown that outrage in the context of online spaces is associated with hate speech.
And I think that's related to your point.
It can motivate us to lash out in these ways that are dehumanizing. I do want to make a distinction, though. Outrage isn't inevitably that. I think it does potentially have some upsides. I've always thought about it having both good and bad outcomes that can come with it, and we can talk about that more. But yes, it's certainly the case that it has been linked to things like hate speech in online spaces, and then in the offline world, of course, things like dehumanization and violence even.
Yeah, I do want to talk about that because
I'm grateful for moral outrage in some cases, and I think it can be an important catalyst for
social change. And so I love that you're holding some inevitable tension, as most researchers do, around binaries of if this construct's all bad or all good.
Because I do thank God for moral outrage.
And oh my God, it's a living hell.
It can be both, right?
Okay. So let's talk about your research and how social learning processes amplify online moral outrage.
Definitely.
So one of the things I've thought a lot about as a researcher is the following.
What are the differences in the social world or the social
context when we're having face-to-face conversations versus when we're in a social
media environment? And on one hand, there's a ton of similarities, right? Basic social psychology
applies in both cases, but there are some things that are unique to social media platforms. And
some of them are unique in the sense that they just literally don't exist in offline settings.
So, for example, the influence of algorithms, which we're going to cover extensively.
But then other things are like more continuous.
In other words, there are, for example, group size.
There are groups in offline settings, of course.
But on social media, groups are massive. We're usually in much
larger social networks. And the other thing that is related to social learning is the idea of
social feedback and social reward. Now, let me break that down a little bit. When we're in
offline face-to-face settings, we're actually highly attuned to how another person is responding
to us and feedback they're giving us. So even in our interview right now, if you make a joke,
if you smile or seem positive, it sends a signal to me, we're having a pretty good rapport. But
if you were like, you know, grilling me, then maybe I would say I need to change what I'm
talking about. Just an example. But in the online case, it's really interesting because
what I've argued is that you get this social feedback that's very streamlined in the sense
that it's quantifiable. We know exactly how many likes, how many dings we're getting when we post
something. And it's also delivered in ways that actually mimic what has been described in
psychological research as variable reinforcement patterns.
And basically, what that means is we get the likes and the shares delivered to us in ways that
actually make us more likely to pay attention to them and to kind of be affected by those in ways
that affect our behavior. And so, what I've studied in some of my research is the fact that when we are getting rewarded for expressing moral outrage, people give us likes, they share the content.
It actually predicts us being more likely to express outrage in the future.
So it turns out we're very sensitive to that social reward, especially when it comes to our moral outrage expression. So there's a key social learning process there
because of the way social feedback
is delivered to us on social media.
We learn to express more.
But the last thing I'll say about this
is it turns out to be even more complicated
because moral outrage,
as some of my research has shown,
is some of the content that's most likely to get amplified by the algorithms that deliver us content on social media.
And so now it becomes this feedback loop where, in general, I might get rewarded for moral outrage by people, but the algorithms are amplifying that content. So people are more likely to reward
outrage in the first place because the algorithms show it to them. And so now I'm actually getting
extra reward for expressing more outrage. And what is the inference that I make? Oh,
everyone likes this, so I should continue to do it. And it's not necessarily a conscious process
like that. But at the same time, we've all had
that experience where certain posts, we get all this feedback and we're like, oh, that was a good
post. And you might subtly start to do those kinds of posts more over time.
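A minimal sketch of the reward-learning dynamic Brady describes here: social feedback (likes, shares) nudges the tendency to post outrage upward, and if an algorithm amplifies outrage, that feedback gets inflated. The update rule, payoffs, and numbers below are illustrative assumptions, not Brady's actual model.

```python
import random

def update_outrage_tendency(tendency, reward, learning_rate=0.1):
    """Move the tendency to post outrage toward the social reward just received.

    tendency: current probability (0 to 1) of posting outrage content
    reward:   normalized social feedback (0 = ignored, 1 = heavily liked/shared)
    """
    return tendency + learning_rate * (reward - tendency)

# Toy simulation: outrage posts are assumed to earn amplified feedback,
# neutral posts little, so the tendency drifts upward over time.
random.seed(0)
tendency = 0.2  # starts out posting mostly neutral content
for _ in range(50):
    posts_outrage = random.random() < tendency
    # Assumption: the algorithm amplifies outrage, so it draws more likes on average.
    reward = random.uniform(0.6, 1.0) if posts_outrage else random.uniform(0.0, 0.3)
    if posts_outrage:
        tendency = update_outrage_tendency(tendency, reward)

print(f"Outrage-posting tendency after 50 posts: {tendency:.2f}")
```

Under these assumed payoffs the tendency drifts upward, which is the qualitative pattern described in the conversation; the actual studies estimate this from observed posting behavior, not a toy simulation.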
What's the relationship between social learning, and these might be constructs that you haven't studied, but I would love
your point of view, between social learning and kind of an ego fragility and ego protection
of everyone trying to navigate online. Does that make us even more vulnerable and susceptible to it?
Yeah, that's interesting. I don't study this, but my wife is actually a therapist, so I thought about this. But it is interesting. I think
one of the key things that is communicated to us in this context of social learning,
where we're basically responding to reward or punishment that we see, or we're responding to
what we see in the environment, how common, how appropriate is it? The things that we do, we're very sensitive to
whether we're getting the likes or not. And if you are someone who has low self-esteem, for example,
maybe your ego is a little fragile, actually there's research on this. You're even more
sensitive to the variation in social reward
that comes from the platforms. But the thing that's interesting is, how much social reward we get is only partially a function of the things that we did. There might be random factors that come into play, like when did you post the content, did the algorithms show that content to people. We're not always aware of that.
And it's interesting because that can govern how we feel.
Did we get less social reward or something like that?
So people who are more insecure can be the most susceptible to variation in social reward.
That's generally a negative thing because it actually is a poor indicator of how much
people approve of the things that you're doing.
And of course, if your whole behavior is driven by trying to seek approval,
it's going to send you on all kinds of strange and random paths.
And I think we've seen examples of that from social media figures who have kind of
risen to power and sometimes have a strategy to appeal to their audience.
In your study, did you find differences in outrage expression
between ideologically extreme networks and other networks?
We actually do see some subtle differences here
in terms of how people are responding to social information.
So for people who are already in ideologically extreme
networks... How would you define that? Let me stop you there.
Oh, right, right. Ideologically extreme is referring to their political ideology. And so
someone who is more extreme would be someone, say, who is extremely left on the political spectrum or extremely right
on the political spectrum, just referring to U.S. politics. We can actually estimate that in online
settings by looking at the accounts people tend to follow. And because we know the political
ideological extremity of the political figures that they follow, we actually can make an inference to say,
hey, if you tend to follow 95% far-right people,
you're most likely to be more politically extreme yourself.
So that's how we determine that.
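As a rough illustration of the estimation step he just described: a user's ideological position can be approximated by averaging known ideology scores of the political accounts they follow, and extremity is the distance from the center. The account names and scores below are hypothetical placeholders, not the measures used in the actual studies.

```python
# Hypothetical ideology scores for political accounts, from -1 (far left)
# to +1 (far right). Real studies use validated estimates for political elites.
ACCOUNT_IDEOLOGY = {
    "@far_left_pundit": -0.9,
    "@center_left_rep": -0.3,
    "@centrist_outlet": 0.0,
    "@center_right_rep": 0.4,
    "@far_right_pundit": 0.9,
}

def estimate_user_extremity(followed_accounts):
    """Estimate a user's ideological position and extremity from who they follow."""
    scores = [ACCOUNT_IDEOLOGY[a] for a in followed_accounts if a in ACCOUNT_IDEOLOGY]
    if not scores:
        return None, None
    position = sum(scores) / len(scores)  # signed left/right estimate
    extremity = abs(position)             # distance from the political center
    return position, extremity

position, extremity = estimate_user_extremity(["@far_right_pundit", "@center_right_rep"])
print(f"position={position:+.2f}, extremity={extremity:.2f}")
```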
And what we find is that
people who are in these extreme networks,
it's actually interesting,
they are less sensitive to the variation
in likes and the social reward that they're getting.
They just keep expressing outrage no matter what.
And there's two reasons why we think that's going on.
One of them we've studied very well, which is that if it's a common thing in your group,
so if you're in a politically extreme network, of course there's more outrage that's spreading around in your network,
then you just know that it's something that is normative. So you keep doing it regardless of the likes that you get. Also, it might be less likely to draw likes
because it's pretty common. It could also be the case, though, that people develop almost this
habitual way of expressing outrage just because they learn this is how I communicate in this network. And they're not even really sensitive to like
punishment or reward. So that's less studied, but those are the two explanations that we draw.
Does that mean if we look at, let's just use the standard bell curve distribution of data, like for extreme. So let's say we have 20%, and it's probably not 20%, it's probably less than 20%, but we have 20% far left, 20% far right. Are you saying that the 60% in the normal bell of distribution, the moderates, are more sensitive
to the social learning feedback than the tails? Correct. That's what we find in our studies,
specifically when it comes to outrage expression. So I should add that caveat. But it makes sense
when you think about it as well, because people who are not on the extremes, there's almost more
room to express more moral outrage. Not most people, I mean, actually, some people do come to the platforms already, like, riled up and ready to attack people, and those are the people who are most likely to express outrage in the first place. But there's also a learning effect for people who are more moderate, where depending on the networks they're in and the types of social reward they're getting, if they do choose to express outrage, they're actually more sensitive to that feedback.
That makes total sense to me, like in my own experience with my own life.
I mean, what do you make of it? If you take it off your researcher hat and just your William hat with your wife who's
a therapist, does it make sense to you just on a really personal level how in that normal distribution, 60%, we're testing ways of being and we're testing ways of
belonging and we're more susceptible and vulnerable to that?
Yeah. I mean, first of all, it makes total sense. And by the way, you hit the nail on the head with
that word belonging, because I think a lot of what describes some of these processes is our natural tendency to try to find belonging
vis-a-vis groups that we're in. So if you are someone who is trying to discover your
identification with a political group, I actually have some research on this. It turns out if you
express outrage,
that's one of the easiest ways to signal to other people that you are a genuine and committed group member. Not to say that people do this strategically, some people might, but there's
all these social layers going on here. And so it's something that is inevitable. As humans,
we are inherently social creatures and in every single interaction we're in,
unconsciously, we are scanning the social environment, taking in social information,
figuring out, how do I relate to this person?
How do I relate to this group versus the other group?
This is something that is inescapable.
Yeah, and I would say that belonging isn't our hard wiring.
And in the absence of belonging, there's a lot of suffering. And I would say that a lot of people
have learned to leverage that really well in terms of offering belonging in exchange for
certitude and moral outrage. I think that's part of it. Would you agree or no?
I would agree. And I'm not a religion scholar, but there's no doubt that going back to my early experiences we discussed in the beginning, there's no doubt that one of the functions of
Christianity in the South is it gives people this community and this belonging. And
it can come with a lot of good things, but it can also come with a lot of dark things
because one of the most fundamental findings
in social psychology is as soon as we identify with a group,
the consequence of that is we feel more belonging
with the group, but we also contrast ourselves
with outgroups.
And it's really interesting to see
how quickly our brain does
this. You can assign people to completely arbitrary groups. Like if we come into an
experiment and I say, hey, guess what? You are a circle and I'm a square. All of a sudden,
we view our groups as a competition, even though they're completely made up.
This is a fundamental feature of our psychology.
Yeah, it validates so much from the
belonging research I've done and the desperation to belong, especially as we feel collectively
more uncertain, more emotionally dysregulated. I wonder sometimes, can moral outrage be an
emotion-regulating, can it serve an emotional regulation function?
For sure. I'm glad you brought that up because I've done several studies where I actually
message people on Twitter and I ask them, like, by the way, like, why are you expressing
moral outrage?
And also I study how outraged people are versus how outraged people perceive the author to
be.
And we can talk about that later.
But one of the things that comes up in this research, people actually, it's surprising, people will gladly express to me why they're
expressing outrage. There's a cathartic component for a lot of people. And I think in that sense,
it's an emotion regulatory tool because by expressing this, at least in the short term,
it actually can make you feel better about getting it out. And I think getting back to
our conversation about what are the positive functions of moral outrage, the truth is in the
U.S., we live in a highly unequal society, whether it's race, gender, economics, and a lot of people
have feelings that they need to get out and in a way either to just challenge the status quo or express feelings about that.
And I think in that sense,
outrage really serves this emotion regulatory tool.
You know, is it sustainable?
Does it make you feel good in the long run?
I'm not sure about that,
but certainly in the short term, it can serve that role.
Yeah, and I think one of the systems of oppression
in the country is a pathologizing of outrage where it is completely warranted.
And so what we're saying makes sense. Tell me why we lean on prime information in social learning, and what is prime information? God, this was so crazy to me. This was so good.
Yeah. So let me now get a little bit into
the psychology of social learning, which we were alluding to earlier.
Yeah. One of the really interesting things about how we learn from other people is that
we actually don't do it in a way that is always entirely accurate. So what do I mean by that? Actually, we have biases that
drive our social learning. So you referenced the term, we introduced this term prime information,
which is an acronym that refers to prestigious, in-group, moralized, and emotional information.
So four types of information. Can I stop you and say it one more time?
Oh, yeah. The prime is prestigious,
in-group, moral, and emotional, right? That's right. Okay. That's right. So, the reason why we
focus on those four types of information is because it turns out we have specific biases
to learn from those types of information that are very well studied in the social science literature.
So, for example, why would we be biased to learn from someone we view as prestigious?
It turns out that this is actually very functional and it leads to efficient social learning over time.
The reason is because what does prestige usually signal or what does it usually
represent? It represents someone who has been successful in some context. And so, if you are
learning and choosing who you should be learning a skill or some information from, you actually
want to learn from the successful person because then you don't have to learn all the mistakes
that other people are making, right? And over time, through evolution, there's evidence that we have developed this bias because of that function. And you can actually make the same argument for the other dimensions. So, for example, why is it useful to learn from in-group members rather than out-group members? In-group members have better knowledge of the immediate environment that you're in. That's why you're in that group. And so it's most efficient.
And then finally, when we think about moral and emotional information, those are the types of
information that tend to be very relevant to either social or physical survival. Emotional
information, you need to be able to pay attention to snakes, right? This is a classic example.
Yeah.
Because if you're not, you're going to get bitten and you might die. So,
our brain actually prioritizes moralized and emotional information because it generally
helps us navigate the world. So, the really interesting thing about this is, okay,
we have these social learning biases. What happens when you attach those to an environment
where you have algorithms on social media
that have a specific goal in mind?
Their goal is to amplify content
that draws in our attention and draws in our engagement.
Now, why does that happen?
Or why is that the goal?
Because that's how social media platforms create advertising revenue. And so the interesting thing about that is, well, if I asked
your listeners, what type of information do you think is most likely to draw on engagement? Well,
the answer is, if you've been paying attention, the type of content, the prime content that we're
biased to attend to and learn from. And so as a side effect of this goal to promote engagement,
incidentally, the social media platforms are amplifying that prime content, the prestigious
ingroup, moral and emotional. And so what we argue is it actually creates a situation where
there's this feedback loop. We naturally interact with that information.
We click it, we share it, we post it ourselves.
The algorithms amplify it because of how they're designed.
But then what happens is the environment
actually gets oversaturated with prime information
and we're just learning to produce more.
And so that's why in several contexts, especially when it comes to politics, we basically begin to learn that it's common or appropriate to express prime information more than it really is. And that can lead to conflict rather than to the collective problem solving and cooperation that, in offline settings, this bias usually helps us navigate.
And the reason is because when you think about it, negative moral information in the offline world is a lot more rare.
For example, if we're trying to detect cheaters in our social groups, they tend to be rare.
We punish them, they go away. But in the online world with this artificial inflation, now it's like,
and we've probably all had this experience if we ever try to read politics on social media,
it's like the whole entire world is burning, right? Like there's so much negative and moral
information and it can be really taxing for someone who is not as plugged into that community.
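To make the mechanism concrete, here is a minimal sketch of an engagement-only ranker: it sorts purely by predicted engagement, and because prime (prestigious, in-group, moral, emotional) posts tend to carry higher predicted engagement, they float to the top as a side effect. The posts and scores are invented for illustration; this is not any platform's actual ranking code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # platform's estimate of clicks/shares
    is_prime: bool               # carries prestigious/in-group/moral/emotional cues

def rank_by_engagement(feed):
    """Engagement-only ranking: whatever is predicted to get clicks goes first."""
    return sorted(feed, key=lambda p: p.predicted_engagement, reverse=True)

# Invented feed: prime content is assumed to have higher predicted engagement,
# so the ranker surfaces it first even though no one asked for more outrage.
feed = [
    Post("Local library extends weekend hours", 0.20, False),
    Post("You won't BELIEVE what this politician said (outrageous!)", 0.85, True),
    Post("City council passes budget after routine vote", 0.15, False),
    Post("They are EVIL and must be held accountable", 0.78, True),
]

for post in rank_by_engagement(feed):
    print(f"{post.predicted_engagement:.2f}  prime={post.is_prime}  {post.text}")
```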
I want to play back what I think I heard you say, and what I think I read; I've read this article several times. Prime, us kind of privileging prestige, in-group, morally and emotionally aligned content, that hardwired instinct completely becomes
screwed up and broken when it hits an algorithmic social media world. And the algorithm primes
shit that should not be primed. Is that what you're saying in a more sophisticated way?
I think that's definitely one way to think about it.
I would make it a little more nuanced by saying
the algorithms tend to promote content
that we are attracted to naturally, right?
And so think about like a car wreck.
When we drive by a car wreck,
we all turn our heads and we pay attention to it
it's just something, like, no one doesn't do that, like, everyone does that. But that doesn't mean that we want to keep seeing car wrecks, right? Like, that's just not what we would prefer. In fact, I have some work on this led by Steve Rathje, a postdoc. Basically, what we show in a survey is that everyone realizes that there's a lot of this negative emotional and moral content online. And we even click on it a lot of times, but people report that they don't want to see it as much as they do.
So we often recognize that there's this discrepancy between what we naturally get drawn into and what we prefer to see.
But the algorithms don't know that at the moment. And so even if you
click on something and you don't necessarily want to be like, oh, I just wanted to like,
I couldn't help it. I had to check that out. They're going to keep promoting that.
And I think that process helps explain some of the stuff you're talking about
and all kinds of other content. And actually, there's one other thing I would mention about
this that's very important to this conversation, which is that it also explains why a minority of extreme political individuals often dominate the political social media space, especially on Twitter/X and Meta.
Because the algorithms are amplifying their content, it tends to have more prime information baked into it, especially the moral and emotional.
And so they're getting amplified as if these minority extreme users are the majority.
And that is what we argue really skews our understanding of social norms,
what is common in the environment.
We actually think, and I have a study on this,
that these really outraged people are more common than they actually are.
That starts to mess with our understanding of groups.
Okay, let's talk about solutions. Yeah, because I get really nuts around this, I think.
Explain bounded diversification to me.
Yeah, so it's kind of a mouthful, but going off of what I was just talking about, we know that one of the reasons is that things like polarizing conversations, things like toxicity, things like morality and emotion are produced by a very small minority group of extreme users. In fact, there was a recent study showing that, like, 75% of that content is produced by just, like, 20% of users or even less. And so the whole point behind this idea
of bounded diversification, or what I actually now call representative diversification to be
more intuitive is that we want to try to change the environment so that people who are
not on the far tails of the extremity, if you imagine that normal distribution that you referenced
earlier, they're not overrepresented. So we want to actually have the opinions of less extreme
people also in the mix so that if you're a user on the platform, you actually have a more accurate
social understanding of what different groups are believing.
Because you want to know not only what your group actually is thinking, because that's how you tend to conform, but you also want to know what the outgroup is actually thinking. Say your perception of Republicans is that they're all, like, hate-speech-spewing, fear-mongering extreme individuals.
There's data suggesting that that misperception, because that's not actually true, it makes you
more polarized and it makes you dislike the out-group more. Of course it would. And so our
goal is to think about how can we design algorithms that are counteracting this over-representation
of extreme individuals, both from the in-group and out-group, which is
why we have that idea of diversification. Most people in online context, you are getting exposed
to out-group members' thoughts sometimes, but it usually is like an extreme comment and your
in-group member is commenting on that. And that's how we're exposed to outgroups, and so our understanding gets skewed. So we want to create an algorithm that can improve this representation in people's social media feeds, especially in the election season that's coming up.
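A minimal sketch of the re-ranking idea Brady is describing: drop the far tail entirely (the bounded part), then limit how much of the remaining feed extreme voices can occupy, so moderate in-group and out-group posts are represented closer to their real share. The thresholds, scores, and example posts are assumptions for illustration, not the algorithm being built for the study.

```python
def representative_rerank(feed, extremity_cap=0.8, max_extreme_share=0.2):
    """Re-rank a feed so extreme voices are not overrepresented.

    feed: list of (engagement_score, author_extremity, text) tuples, where
          author_extremity runs from 0 (moderate) to 1 (far tail).
    Posts above extremity_cap are dropped (the "bounded" part); the rest of
    the extreme posts are limited to max_extreme_share of the ranked feed.
    """
    kept = [p for p in feed if p[1] <= extremity_cap]
    extreme = sorted((p for p in kept if p[1] > 0.5), key=lambda p: p[0], reverse=True)
    moderate = sorted((p for p in kept if p[1] <= 0.5), key=lambda p: p[0], reverse=True)
    budget = max(1, int(len(kept) * max_extreme_share))
    # Simplification: moderate voices lead, a small budget of extreme posts follows.
    return moderate + extreme[:budget]

feed = [
    (0.9, 0.95, "hate-filled rant"),               # dropped: past the extremity cap
    (0.8, 0.70, "fiery partisan take"),
    (0.7, 0.60, "another fiery partisan take"),
    (0.5, 0.20, "constructive out-group argument"),
    (0.4, 0.10, "moderate in-group comment"),
]
for score, extremity, text in representative_rerank(feed):
    print(f"{score:.1f}  extremity={extremity:.2f}  {text}")
```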
It makes so much sense to me how it happens that if you have an opinion and the only outgroup opinion you hear is so far extreme
that it pushes you to the extreme because it ratchets up fear. Oh, I'm not close to what
this person believes. I'm a hundred miles from what this person believes. And I don't even want to get close to this person. So I'm going to take a sharp turn here. Is there any incentive for
representative diversification? Is that the new way of saying like, tell the truth? Is there any
hope on your part that meta will do this? Yeah, so actually, in the upcoming 2024 US presidential election, I'm doing
a big study that is actually going to use the Bluesky social media platform. Jack Dorsey, the former CEO of Twitter, created this open-source, Twitter-like alternative. But the cool thing
about the platform is you can actually implement your own social media
algorithms.
And so I'm working with an engineer to try to implement this idea that I've been telling
you about.
And one of the key things, to answer your question more directly, is that I actually
predict that even if you do this representative diversification where you're showing more
constructive arguments from
the political outgroup, for example, it's actually still going to maintain people's engagement
because although our attention is drawn to toxic content and stuff like that, in the long term,
that stuff will exhaust users. And in fact, if you look at Pew Research, the majority of social media users
are actually exhausted by this kind of content, especially in politics. And actually, some of my
colleagues at the University of Southern California, they have this project called Social Media Index.
So, like, Matt Motyl and Ravi Iyer, they demonstrated that on Twitter and Facebook,
people are seeing a lot of this toxic content
and don't like it.
They think that there's content they're seeing
that actually they think is bad for the world
and that could lead to hate speech.
So my point is there's an incentive
at the user level to reduce this content.
And even though it's true that we all click on
like outrage-inducing stuff,
in the long term, I think user retention will not
be affected by improving some of the representation of that content so it's more socially representative.
I love this answer and I have to say it resonates with just conversations that I'm in
among people who have large followings like I do that there's for the first time a very
serious conversation about shutting it all down. I went off for a year, and I have to say it was probably the best year for me, mental health,
resetting-wise. It's a change-or-die situation, I think, for a lot of folks who are actually
influential. And I do believe that social media can be an incredible tool for activism,
for social change. I also think it's getting
increasingly dangerous without change. Is that fair? I think that's a great summary. And I also
think that these companies are aware of this, and I think they are actively trying to implement
things. But there's a fundamental tension with the advertising revenue goals of the platform. So what we're trying to do is to
provide evidence or tests that you can actually improve discourse without also just completely
removing engagement. I mean, people are not just robots. We do have these hardwired attractions
of content, but we also have goals and ideas of what we want social media to look like, even in political spaces. And I think
the goal of the algorithm we're designing is to try to get closer to what that might look like.
I mean, I think if we go back to belonging, at some point, people are going to become exhausted
from running toward the bunkers. And the bunkers are so tenuous
because if you disagree with the people behind the bunkers, you're cast away very quickly. That's
the hard part, right? And I think people want to see their ideas reflected more honestly. Do you
agree? 100%. And that's actually one of the main goals of this algorithm. If we can represent a wider distribution of people's beliefs, then what you're talking about is less likely to happen. And I totally agree. I've been frustrated by this. I'm someone who would consider myself generally progressive and on the left, but even someone like me who
considers myself that way, I've been in situations where I'm like, I don't know if I should comment
on this because I might get so much outrage toward me when I think I mostly agree with this,
but I don't know if I'll say it right. Like, we've all had that experience, and that leads to a
silencing of political views, which is just not a situation
in a democracy that we want to be in. I think if we really do believe in democratic conversation,
we need the range of views represented. Although one of the things that we argue is we can actually
cut off some of the far extreme stuff like hate speech and toxicity because that's first of all representing a minority of individuals who are going to be putting out that
stuff. We can still get rid of that because some people have argued for like wholesale diversification,
but a weird consequence of that in politics is that it means I, as someone on the left, should be exposed to, like, far-right ideology. That doesn't really make sense, right? Or for someone who's more moderate. So we argue that we should bound the diversification,
which is where that term bounded came from. That's a helpful add. So I want to move to this
because it's so connected with everything we've been talking about. So, "Troll and Divide: The Language of Online Polarization." I would say the article really investigates how trolls contribute to conflict between groups by using highly polarized language and rhetoric. Is that an accurate assessment?
Yes.
Okay. How did you assess the online polarization? Yeah. So in this study, what we were able to do is look at the political conversations
that tend to demonstrate what colloquially we might call an echo chamber effect. So
basically conversations where if you're a Democrat or you're a liberal, you're mostly
getting retweeted or you're getting shares, you're getting comments from people on the left, and really there's like no interaction
with the political right and then vice versa. We can actually look at those conversations that
have that characteristic and we can start to analyze the language that is most likely to
predict when things become polarized in that way in the sense that you're really seeing this divergence in how people communicate.
And then it allows us to create this dictionary that basically allows us to predict
when is a message most likely to create this polarization in the social network.
What we ended up discovering is that a lot of that language related to our conversation in this episode is really centered around highly emotional, moralized content.
So calling someone evil, for example, as you might guess, that's going to typically polarize and you're going to have this type of conversation.
So that's how we measure polarization in that context.
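As a toy version of the dictionary idea: score each message by how much of its language matches a list of polarizing, moral-emotional terms. The word list below is invented; the published dictionary is built empirically from language that predicts these echo-chamber-style sharing patterns.

```python
# Toy "polarization dictionary" of moral-emotional terms; placeholders only.
POLARIZING_TERMS = {"evil", "disgusting", "corrupt", "traitor", "shameful", "destroy"}

def polarization_score(message):
    """Fraction of words in a message that appear in the polarizing dictionary."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in POLARIZING_TERMS)
    return hits / len(words)

for msg in [
    "The committee released its report on Tuesday.",
    "These corrupt, evil politicians will destroy everything we love!",
]:
    print(f"{polarization_score(msg):.2f}  {msg}")
```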
Is this true or not true? Are there
dangerous actors involved in this outside of, let's say, the U.S. that are trying to divide
Americans and contributing to these really polarized conversations using the language
from your dictionary? Is that a truth or not a
truth? Well, we know, for example, in the 2016 election that there were definitely foreign
accounts that were being employed during the U.S. election. And what our paper shows,
if you analyze their behavior, they're posting a lot of content that contains
this polarizing content that we've shown empirically is associated with
polarization in online spaces. So it's hard to speak to what exactly is a strategy or what is
the intention, but we can see what has happened descriptively. Also, we see similar things,
by the way, with misinformation.
We have evidence that the Internet Research Agency,
as the organization was discovered to be called,
the type of misinformation that they're producing and that is getting shared by them
is also the type of misinformation
most likely to elicit moral outrage.
So we've studied this very carefully.
Wow, say that last part again.
Right, so the misinformation associated
with some of these troll accounts
is misinformation most likely to elicit moral outrage.
And that's when you compare it to other types of information. So, for example, political information produced by the New York Times, it elicits some outrage. But what we found pretty systematically is that the misinformation content is producing even more outrage.
And as we have discussed, that can also actually impact how it's spreading online.
So I'm just, I'm like, pause. 'Cause like, I just got this message on my Instagram. It said, you have 150,000-some-odd potential spam accounts following you. And it said,
would you like to delete them? And I'm like, yeah, delete them. And then it said,
there's too many to delete. And I was like, I don't know how to get out of this, but that's a Meta issue. And then when I got into a real dust-up a couple of years ago around my podcasting platform, when we called a liaison at one of the social media platforms, they're like, you're in the crosshairs right now with some bots. And I was like, show me an example of a bot. And when they did, I was like, no, this is a woman from Milwaukee. Look at her picture. And they're like, it doesn't work that way. Like, is there a tool for dealing with this? I
mean, I'm thinking about the election, William, and it makes me super anxious.
Yeah, for sure. I mean, we definitely should give some credit to the platforms because they've
implemented a lot of tools that have, there's evidence that's reduced a lot of the bots and
misinformation. But the problem is, it hasn't worked perfectly. And there's no doubt that
in any election cycle, there's this threat, and they're doing all they can. But it's hard,
because especially in the age of generative AI, there's just so much you can actually produce.
Now, misinformation, it's not necessarily a supply issue. So we've always been able to
produce a lot of misinformation. It's more about the consumption. But I think with generative AI,
it's more difficult to tell what is blatantly false. It's very difficult. And I think my main
concern is not so much that we're going to see a massive increase in misinformation consumption,
but that given we know that AI
exists and is being involved in the news production, we're going to start to get
confused and maybe we're going to start to get tired of having to figure things out. And it can
actually generally reduce the trust in the information ecosystem. So that's my concern
in the upcoming election. And I hope, you know, we can all just try to do our best, you know,
platforms have things like signaling the veracity of news domains, but yeah, just, you know, try
your best, pay attention to content. If it sounds fishy, maybe look it up. That's where we're at
right now. I know. I think when I hear you say that, the thing that makes me nervous, because I suffer from this a lot, is discernment fatigue.
Yes, yes.
And confirmation bias.
I'm like, is that true or not? But I think I'd like it to be true. So I'm going to go with that's true.
One thing that I read, I don't remember where it was, my last thing for you that I loved. I think this was your idea, or you and a team. I would like a little note on everything I see that says this is why you're seeing this.
That's right.
I think, actually, from polls, we know that most social media users would love more transparency.
And what you just described is specifically about, yeah, like algorithm transparency. And I think that would be really helpful to at least making people aware
potentially of some of these biases and what they're seeing resulting from the algorithms
like we've talked about. I think that's definitely one small thing that could help,
but I do think that we need more education so people just understand how algorithms are
working and how they're selectively increasing certain content over others. It actually turns out most people are not aware. They're aware the algorithms exist,
but they're not aware of the details and how it actually works.
Yeah. I had a friend tell me the other day we were walking together and she's like,
oh, do you have to use the word algorithm? I stopped listening when you used that word. I'm
like, I was like, I get it. Me too. But I think we got to figure it out. Okay. I'm going to go
through these. These are rapid fire. Are you ready? Okay. Let's do it. Fill in the blank for me: vulnerability is... in the context of online environments, in the context of William Brady.
Vulnerability is putting your strongest convictions out there and being open to having them challenged, having respect for other people who oppose you.
Damn. Just wow. Everybody in the room is like this. Whoa.
Yeah, I just made it up. Yeah, I don't like your answer, but okay. Okay, you, William, are called to be really brave,
but you can feel your fear in your throat.
It's real.
What's the very first thing you do?
Fight or flight?
I think I'm confused on the question.
Yeah, like when you're really scared,
but you have to do something,
you're going to do it because you want to be brave,
but your fear is very real.
What's the first thing you do?
The first thing I do is suck it up. And if I really need to do it, I just have to dive in
the deep end.
You go. You just go.
I go. Yep. Take the plunge.
Okay. Last TV show you binged and loved?
I'm watching Shogun right now on FX, and it's amazing. I'm totally caught up in it.
Okay. I've heard amazing things,
but you're the first person I've talked to that's watching it. Favorite movie of all time?
Having just watched Dune II, it's up there,
but I think for me as a sci-fi nerd,
2001: A Space Odyssey is my number one.
Oh, you're a pod bay doors guy.
The pod bay doors, yeah.
Okay, a concert that you'll never forget.
This is slightly niche, but there's a Swedish hardcore band that got popular in the US in the
90s called Refused. And they did a reunion tour in like 2010. And it was just the best
concert I've ever been to. I can tell you're the first to answer this one.
Favorite meal?
I love Szechuan food.
What specifically?
I love like fried tofu covered in Szechuan sauce with vegetables.
You cannot go wrong there, right? You look like you feel bad about it, but you're like, that's probably not... it's so good. What's on your nightstand?
I have two sci-fi books, the Silo series by Hugh Howey,
which I really am into right now.
And I also have a noise machine
because I love sleeping to white noise.
Me too.
A snapshot of an ordinary moment in your life
that gives you real joy?
Every day with my dogs,
I have two Husky Malamute Shepherd mixes,
big fluffy dogs.
They love the snow. Definitely
that. And last question, one thing that you're deeply grateful for right now?
My wife got me into individual therapy and I've been going for maintenance. It's been so great
to be able to just check in, have a time every week where I get into my emotions. I think as
men, we don't often have the chance or the
desire to do that, but it really forces you to, and I've been really loving it.
That's like my favorite answer that I've ever heard. It's so good.
William Brady, thank you so much for being with us on Unlocking Us. This was
important, enlightening, and just real. And so thank you for taking really heady stuff and making it accessible for us on
this podcast because you said it, we don't understand it. And a lot of people are doing
great work, but not translating it for us to consume and think about when we jump on our
social media platform. So grateful to you. Yeah. Thanks so much for the conversation.
It was a lot of fun. Thank you.
God, this is, it's so, I hope y'all think it's as interesting as I do, because I just think
sometimes I want to know, and sometimes I don't want to know, but I feel like as a leader,
as a parent, as a partner, as a person just trying to navigate
what is sometimes exciting and new and other times just feels like complete trash and bullshit,
I just want to tap out. I'm so grateful for people digging in to understand what's happening
underneath the hood. You can learn more about the episode along with all the show notes on
brenebrown.com. We'll link to a lot of William's really fascinating work
and articles about his work.
We'll have transcripts up in three to five days for everyone.
Also, we are sending shorter weekly newsletters
that recap our podcast and content for the week,
and you can sign up for those on the episode page.
You'll also get all of William's links
to where you can find him and what he's doing.
I appreciate you being here. I hope you're enjoying the series. There'll be comments
open on the podcast page. So if you've got questions about, hey, this is interesting,
I'd love to know more about this. Maybe we can point you in the right direction. Or
here's an interesting idea for this series on living beyond human scale.
We'd love to know what you think. All right, stay awkward, brave, and kind. Unlocking Us is produced by Brené Brown Education and Research Group. The music is by Carrie Rodriguez and Gina Chavez. Get new episodes as soon as they're published by following Unlocking Us on your favorite podcast app. We are part of the Vox Media Podcast Network.
Discover more award-winning shows at podcasts.voxmedia.com.