Radiolab - The Trust Engineers
Episode Date: February 24, 2023

First aired in 2015, this is an episode about social media, and how, when we talk online, things can quickly go south. But do they have to? In the earlier days of Facebook, we met with a group of social engineers who were convinced that tiny changes in wording can make the online world a kinder, gentler place. We just have to agree to be their lab rats. Because Facebook, or something like it, is where we share and like and gossip and gripe. And before we were as aware of its impact, Facebook had a laboratory of human behavior the likes of which we'd never seen. We got to peek into the work of Arturo Bejar and a team of researchers who were tweaking our online experience, to try to make the world a better place. And even now, just under a decade later, we're still left wondering if that's possible, or even a good idea.

EPISODE CREDITS
Reported by - Andrew Zolli
Original music and sound design contributed by - Mooninites

REFERENCES:
Articles
Andrew Zolli's blog post about Darwin's Stickers (https://zpr.io/ZpMeUnRmVMgP), which highlights another one of these Facebook experiments that didn't make it into the episode.

Books
Andrew Zolli's Resilience: Why Things Bounce Back (https://zpr.io/7fYQ9iDYAQBu)
Kate Crawford's Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (https://zpr.io/9rU5CGSit3W4)

Our newsletter comes out every Wednesday. It includes short essays, recommendations, and details about other ways to interact with the show. Sign up (https://radiolab.org/newsletter)!

Radiolab is supported by listeners like you. Support Radiolab by becoming a member of The Lab (https://members.radiolab.org/) today. Follow our show on Instagram, Twitter and Facebook @radiolab, and share your thoughts with us by emailing radiolab@wnyc.org.

Leadership support for Radiolab's science programming is provided by the Gordon and Betty Moore Foundation, Science Sandbox, a Simons Foundation Initiative, and the John Templeton Foundation. Foundational support for Radiolab was provided by the Alfred P. Sloan Foundation.
Transcript
Yeah, wait, wait, you're listening.
Okay.
All right.
Okay.
All right.
Ugh.
You're listening to Radio Lab.
Radio.
From WNYC.
Hey!
Yeah.
Ree-y.
All right, hey, I'm Jad Abumrad.
I'm Robert Krulwich.
This is Radio Lab, the podcast.
So, here's a story we've been following for a while.
Yes.
Comes from a friend of mine, Andrew Zolli,
who is a great thinker and writer.
He wrote a book called Resilience,
Why Things Bounce Back.
And he's a guy who thinks a lot about technology.
I have been interested, for a long time,
in the relationship
between technology and emotion.
And because, well, I've thrown more than one cell phone to the ground.
Andrew and I were having breakfast one day,
and he pitched me on this idea of doing a story about Facebook.
I remember.
I am not a huge believer in doing stories about Facebook,
but this story was wickedly interesting and profound in its way.
So he and I have been following it for a couple of years,
up and down through this roller coaster of events.
It really begins in 2011.
Well, let me back up for a minute.
One of the challenges talking about Facebook
is just the scale of the thing.
So, you know, there's 1.3 billion people on Facebook
as of March 2014.
Those are active monthly users.
There's a billion people who access the site
through mobile devices.
Just to put that in perspective,
there's more Facebook users than there are Catholics.
That can't be true. Yeah. Yeah.
It turns out it is true, but they're neck and neck.
Anyhow, the overall point is that when you have one out of every seven people on the planet,
in the same space, trying to connect across time and geography,
you are bound to create problems sometimes.
Facebook making headlines again tonight, the issue this time...
Before we go there,
we should introduce you to the guy
in our story who is the problem solver.
My name is Arturo Bejar
and I'm a director of engineering at Facebook.
Story begins Christmas 2011.
People are doing what they do every holiday season,
they're just getting back together with their families
and they're going to family parties and they're taking lots and lots of pictures.
And they're all uploading them to Facebook.
And at the time, the number of photos that were getting uploaded was going pretty crazy.
In fact, in just those few days between Christmas and New Year's, there were more images uploaded to Facebook than
the entirety of Flickr.
Wait. You're saying more images were uploaded in a week to Facebook than all of Flickr of all
time? Yeah. Which created a situation. The number of photos was going up, and along
with the number of photos going up, the number of reports was going up.
What it means by reports is this.
Back in 2011?
If you saw something on Facebook that really upset you, you could click a button to report
it.
You could tell Facebook to take it down, which from their perspective is a really important
mechanism because if you're Facebook, you don't want certain kinds of content on your site.
You don't want nudity, you don't want like drug use,
hate speech, things like that.
So a day or so after Christmas,
thereabouts,
Facebook engineers come back to work
and they find waiting for them,
literally millions of photo reports.
Yes, the number of people that would be necessary
to review everything that was coming in,
it kind of boggled the mind. How many people would you have needed? I think at the time we were looking at it,
which is two years ago, and again, all this has grown so much since then, we were looking at like thousands.
Thousands? Like some giant facility in Nevada filled with nothing but humans, looking at Christmas porn?
We were actually joking about this,
but we found out later,
there actually are thousands of people across the world
who do this for internet companies all day long,
which clearly warrants its own show.
But for our purposes, just know that when a photo
was reported, a human being has to look at it.
Exactly right,
because there needs to be a judgment on the image
and humans are the best at that.
So Arturo decided, before we do anything, let's just figure out what we're dealing with.
And so we sat down with a team of people and we started going through the photos that people were reporting.
And what they found was that about 97% of these million or so photo reports were drastically miscategorized.
They were seeing moms holding little babies reported for harassment.
Pictures of families in matching Christmas sweaters reported for nudity.
Pictures of puppies reported for hate speech.
Puppies reported as hate speech. Yes.
And we're like, what's going on? Right?
So they decide let's investigate.
Okay.
So step one for Facebook.
Just ask a few of these people.
Why don't you like this photo?
Why did you report this?
The responses come back,
and the first thing they realize is that almost always
the person complaining about the image was in the image
they were complaining about.
And they just hate the picture.
Like, maybe they were doing a goofy dance,
someone snapped a photo, and they're like,
why did you post that? Take it down.
Maybe they were at a party.
They got a little too drunk.
They hooked up with their ex.
Somebody took a picture, and that person says,
oh, you know, that's a one-time thing that's never happened.
And again, take it down.
Arturo said there were definitely a lot of reports
from people who used to be couples.
And then they broke up,
and then they're asking to take the photos down.
And the puppy? What would be the reason for that?
Oh, because it was maybe a shared puppy.
You know, maybe it's your ex-wife's puppy.
You see it, makes you sad.
Take it down.
So once we began investigating,
you find that there are all of these relationship things that happen
that are like really complicated.
You're talking about stuff that's the kind of natural detritus of human dramas.
And the only reason that the person reporting it flagged it as, like,
hate speech, is because that was one of the only options.
They were just picking because they needed to get to the next screen to submit the report.
So we added a step.
Arturo's team set it up so that when people were choosing that option,
I want this photo to be removed from Facebook,
some of them would see a little box on the screen that said,
how does this photo make you feel?
And the box gave several choices.
The options were embarrassing.
Upsetting.
Saddening.
Bad photo.
And then we always put in "other," where you could write in whatever you wanted
about the image. And it worked incredibly well. I mean, like 50% of people would select an emotion, like,
for instance, embarrassing, and then 34% of people would select other, and we read those.
We sit down and we're reading the "other" responses. And what was the most frequent thing that people were typing into other?
It was, it's embarrassing.
It's embarrassing.
But you had embarrassing on the list.
I know.
That's weird.
I know.
That's, it's a... So Arturo is like, okay.
Maybe we should just put "it's" in front of the choices?
As in, please describe this piece of content:
It's embarrassing.
It's a bad photo of me.
It makes me sad, et cetera.
And when they wrote out the choices that way,
with that extra word,
we went from 50% of people selecting an emotion
to 78% of people selecting an emotion.
In other words, the word "it's," all by itself,
boosted the response by 28 percentage points,
from 50 to 78.
And in Facebook land that means thousands
and thousands of people.
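A quick note on the arithmetic, since a jump like that can be read two ways: going from 50% to 78% is a gain of 28 percentage points, which works out to a 56% relative increase over the baseline. A tiny, purely illustrative sketch:

```python
# Purely illustrative: percentage points vs. relative (percent) increase,
# using the numbers quoted in the episode.
before, after = 0.50, 0.78
point_change = (after - before) * 100          # 28 percentage points
relative_change = (after - before) / before    # 0.56 -> a 56% relative increase
print(f"{point_change:.0f} percentage points, {relative_change:.0%} relative increase")
```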
Let me just slow down for a second,
I'm trying to think,
what could that be?
Is it that people like full sentences, or?
His thinking is, it's always good to mirror
the way people talk.
Arturo's idea though,
which I find kind of interesting, is that
when you just say embarrassing,
and there's no subject, it's silently implied
that you are embarrassing.
But if you say it's embarrassing,
well then that shifts the sort of emotional energy
to this thing.
Photograph.
And so then it's less hot and it's easier to deal with.
Oh, how interesting.
That thing is embarrassing.
I'm fine, it's embarrassing.
It is responsible, not me.
Good for Arturo, that's like the subtle part.
It's very subtle.
But it still doesn't solve their basic problem,
because even if Facebook now knows
why the person flagged the photo
that it was embarrassing and not actually hate speech,
they still can't take it down.
I mean, there's nothing in the policy,
the terms of service, that says
you can't put up embarrassing photos.
And in fact, if they took it down,
they'd be violating the rights of the person who posted it.
Like, there's nothing we can do, I'm sorry.
Oh, so they'd actually fence themselves in a little bit.
Yeah.
For me, I'd always put it in other.
I would just be like, go deal with it yourself.
That's what I would say.
Talk to the person.
No, honestly, that's the solution.
I, he wouldn't put it that way, but what he needed to have happen was for the person who
posted the picture and the person who was pissed about it.
To talk to each other.
To work it out themselves.
So Arturo and his team made a tweak where if you said this photo was embarrassing or
whatever, a new screen would pop up and it would ask:
Do you want your friend to take the photo down?
And if you said yes, I would like my stupid friend
to take the photo down.
We put up an empty message box, just an empty box that said,
we think it's a good idea for you to tell the person
who upset you that they upset you.
But only 20% of people would type something in
and send that message.
They just didn't do it. They just said,
I'd rather you deal with this.
So Arturo and his team were like,
okay, let's take it one step further.
When that message box popped up,
we gave people that default message that we crafted.
To start that conversation.
Just get the conversation going.
And it's kind of funny.
The first version of the message that we did was like,
Hey, I didn't like this photo, take it down.
Hey, I don't like that photo, that's a little aggressive.
It is, but when they started presenting people
with a message box with that sentence pre-written in,
it was almost immediate.
We went from 20% of people sending a message
to 50% of people sending a message.
Really?
It's been surprising to all of us.
We weren't expecting to see that big of a shift.
So this means that people just don't want to write.
They'll sign up for pretty much anything.
No, not necessarily.
Maybe it's just that it's so easy to shirk the responsibility of confronting another person
that you need every little stupid nudge you can get.
I see, okay.
That's how I see it.
So they put out this pre-written message.
It seems to really have an effect.
So they're like, okay, if that works so well,
why don't we try some different wordings?
Instead of,
Hey, I didn't like this photo, take it down.
Why don't we try, hey, Robert?
I didn't like this photo, take it down.
Just putting in your name, works about 7% better
than leaving it out, meaning what?
It means that you're 7% more likely either to get the person to do what you ask them to do,
Take down the photo or to start a conversation about how to resolve your feelings about it.
Oh, we're now measuring the effectiveness of the message.
So if I'm objecting, will the other party pull it off the computer?
Pull it off or just talk to you about it.
Okay.
They also tried variations like, hey, Robert, would you please take it down, throwing in the word please, or would you mind taking it down?
And the answer is that would you please performs 4% better than would you mind.
Not totally sure why.
But they tried dozens of phrases like, would you please, would you mind,
I'm sorry to bring this up, but would you please take it down, I'm sorry to bring this up, but would you mind taking it down.
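To make concrete how head-to-head wording tests like these are typically scored, here is a minimal sketch, not Facebook's actual tooling, of comparing two hypothetical message variants by response rate with a two-proportion z-test; the counts and rates below are invented for illustration only.

```python
# Hedged illustration: how one might compare two message wordings by response rate.
# This is NOT Facebook's internal system; all numbers are made up.
from math import sqrt

def two_proportion_z(successes_a: int, n_a: int, successes_b: int, n_b: int) -> float:
    """z statistic for the difference between two response rates (pooled standard error)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)      # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
    return (p_a - p_b) / se

# Hypothetical counts: each wording shown to 100,000 users.
n = 100_000
responded_please = 55_000   # "hey Robert, would you please take it down?"
responded_mind   = 51_000   # "hey Robert, would you mind taking it down?"
z = two_proportion_z(responded_please, n, responded_mind, n)
print(f"difference: {(responded_please - responded_mind) / n:.0%} points, z = {z:.1f}")
```

With samples that large, even a few-percentage-point gap between wordings is statistically detectable, which is part of why such small effects show up at all.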
And at a certain point, Andrew and I just wanted to see this whole process they're going through up close,
so we took a trip out to Facebook headquarters, Menlo Park, California. This is about a year ago,
so it's before the hubbub. We met up with Arturo, who sort of walked us through the campus.
Yeah, that's what the hammock is.
It's kind of a little like church hangout.
It's one of these sort of, like, socialist-utopic Silicon Valley campuses where people are
like in hammocks and there's volleyball happening.
We actually have a baby fox this year and they had foxes running around at one point.
So we were there on a Friday, because every Friday afternoon,
Arturo assembles this really big group.
Well, welcome to the meeting.
To review all the data,
you've got about 15 people crammed into a conference room,
like technical folks.
I'm a staff engineer
in trust engineering at Facebook.
Dan Ferrell, I'm a data scientist.
Paul, I'm also an engineer.
A lot of these guys called themselves Trust Engineers.
And every Friday the Trust Engineers are joined by a bunch of outside scientists.
Dacher Keltner, Professor of Psychology, UC Berkeley.
Matt Killingsworth, I study the causes and nature of human happiness.
Emiliana Simon-Thomas, and my background is neuroscience.
This is the meeting where the team was reviewing all the data about these phrases.
And so everybody was looking at a giant graph
projected on the wall.
It's kind of supporting your slightly u-shaped curve there
in that, especially in the deletion numbers,
the, hey, I don't like this photo, take it down,
and the, hey, I don't like this photo,
would you please take it down,
are kind of the winners here.
It's kind of interesting that you see the person
that's receiving a more direct message
is higher, 11% versus 4%.
One of the things they noticed is that anytime they use
the word sorry in a phrase, like hey Robert,
sorry to bring this up, would you please take it down?
Turns out, the sorry doesn't actually help.
It makes the numbers go down.
Really?
Seven and nine are some of the low points, and those are the ones that say sorry.
So, like, just don't apologize? Just don't apologize, because, like, it shifts the responsibility
back to you, I guess. No, it doesn't, it's just, it's like, it's
a subtle linguistic-psychology thing. You're making that up. I am, kind of. But one of the things
that really struck me at this meeting on a different
subject is that the scientists in the room as they were looking at the graph taken in
the numbers, a lot of them had this look on their face of like, holy.
I'm just stunned and humbled at the numbers that we generally get in these studies.
It's Emiliana Simon-Thomas from Berkeley.
My background is in neuroscience and I'm used to studies where we look at 20 people,
and that's sufficient to say something
general about how brains work.
Like, in general at Facebook,
like, people would scoff at sample sizes that small.
That's Boyle,
who's a project manager at Facebook.
The magnitudes that we're used to working with
are in the hundreds of thousands to millions.
It's kind of an interesting moment
because there's been a lot of criticism recently,
especially in social science, about the sample sizes, how they're too small and how they're too
often filled with white undergraduate college kids, and how can you generalize from that.
So you could tell that some of the scientists in the room, like for example, Dacher Keltner,
who's a psychologist at UC Berkeley, they're like, oh my god, look at what we can do now,
we can get all these different people.
Of different class backgrounds, different countries.
To him, this kind of work with Facebook,
this could be the future of social science right here.
There has never been a human community like this
in human history.
And somewhere in the middle of all the excitement
about the data and the speed at which they can now test things.
The bottleneck is no longer how fast we can test how things work. It's coming up with the right things to test.
Andrew threw out a question.
What is the statistical likelihood that I have been a guinea pig in one of your experiments?
I believe it's 100 percent, but...
That's Dan Ferrell, data scientist.
And when we look at the data any given person is probably currently involved in what, 10 different experiments,
and they've been exposed to 10 different experimental things.
Yep.
That kind of blew me back a little bit.
I was like, I've been a research subject, and I had no idea.
Coming up, everybody gets the idea, and the lab rats revolt.
Stay with us.
This is Radio Lab, and we'll pick up the story with Andrew Zolli and I sitting in a meeting at Facebook headquarters.
This was about a year and a half ago. We had just learned that at any given moment, any given Facebook user is part of 10 experiments at once, without really knowing it.
And sitting there in that meeting, you know, this was a while ago, we both were like, did we just hear that correctly?
That kind of blew me back a little bit.
I was like, I've been a research subject and I had no idea.
And I had that moment of discovery on a Friday and literally the next day, Saturday.
This is scary.
The world had that experience.
Facebook using you and me as lab rats
for a Facebook experiment on emotions.
Barely a day after we'd gotten off the plane
from Facebook headquarters, the kerfuffle occurred.
Facebook exposed for using us as lab rats.
Lab rats. Lab rats. Facebook,
messing with your emotions.
You may remember this story because for a hot second,
it was everywhere.
Facebook altered the amount of...
It was all over Facebook.
Story was, an academic paper had come out
that showed that with some scientists, the company had
intentionally manipulated user news feeds
to study a person's emotional response.
Seriously, they wanted to see how emotions spread
on social media.
They basically tinkered with the news feeds of about 700,000 people.
700,000 users, to test how they'd react if they saw more positive versus negative posts, and vice versa.
And they found an effect that when people saw more positive stuff in their news feeds,
they would post more positive things themselves,
and vice versa. It was a tiny effect.
Tiny effect, but the results weren't really the story.
The real story was that Facebook was messing with us.
It gives you pause and scares me when you think
that they were just doing an experiment
to manipulate how people were feeling
and how they then reacted on Facebook.
People went apoplectic.
It has this big brother element to it
that I think people are going to be very uncomfortable with.
And some people went so far as to argue, I wonder if Facebook killed anyone with their
emotional manipulations.
If a person had a psychological or psychiatric disorder, manipulating their social world could
cause them real harm.
Make sure you read those terms and conditions, my friends.
That's the big takeaway.
What you hear is a sense of betrayal.
That I really wasn't aware that this space of mine
was being treated in these ways, and that I was part of your
psychological experimentation.
That's Kate Crawford.
I'm a principal researcher at Microsoft Research.
Visiting Professor at MIT, strong critic of Facebook
throughout the kerfuffle.
There is a power imbalance at work. I think when we look at the way that
that experiment was done, it's an example of highly centralized power and highly
opaque power at work. And I don't want to be in a situation where we just have
to blindly trust that platforms are looking out for us. Here I'm thinking
of an earlier Facebook study, actually. Back in 2010, they did a study looking at
whether they could increase voter turnout.
They had this quite simple design, they came up with a little box that would pop up and
show you where your nearest voting booth was and then they said, oh, well, in addition
to that, when you voted, here's a button you can press that says, I voted and then you'll
also see the pictures of six of your friends who'd also voted that day. Would this change the number of people who went out to vote that day?
And Facebook found that it did, that if you saw a bunch of pictures of your friends
who had voted and you saw those pictures on election day, you were then 2% more likely to
click the I voted button yourself. Presumably, because you too had gone out and voted.
Now 2% might not sound like a lot, but it was not insignificant. Again, I think, on the
order of 340,000 votes is what they estimate they actually shifted by getting
people out to vote.
These are people who wouldn't have voted otherwise?
Who wouldn't have voted, and who, they have said in their own published paper,
increased the number of votes by 340,000. Simply by saying that your neighbors did it too?
Yeah, by your friends.
Now my first reaction to this, I must admit, was, okay, I mean, we're at historic lows when
it comes to voter turnout. This sounds like a good thing.
Yes, but what happens if someone's running a platform that a lot of people are on and they say,
hey, you know, I'm really interested in this candidate. This candidate is going to look out not just for my interest, but the interests
of the technology sector. And I think they're, you know, they're a great candidate. Why
don't we just show that get-out-the-vote message and that little system design that we
have to the people who, clearly, because we already have their political preferences,
the ones who kind of agree with us. And the people who disagree with that candidate,
they won't get those little nudges.
Now, that is a profound democratic power that you have.
Kate's basic position is that when it comes to social engineering,
which is what this is, companies and the people that use them
need to be really, really careful.
In fact, when Andrew mentioned to her that Arturo had this group
and the group had a name, he actually runs a group called the Trust Engineering Group.
His job is to engineer trust for Facebook users. That's his job. When Andrew told her that...
You're smacking your forehead. I think we call that a facepalm. She facepalmed really hard.
These ideas that we could somehow engineer compassion,
I think to some degree have a kind of hubris in them.
Who are we to decide whether we can make somebody more compassionate or not?
Sorry, I don't know how they want to say this.
Let's see, how do we do this?
Couple months after our first interview, we spoke to Arturo Bejar again.
At this point, the kerfuffle was dying down.
We asked him about all the uproar.
I know this is not your work.
This is the emotional contagion stuff.
But literally, like, hours after we got back from that meeting, that thing erupted.
Do you understand the backlash?
No, I mean, I think that, I mean, we really care about
the people who use Facebook.
I don't think that there's such a thing as,
as, I mean, if anything I've learned in this work is that you really have
to respect people's response and emotions, no matter what they are.
He says the whole thing definitely made them take stock.
There was a moment of concern of what it would mean to the work.
And there was like this, is this going to, is this gonna mean that we can't do this?
Part of me, like being on this coming here, is I actually want to reclaim back the word
emotion and reclaim back the ability to do very thoughtful and careful experiments.
So when you come back to the word experiment, where do you want to reclaim it from? From what? Well, certainly, like, the word emotion and the word experiment, all these things became really
charged.
Well, yeah, because people thought that Facebook was manipulating emotion and they were like
yes, but in our case, in our case, right?
And in the work that we're talking about right now, all of the work that we do begins with
a person asking us for help.
This was Arturo's most emphatic point.
He said it over and over that, you know,
Facebook isn't just doing this for fun.
People are asking for help. They need help.
Which points to one of the biggest challenges of living online,
which is that, you know, offline,
you know, when we try and engineer trust offline
or at least just read one another,
we do it in these super subtle ways
using eye contact and facial expressions
and posture and tone of voice,
all this nonverbal stuff.
And of course, when we go online,
we don't have access to any of that.
In the absence of that feedback,
how do we communicate?
What does communication turn into?
I mean, I think about what it means
to be in the presence of a friend or a loved one.
And how do I get to build experiences
that facilitate that when you cannot be physically together?
Arturo says that's really all he's up to.
He's just trying to nudge people a tiny bit so that their online selves are a little bit
closer to how they are offline.
And I got to say, if he can do that, by engineering a couple of phrases like, hey Robert, would
you mind, et cetera, et cetera, well then I'm all for it.
Why not take the position that a company that stands between two people who
are interacting, and then gives them boxes and statuses and little advertisements,
is not doing a service.
This is a way to wedge yourself into the ordinary business of social
intercourse and make money on it. You're acting like this group of people now is going to try to
create the moral equivalent of an actual conversation. First of all, it's probably not
engineerable, and second, I don't believe that for a moment. All I'm thinking is they're gonna just go and
figure out other ways in which to make a revenue enhancer.
No, I don't think it's one or the other.
I think they're in it for the money.
In fact, if they can figure this out and make the internet universe more conducive to trust, less annoying,
it could mean trillions of dollars.
So yeah, it's the money.
But still, that doesn't negate the fact that we have to build these systems, right?
That we have to make the internet a little bit better.
That's fine.
This idea, however, that you're going to have to coach people into the subtleties of the
relationship.
Tell them you're sorry.
Tell them just, you know, here's the formula for this.
He did something to you, you need to repair that.
Here are the seven ways you might repair that.
To do all that, it's, it's as if the Hallmark card company, instead of living only on Mother's
Day, Father's Day and, and birthdays, just spread its evil wings out into the whole
rest of your life.
And I don't think that's a wonderful thing.
I think, you know, I have a slightly different opinion of it.
I mean, you have to keep in mind how this thing came about.
I mean, they tried to get people to talk to each other.
They gave them the blank text box, but nobody used it.
Right? So they're like, okay, let's come up with some stock phrases that yes are generic.
But think about the next step.
After you send the message saying,
you know, Jad, I don't like the photo, please take it down.
Presumably then you and I get into a conversation.
Maybe I explain myself, I say,
oh my God, I'm so sorry.
I didn't realize that you didn't like that photo.
I just thought that was an amazing night.
I didn't realize you thought you looked bad. So sorry,
I'll take it down.
It's cool. Now presumably we're having that
conversation as a next step. Why do you presume that? How many of the birthday cards that you've sent to first cousins have resulted in a conversation?
Maybe not. I mean, that's the thing. That means these things are actually not, they're really the opposite of what you're saying. They're conversation
substitutes.
Maybe. Maybe they're conversation starters.
Maybe that's the deep experiment.
Are they conversation starters or substitutes?
Well, I hope they're conversation starters.
Yeah.
Because maybe that would be a beginning.
It kind of in my mind goes back to
like the beginning of the automobile age.
This is how Andrew puts it.
There was a time when automobiles were new and you know they didn't have turn signals.
The tools they did have, like the horn, didn't necessarily indicate all the things that we
use it to indicate.
It wasn't clear what the horn was actually there to do.
Was it there to say hello or is it there to say get out of the way.
And over time we created norms.
We created roads with lanes.
We created turn signals that are primarily there
for other people so that we can coexist in this great flow
without crashing into each other.
And we still have road rage.
And we still have road rage.
We still have places where those tools are incomplete.
Thanks to Andrew Zolli, many, many, many thanks. Yes, definitely. For bringing us
that story and for reporting it with me for so long. And Arturo, who you kept bringing back into the studio. Yes. Thank you very much to
Arturo and the whole team over there. And by the way, they have changed their name. It's no longer
trust engineering. It is the Facebook protect and care team. Really? Yeah. We had some original music
this hour from Mooninites. Thanks to them. Props to Andyals for production support, and also Andrew Zolli put together
a blog post.
If you go to radiolab.org, you can see it. It covers some really interesting
research that we didn't get a chance to talk about, and if you've ever sent an email
with a little smiley face, you're definitely going to want to read this.
Radiolab.org. I'm Jad Abumrad.
I'm Robert Krulwich.
Thanks for listening.
Radiolab was created by Jad Abumrad and is edited by Soren Wheeler.
Lulu Miller and Latif Nasser are our co-hosts.
Dylan Keefe is our Director of Sound Design.
Our staff includes Simon Adler, Jeremy Bloom, Becca Bressler, Rachael Cusick, Ekedi Fausther-Keeys, W. Harry Fortuna, David Gebel, Maria Paz Gutiérrez, Sindhu Gnanasambandan, Matt Kielty, Annie McEwen, Alex Neason, Sarah Qari, Anna Rascouët-Paz, Sarah Sandbach, Arianne Wack, Pat Walters, and Molly Webster, with help from Andrew Viñales.
Our fact checkers are Diane Kelly, Emily Krieger, and Natalie Middleton.
Hi, this is Ellie from Cleveland, Ohio.
Leadership Support for Radio Lab Science Programming is provided by the Gordon and Betty
Moore Foundation, Science Sandbox, a Simons Foundation Initiative, and the John Templeton
Foundation.
Foundational Support for Radio Lab was provided by the Alfred P. Sloan Foundation.