Your Undivided Attention - Trust Falls — with Rachel Botsman
Episode Date: January 14, 2020. We are in the middle of a global trust crisis. Neighbors are strangers and local news sources are becoming scarcer; institutions that used to symbolize prestige, honor and a sense of societal security... are ridiculed for being antiquated and out of touch. To fill the void, we turn to sharing economy companies and social media, which come up short, or worse. Our guest on this episode, academic and business advisor Rachel Botsman, guides us through how we got here, and how to recover. Botsman is the Trust Fellow at Oxford University, and the author of two books, including “Who Can You Trust?” The intangibility of trust makes it difficult to pin down, she explains, and she speaks directly to technology leaders about fostering communities and creating products the public is willing to put faith in. “The efficiency of technology is the enemy of trust,” she says.
Transcript
Imagine a world where trust has collapsed.
You can no longer trust the government or the media.
Every fact you learned in school is now a fiction.
What's happening to you now is your only reality.
The question remains, can you restore trust?
It's a question our guest on today's show has been researching for years.
So how many chapters of trust have we had in human history?
And is what we're seeing unique, or have we been here before?
That's Rachel Botsman.
And she divides the history of trust into three chapters.
In the first chapter, trust was a local affair.
When we lived in villages and small communities, in sort of physical proximity,
trust was largely based on family and friends and close relationships.
Local trust existed for a long period of time.
And that arrangement worked until suddenly people found themselves surrounded by strangers.
When we went through urbanization and mass migration and the industrial revolution,
we realized that we needed new mechanisms.
And now we've reached the second chapter, institutionalized trust.
It was genius, really, that we started to figure out that trust didn't have to flow directly between people,
that trust could flow through intermediaries.
So whether that be large institutional systems or even the invention of brands,
all these kinds of mechanisms from contracts to insurance,
that really allowed trust to scale and human interaction to change in a way that we'd never seen.
But technology has once again reshaped our interactions with strangers,
and the mechanisms of trust are breaking down.
The next chapter of history, Rachel says, is frighteningly devoid of structure.
Now, it should be clear, I still very much believe local and institutional trust are important
and that it's not like, you know, one chapter closes and another one opens,
but what technology inherently wants to do is to take this trust that was hierarchical
and distribute it back to people and often strangers.
It becomes harder and harder to know whom to trust.
And when you don't know what or whom to trust, that creates a vacuum,
a vacuum for bad actors and misinformation and people that actually know how to manipulate that vacuum.
And that's the polarization and sheer chaos that I think we're seeing all around the world today.
Every day that passes is a day that we lose trust in some of these systems.
We're losing trust in our leaders.
We're losing trust in our discourse.
We're losing trust in the democratic process.
And the risk isn't just that we hurtle back into an era of local trust.
It's worse.
With the onslaught of new methods of deception and bots and deep fake technologies,
we may give up altogether.
We may get trust apathy.
So the question remains, how do you reboot trust from that state?
This image of trust being in a state of decline, trust in a state of crisis, isn't
accurate. What is actually happening is that we are giving our trust away too
easily.
Rachel wrote the book, Who Can You Trust?
And has a podcast called Trust Issues.
And she and I had a conversation about how we navigate this next chapter of trust together.
I'm Tristan Harris.
And I'm Aza Raskin.
And this is your undivided attention.
Defining trust is probably one of the hardest questions.
And I've given a lot of thought as to why it is so hard to define.
One of the issues is that the language we have around trust is often the language of money.
And this really informs the way people then think about trust.
So they talk about building trust, trust as a currency, as capital, and trust really isn't a physical thing.
You may have manifestations of it, like a handshake or a contract, but it's more like happiness or love, in that it's a belief at the end of the day.
It's a human belief.
The way I define trust is deceptively simple.
Trust is a confident relationship with the unknown.
And the key words there, obviously, are confidence and unknown.
Often people think of trust as knowing the outcome, of knowing what to expect of someone.
But actually, if you know what the outcome is, very little trust is required.
The very essence of trust is actually about not knowing and coping with uncertainty.
And that's why trust is incredibly multifaceted, incredibly contextual, but also very
fragile. So situate us for a moment on, first of all, your personal story, how did you get
interested in trust? I was actually working at the Clinton Foundation, so 14 years ago, and
starting to see the way technology was changing the way information could flow, the way it would
change the flow of value and potentially trust. Now, that seems really obvious now, but
this was still the very early days of things like eBay and Amazon and Netflix.
And so I just started immersing myself in it and couldn't find answers as to
how did these marketplaces work. And so that led to my first body of work around what people
then called the sharing economy, which was essentially around how technology could enable us
to trust strangers. And what I find really interesting is sort of the first five, six years,
the conversation, the feeling around this was completely different. You know, it was all about
empowerment and democracy and decentralization and putting control back into the hands of people
and micro-entrepreneurship. That's why I think language is really interesting because language
often reveals the sort of sentiment towards things. And then 2012, 2013, I started to feel
things shifting and really started to become fascinated with this question as to how could people
say they don't trust institutions, banks, the media, government, yet get into cars with strangers,
and that maybe I didn't understand what was going on at a more fundamental level to the way
trust works in our lives, the way trust flows in society, and what is really happening and about
to happen. And that led to sort of this next body of work around how trust was shifting from
institutions, so a trust that historically had always flowed upwards to experts and CEOs and
referees and regulators and how messy that was becoming through platforms and networks and
marketplaces and that many things that we were seeing in all different areas of our lives
were actually a consequence of this really big trust shift. So why don't we start first with the
excitement phase, how exciting it was that trust could create that relationship of certainty at a distance.
How did we get from that to losing faith in the system?
Yeah, I mean, I think it's like we've almost forgotten that phase happened.
Every entrepreneur had this vision that we could essentially change human relationships
through technology that we could bring each other closer together, not drive each other
further apart.
And I think that was coming from the fact that these systems work surprisingly well.
Yes, of course, there are issues with eBay sellers,
or as an Airbnb host.
But the fact that you could transact with a stranger
and stay in a stranger's home,
and that even buying drugs off the internet,
these systems worked,
there was a system of accountability,
and that suddenly many, many people
that had been sort of marginalized by big systems
could suddenly take back control.
And I think that was very much tied
to an overall faith in the good of humanity.
That's what I remember the most about
that period was just, wow, people are that good to each other from a distance, meaning I can
buy that book from that person on eBay I've never met. And they could totally scam me.
And yeah, maybe they'd take a reputational hit, but they could start a new eBay account
or whatever. And for the most part, people were really good to each other.
Yeah, I mean, I cannot remember a negative question in an interview for the first, literally,
five years. And when I wanted to start talking about unintended consequences if these things got
too big, they didn't want to hear it at events. All it was was positive enthusiasm. And I remember
the first article I read around rent inflation in San Francisco and then reading more and more
about landlords commercialising supply on Airbnb. And these things started to point to issues of
scale. And then the tide turned and all of a sudden, I think people looked at it and said,
oh, it might be called the sharing economy, or it might be called social media, or it might be called
crowd financing, but is it any different? People started stripping back the experience and the
apps and the design and sort of the brands, and started going back to what were the intentions of
these companies and what were the unintended consequences as they scaled. So I think that was all
going on. And then the second thing that started to really rise up very, very quickly
was a realization that if you remove this sort of hierarchical way of looking at trust and
filtering information or protecting worker rights or whatever it may be, what sits underneath
these things? As much as human beings like the idea of decentralization and democratization
and self-empowerment, when things go wrong, we desperately still need a center. We still need that
leadership. That's where I think people started to get really scared that there was no safety net.
There was no backstop. And this was the early days of things going wrong. It still, you know,
hadn't affected an election. There still wasn't really talk of addiction. And that still was
to come. But I think people realized there was a void in keeping them safe, that if you remove the trust
and faith that we had in institutions and you completely put it in networks, who is responsible when
things go wrong? There's no one to call if things start to break down. There's no centralized
police force. There's no centralized customer service. I think we have to walk through the
transition from local trust to institutional trust to distributed trust. How gameable was local
trust? I mean, if I asked a friend in my village, where are the berries or where are the fish,
you know, if someone really lied to you and they were in your tribe, you know, there's consequences.
So that's that local situation. And institutional, if money is a store of value, institutions or
brands are a store of trust. They're able to accumulate long-term reputational authority on these
kinds of topics. Yeah, I think, though, in the transition from local to institutional, the change,
the flip, was very clear to people. It sort of moved from, you know, I am responsible for making
this decision about whether this other person is trustworthy. So I understand that responsibility
because it's really clear and it's really linear and it's really, really direct and tangible. You know,
like if you give me your metal pot and I trade it for a chicken, that's my responsibility
if that's a stupid trade.
And we understood that transition from, right, oh, now I'm abdicating that responsibility and that
decision making to the institutions, right?
So the food authority or the CDC or whatever the institution is will sort of put their stamp
of authority and tell me whom I can trust.
And so I think as human beings, we really understood the transition of those phases.
But it's not clear today how that works, because we have this mix of institutions and these
new systems. When I first started looking at this, the hope and the emphasis was how do you
create trust between the two people, that the platform was just the facilitator or the
enabler. And I think in many ways, people understood that relationship, right? So it's between
the driver and the passenger or the buyer and the seller or the lender and the borrower, the host and the
guest. And the platform's role is just to facilitate those two things. And I get that because it's
clean. And suddenly it's changed, right, where we're saying, whoa, whoa, no, like, I don't want
that responsibility. Like, that can't just exist with me. And so where does trust lie? That question
has shifted, even in the last three to four years, that the design of these systems is no longer
trying to just push it down between sort of the two actors or multiple actors on the platform. It's actually
starting to say, where does this really exist? Where should it lie? Imagine if we actually successfully
transitioned Uber from being a centralized company worth billions of dollars with an office in San Francisco
into a decentralized thing, where there actually is no corporation sitting in a real building
with real CEOs. It's just code. And the code does the same thing that Uber does now. If there's now
some kind of problem or a driver starts misbehaving, who do you call if that trust starts to break down?
What happens if, like, 50% of drivers just decided to start conspiring to say,
what if we all start driving people to the wrong places all at once?
Suddenly, there'd be no one to call.
There's no human being you can get who runs this new decentralized Uber.
I think that's kind of the essence of some of the things we're talking about.
No, it's so true.
And I can't emphasize enough how much I think these issues are tied to scale and complexity.
And the reality is that we are quite... lazy is not the right word,
but you almost don't want to overthink trust decisions.
Like you couldn't ever really leave the house
if you were always thinking about everything and everyone that you needed to trust.
You really couldn't function.
There's this kind of interesting relationship that you're making me think of
between trust and the use of conscious energy.
Because if we have to research and understand, you know,
who everyone is and where everything came from,
or exactly which journalist, you know, wrote this
and whether they have the credentials or whether they're biased,
the world getting more and more complex
means that we have to rely more and more on trustworthy systems.
And this is what's so damaging if the basis of those trustworthy signals starts to break down,
if we question the CDC, if we question whether Facebook is giving us that news,
our conscious energy is the last thing we have to devote to making choices.
And so the more we have to apply it to investigating the trustworthiness of everything,
the more exhausting the world becomes.
I think you're absolutely right.
The newer emerging systems of trust are pointing their fingers at
sort of the older forms of institutions
and saying, you know, you're outdated
and regulation doesn't work and blah, blah, blah.
And then these older institutions
are also pointing back, saying you can't trust them.
And so suddenly it's thrown back on the individual.
And I think fundamentally what we're searching for
is a feeling of being back in control.
That's what we feel like we've lost,
whether it be control of our information systems,
whether it be control of our energy
and how we spend our time.
But then I think it leaves
these key questions that we see coming up now, the question that's been asked
over and over again, like would you trust Facebook to be the arbiter of truth?
Do you trust that Uber can fix regulations and protections around workers?
The question that we're fundamentally asking is the same.
Can the platform play the same role as the old institution?
I don't think we know who we want to be the facilitator, the mediator,
the arbiter; like, it's a changing nature of roles and characteristics and identity.
So the mess is to be expected, to be honest.
One of the problems I have with the way the tech industry is currently handling a lot of these problems
is that it's very defense-oriented. It's very whack-a-mole oriented. You know, Facebook took down
2.2 billion fake accounts that its AI caught. These are 2.2 billion new false accounts.
Isn't that as many people as actually are on Facebook?
Yeah, there's 2.7 billion people on Facebook.
There's 2.2 billion fake accounts that they took down in one quarter, in one three-month period.
Wow.
So if I flip a coin, it's like real, fake?
Well, and the line is I'm sure they caught all of them.
Oh, yeah, they definitely caught all of them.
So what it really points to is something that's missing infrastructurally from the Internet,
which is authenticated trust and identification.
Why do we have a driver's license in a society?
Why do you have a gun license?
Why do you have a passport?
Because you have to be able to say, you know, certified by some authority, I am who I say I am.
But what happens when in the Internet, you're not logging in with your passport?
Anybody can basically be anyone else on the Internet.
So when I can pretend to be anyone commenting on that Reddit thread or that New York Times article
and create a flame war of outrage leading up into the 2020 elections, if not around the world everywhere that's going on already,
this is so serious that you realize you can't actually deal with this channel.
And we're best off shutting down certain channels, and certainly advising people that they
can't count on the normal shortcuts that they use to discern what might be trustworthy.
An interesting place to go from there is the way that China is combating deepfakes.
What's interesting about the China model is moving from a stance of defense to deterrence.
So they have instituted a law that says you can share a deepfake.
But if you share it without labeling that it is fake, then
you actually get thrown in jail.
No.
It's not throwing everybody in jail
for whatever speech they do.
It's saying if you are not labeling something.
So there's a really interesting idea here
that Facebook or Twitter could implement.
Obviously, they're not going to throw people in jail
if you post something and don't label it.
But their version of jail is like
you have violated our platform guidelines.
We're going to kick you off for 48 hours.
We just basically disable your account for 48 hours.
We could actually implement a secondary sanction.
So that if you are caught retweeting
or re-sharing a post in which a deep fake is not labeled,
you are also deplatformed; you're taken off.
And imagine they do this for mainstream media,
for, say, a CNN or a Fox News or a BBC,
where if you publish on mainstream television a deep fake video
without labeling it as such,
then they'll actually disable your account on Facebook for 48 hours.
That creates a real disincentive,
and it makes all players much more careful.
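To make that deterrence mechanism concrete, here is a minimal sketch of how such a rule might look in moderation code. Everything in it is hypothetical: the synthetic-media flag, the 48-hour suspension, and the secondary sanction for re-sharers are illustrations of the idea discussed above, not any real platform's API.

```python
# Minimal sketch of a deterrence-style moderation rule for unlabeled
# deepfakes. All names and thresholds are hypothetical illustrations.
from dataclasses import dataclass, field

SUSPENSION_HOURS = 48  # the "attention jail" term discussed above

@dataclass
class Post:
    author: str
    is_synthetic: bool       # e.g. flagged by a detector or a signed manifest
    labeled_synthetic: bool  # did the poster disclose it as fake?
    resharers: list = field(default_factory=list)

def apply_sanctions(post: Post) -> dict:
    """Sharing a deepfake is allowed; sharing it *unlabeled* is not.
    The secondary sanction also suspends accounts that reshared it."""
    suspended = {}
    if post.is_synthetic and not post.labeled_synthetic:
        suspended[post.author] = SUSPENSION_HOURS
        for account in post.resharers:  # secondary sanction
            suspended[account] = SUSPENSION_HOURS
    return suspended
```

The design point is that the rule punishes the missing label, not the synthetic content itself, which is what distinguishes deterrence from blanket takedowns.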
It's really interesting because Facebook,
they are essentially a government, but they don't have the monopoly on violence.
They have a monopoly on attention. So they can't put you in
physical jail. They can put you in attention jail. Right. And now this gets uncomfortable because
who are they to put those laws in place? I don't like that proposition either. However, we know
that no laws is a million times worse than some basic policies. And they obviously have community
guidelines and policies now. We're just asking them to strengthen that list. So you know,
there's another really interesting thing here. And this is taken from Mongolia, where when they're
passing amendments to their constitution, the way they do it is they take a representative
sample of citizens, a thousand or something like that. They bring them to the capital and they go
through a multi-day deliberative process to decide. It's super cool. This is deliberative polling.
Deliberative polling, a deliberative process. Facebook could be doing deliberative polling all around the
world to figure out what are the appropriate norms for that country. So it's not who are they to
decide, but they are the platform by which each group gets to decide. And that looks a lot more like
the way nature works, where it's a decentralized governance process. The question is, who are we to decide?
Well, we are the 'we' to decide, right? That's our job as a liberal democracy to make decisions about
our values and then enforce them using behavioral techniques.
Where is wisdom right now? Who has the wise
view about how to even arbitrate these questions?
Yeah, I don't know if I look for wisdom.
You've sat in those meetings with regulators and entrepreneurs and VCs and the big tech
companies, and it's become too much: you're wrong, I'm right, you don't understand, this
thing is new, it's too complex, you're a dinosaur, you're from the old world. We've intentionally
or unintentionally created a polarization between who knows and who doesn't know and who's right
and who's wrong.
And I think what is generally missing
from so many of those conversations is empathy
that everyone's afraid in some way.
Everyone's scared that they've done something wrong
or they've got to give up power and control
or whatever it is or they're irrelevant even.
You know, when I was watching the Facebook hearings,
it was funny, but it was also deeply sad
because it was showmanship, right?
It was like, I'm going to humiliate you.
I'm going to bring you down.
I'm going to score points.
And that's where I think so many conversations, well-intentioned conversations
that are trying to find a way through,
that's where they end up.
I've felt it on panels.
I won't do panels anymore on this topic because the moderator wants you to sit on a side.
They want you to, right?
They're like, pick a side.
Are you for or against?
I need to know.
I need to weight my panel.
And you're like, well, I can't answer that question.
What's your diagnosis of why that's true?
I think it has a lot to do with identity.
I think people want to know what side you're on, whatever the issue is.
That's why I've become a lot more conscious in my work of any label that is a binary or a polarising label,
like remain, leave, anti-pro, for and against, right-left.
I learnt this the hard way because I made this series on anti-vaxxers.
And one of the things I wanted to be very careful of not doing was pitting the expert against the anti-vaxxer.
What I realized from speaking with anti-vaxxers and really trying to understand where their views
come from is that they care about the same thing that I do.
children. And I know it sounds such an obvious point, but we lose sight of this. I think often
in these conversations, we care about the same thing, but our views on how we get there are very
different. And it's really, it's really hard to do, you know, like I'm very pro-vaccinations. I, you know,
I had measles when I was a child and lost my eyesight for a while.
It took every bone in my body to not get angry and defensive, and even to sort of shut these
people down, and, you know, what was going on in my head was like, just stupid, right?
But they weren't stupid.
They were incredibly informed, and at certain points in the conversation I was actually like,
my God, maybe I have got it wrong, because I didn't know that about the CDC and that relationship
to that pharma company. And so I think when we sort of open ourselves up to really trying to understand
the belief system and what someone else cares about, it's not the solution, but it's a way to find
more common ground. Well, and per the attention economy, it's never been easier to lose the
context behind someone else's statements. Technology creates the ability to connect with someone
across the world, but you don't know that person's world because you're just seeing 140 characters
of text with them. And so it goes back to your point about if trust is our relationship with the
unknown and trust is scaled by technology, it's not doing a good job of pulling in the full
contextual space that that other view might be living inside of. And then there's this co-evolutionary
force of increasing polarization, increasing identity, which means that it's easier than ever
to project the least charitable view of anything you see onto a person in front of you. Yeah. I'm
thinking a lot about beliefs, this idea that you and I and others can't have any kind of shared
sense of reality because we don't know what is true or false, or what is fact or fiction,
that the stage on from that, the term being used, which is brilliant, is this idea of reality
apathy, that we reach a stage where we don't care, going from a world where we're both
seeing the same things, to not knowing whether what we're looking at is true or false, fake or real,
to not really caring.
Yeah, I mean, I think the issue of caring is really important.
I was just talking with someone on the phone last night who does work on elections around the world.
It's talking with her about how at the end of the day, if you don't know what to trust,
you just go back to trusting the people around you, right?
Like, imagine a world where you don't know if anything you see on social media is true.
Like, it could all just be false.
So I don't know what to trust.
I'm tired of it.
I don't really have time.
I've got to feed my kids.
You know, what are we going to do?
I'll just trust the people around me because that's just a lot easier.
You go back to local trust.
You revert back. Yeah. We contract. When people stop trusting what they see and hear
outwards, they contract and they look inwards. You know, we were talking about trust surveys,
which I take with a pinch of salt, because I think they miss how contextual and subjective trust is.
But I found it really amazing that the key theme that was emerging was that the most important trust
relationship in people's lives is starting to become the employer and the employee. And I actually
found that really frightening, that people are starting to turn to the people they are employed
by for information on all these things that we used to get from a variety of sources.
And I think that's exactly what you're talking about.
But then you're just as good as sort of a Dark Ages hearsay world, right?
And we lose science.
We lose, you know, any kind of gated institutional structure where we were trying to progress.
I think next year we'll see the first case where a piece of evidence in
a really, really high-profile case is classified as a deepfake.
It's only a matter of time.
I shouldn't even put this out here, but what some defense attorneys
have realized is that this is an unbelievable strategy,
that they can throw all photographic and video evidence into question.
You're making me think about trust is important everywhere in society,
but there are certain foundational places that if you lose it, you kind of lose everything.
To me, deepfakes represent something like a civilizational epoch, what my co-founder at the Center for Humane Technology calls the vanishing point of human authority, where our minds are no longer an instrument to discern reality, because we've actually entered into a phase where the technology we've created is sophisticated enough not to overwhelm our strengths, but to undermine the basis of our weaknesses for discerning what's true.
And that's like a crossover point: once you cross
it, you don't uncross it.
So then you're not just losing faith in the military, the police, the judges, the Supreme
Court.
You're starting to lose faith in the mechanisms, in what constitutes evidence.
And that deeply concerns me because where do we go from there?
When we can't even trust proof, and even if something isn't a deepfake, even calling
it into question benefits the defendant.
I don't think we really have an alternative for the legal system.
This is going to go much deeper than what happens in courts.
This is going to get very personal.
Imagine you get a text message from somebody you don't know, but it's an image of you and them.
And they're like, hey, I met you at this conference.
I don't know if you remember me, but I was going through my phone, found this photo of us.
And I just wanted to reach out.
And you're like, I don't really remember you, but they send some more photos,
and you start chatting back and forth.
And this is actually a spear phish against your own memory.
If I want to generate images of people that you can't help but feel familiar with,
it's really easy.
I just take your top 10 Facebook friends.
And I generate a new deep fake face, which is sort of the average of their features.
And this hacks the last, you know, 10 years of friendship that you've had with these people.
And, you know, I add in a couple people that you liked on Instagram,
and now they're cute and familiar.
And we don't really have any defenses against this, except to just doubt everything.
There are also things that companies like Apple could do.
And that is, when you take a picture, they could sign the picture with the depth data that comes out of it, attesting that this was taken on a real phone.
Why with depth data?
Because that way you can't just take a picture of a picture.
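As a rough sketch of the idea, a capture pipeline could hash the image and its depth map together and sign the pair with a device-held key, so the signature only verifies for that exact image-plus-depth pair. The key handling here is illustrative (a freshly generated Ed25519 key stands in for whatever hardware-backed key a real phone would use):

```python
# Minimal sketch of capture-time attestation. The device key is a stand-in
# for a hardware-backed key; a real scheme would also need key distribution,
# certificates, and metadata such as capture time.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # hypothetical per-device key

def sign_capture(image_bytes: bytes, depth_map: bytes) -> dict:
    """Bind the photo to its depth data and sign the pair, so a flat
    re-photograph (which has no matching depth map) can't reuse the claim."""
    claim = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "depth_sha256": hashlib.sha256(depth_map).hexdigest(),
    }
    message = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = device_key.sign(message).hex()
    return claim
```

Anyone holding the device's public key could then verify that the image and depth data haven't been swapped or altered since capture.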
And, you know, Apple or Samsung or Google could be in a race to the top to become the trust company.
Right. And we've talked about this, that instead of focusing on privacy, the ultimate value that people should be competing for is trust. And this applies on multiple levels. Trust for, are you going to privilege the truth or accurate sources of information? Trust for, are you on my side, or is your business model about exploiting me and treating me as a resource? So trust is going to become the ultimate currency of the future. And Apple and Google and Samsung are best positioned to establish and compete for who can better earn our trust.
trust. The thing that freaks me out most is not actually the deep faked images. That's really
freaky. It's the deep fake text. It's like perfect Photoshop but for text. This can be
happening all the time. You'll have no idea. So let's imagine, like, you know, to break this down for
people, I could go onto Reddit. I could go into some channel that's super controversial,
like the abortion channel or the gun control channel, right? And I can look for the post that got
the most extreme flame wars, et cetera. And what I could do is then build a kind of classifier
that learns the language patterns
that tended to be
the most outrage-producing things
and then I can generate
a whole bunch of other text comments
in other topics
that sound indistinguishable from truth.
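The classifier step described here is ordinary supervised text classification. Below is a minimal sketch, assuming a hypothetical corpus of comments labeled by whether the thread descended into a flame war; the generation step would then use such a model to rank candidate outputs. The dataset and variable names are invented for the example:

```python
# Minimal sketch of the "outrage classifier" step described above.
# The corpus here is a placeholder; any labeled set of comments would do.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = ["example heated comment", "example calm comment"]  # placeholder corpus
provoked_flame_war = [1, 0]  # 1 = the thread descended into a flame war

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, provoked_flame_war)

# Score new candidate comments by how inflammatory their language looks.
candidates = ["some generated candidate comment"]
outrage_scores = model.predict_proba(candidates)[:, 1]
```

The same machinery, of course, is what a platform could run defensively to flag outrage-bait before it spreads.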
Here's another example of this
which I think will strike home
for most people, medical stuff.
Like when something's wrong with me,
I go online and I search for it,
it's now trivial to train
one of these text generators
on top of WebMD
and just generate an entire siteful
of reasonable-sounding, academically plausible-seeming text.
It'll be in the language of the Mayo Clinic.
It'll look indistinguishable as text from the text you'd find in the Mayo Clinic.
That's right.
And you're going to have no ability to discern.
Is this true or is this false?
And the reason why I like this example is, one, it's terrifying.
And two, you can see that any area where it is not your core expertise is going to be really easy to fool you.
Right.
How do you reboot trust?
The native thing for our psyche to do is to look to the local people around us. People will
increasingly trust local sources of information. So local newspapers, because they have local
reporters and so on. So knowing this, certain state actors are creating fake local newspapers,
especially in the U.S. swing states leading up to the election. This happened in 2016.
But what happens when I can make a super credible-looking fake news website? And now China
or North Korea or Saudi Arabia are competing to lure Americans into different persuasive-looking
local news websites. And we can't rely on the normal shortcuts. So until we point the mirror
back at ourselves and see that that actually is the way that we derive trust, we'll keep using these
little shortcuts. Like, I learned this in the Persuasive Technology Lab: if the website says that it was
updated 10 seconds ago or yesterday, the recency of its latest update makes it look far more
credible. It's a super simple persuasive credibility signal to hack in terms of trust. Or as every
web designer knows, if you want to make the web page feel trustable, have an about page that has a
smiling face of the people behind the website, even if those people don't exist. Exactly. Right.
There's nothing that we can do to stop this until people actually ask more first-principles
questions about how do I know what I can trust, and privilege the sources of information that have
been around for longer periods of time and have track records.
Historically, what are technologies that, realized at their full potential, became a threat,
and the technology was changed because we knew the human consequence if this technology
actually ran at its full potential?
When have we done that?
When have we actually pulled something back?
Because we've realized the human consequence, what it is amplifying in us as humans, that if it goes
past this point, it's a disaster.
Well, I think nuclear weapons represent humans coming face to face with godlike power to destroy ourselves.
I think scaling it down to a much easier-to-deal-with issue: the chemical industry.
You know, it used to be in the 1950s that there were no regulations on the chemical industry,
and we just assumed things were safe until proven dangerous.
And you could just dump mercury in the water.
And I think that's kind of where we are now,
that we kind of assumed all the technology was safe until proven dangerous.
If you take all these different features of how technology has impacted us,
information overload, shortening attention spans, polarization, conspiracy theories, election engineering,
they actually all have to do with hijacking a human weakness,
basically running over the mental environment and not caring.
Which is why I think it sounds deceptively simple,
but some of the solutions actually lie in friction.
I've spoken in the past about this idea of trust pauses,
whether that be around a person, a product, a piece of information, someone we're voting for,
whatever it is. Trust actually likes the friction. The efficiency of technology is kind of the
enemy of trust. And so to stop the hijacking of human weaknesses, are there ways that we can
design these trust pauses into systems? So we simply ask ourselves, are we sure?
Yeah, what are some examples of trust pauses that could be implemented that you've been thinking about?
So it's funny, once you sort of give it a term, right, you start to see very small examples of this idea. So I was signing my kids up for a national savings account here, and I did the usual thing of, like, just going through 38 pages of terms and conditions in two seconds, and this pop-up box, but not in an annoying way, like in this really bright, obstructive but warm and sort of even funny way, just came up
and said, are you sure? Because, you know, there's no way you've read this information.
And therefore it becomes a conscious choice, right? I'm not going to sit there and read
the pages and pages of terms and conditions, but I'm aware of what I'm handing over at that
point in time. I think it's slightly different, but the little bump you get in Instagram that
you're all caught up, that's a pause. That's something that makes you more mindful, right? So
do I really want to repeat this scrolling behavior? Or is this a
complete waste of time? I think it's interesting, Monzo. Monzo is a very hot fintech startup in the
UK, and their sign-up process was actually too quick. You know, you compare how
long it takes to, say, open a traditional bank account, in days, even weeks, and you could do it
on their platform in minutes. So they actually had to introduce a spinning circle. I thought that
was really interesting, right? So these aren't complete solutions, but they are ways that make you very
conscious of the way the system is working and the choices that you're making and who you're
giving power to and what you're giving up and what you're ceding control to. And I think that
some of the solutions lie in intentional friction. And so I'm just intrigued, if sort of the language
of the last 10 years has all been about efficiency and automation and speed, whether the language
of the next decade will be more around friction and slowness and resistance and pauses.
And I think there's something in that.
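As a toy illustration of what a designed-in trust pause might look like, here is a minimal sketch of the terms-and-conditions example above. The 30-second threshold and the prompt wording are invented for the example:

```python
# Minimal sketch of an intentional "trust pause" in a sign-up flow.
# The threshold and prompt are hypothetical, not any real product's design.
import time

def confirm_terms(terms_opened_at: float, min_read_seconds: float = 30.0) -> bool:
    """Refuse the reflexive one-click accept: if the user 'read' 38 pages
    in two seconds, surface a warm are-you-sure prompt before proceeding."""
    elapsed = time.time() - terms_opened_at
    if elapsed < min_read_seconds:
        answer = input("Are you sure? There's no way you've read all of this. Accept anyway? [y/N] ")
        return answer.strip().lower() == "y"
    return True
```

The point isn't to force people to read the contract; it's to turn an unconscious handover of trust into a conscious choice.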
I'm reminded of consent-based architecture.
As soon as I go to Europe from the United States, I see the difference, right?
Every page you go to on the internet suddenly pops up with this website would like to use cookies,
click on this link, click on this thing.
And of course, no one reads these things and everyone just hits the blue buttons.
Just get out of my way.
And it goes back to this notion of,
what is the distribution of cognitive labor?
We can't research everything.
It would be exhausting to live in that world.
And so it comes back to where do we want to devote and invest that conscious energy?
Where is it important to read the 38-page contract?
It's a fundamental question.
One of my favorite trust theorists is a guy called Diego Gambetta,
and he has a brilliant way of putting it,
where he says trust has two enemies, not one: bad character
and poor information.
And you think of so many problems in the world today,
they're actually tied to that problem.
Attention may be finite, but trust isn't that way.
And so what we actually need to do is be more mindful and careful
about the products and the information and the people and the companies
and the leaders that we're trusting.
How do we rebuild trust once it's been lost?
Because I found this interesting.
There's a study, I think it's Cass Sunstein,
basically showing, if you're a politician and you've made a mistake,
is it a good idea to publicly apologize?
Or if you publicly apologize, people just use that as evidence in a low-trust society to see how bad you are,
and they don't really listen or trust you in your apologies.
You know, I think Mark Zuckerberg has famously apologized 15 times over the last decade.
And so what does publicly apologizing get you and how do you successfully rebuild trust or reboot it when you've lost it?
I mean, that's a critical question for our time.
It's critical.
I'm actually just writing a piece on it right now.
And to be pedantic about language, because I think it's really important:
part of the problem is in that question,
which is this idea of building or rebuilding trust.
So why is that a problem?
Well, it's a problem not just because you're sort of alluding to the fact
that trust is this physical thing;
it's because it makes you think that you're in control of it.
How do I build or rebuild trust, right?
Well, you know, we'll have a strategy or have a plan,
or we'll get someone in communications to advise us, whatever it is.
If I do the right thing, then trust will just come back; I can control the outcome of it.
Trust is not something that you can build.
Trust is something that is given to you and that you have to earn.
Now that may sound like a really subtle distinction, but it's huge
because I think so many leaders, business leaders, political leaders,
whatever the sphere is, have this belief that if I control the setting and I control who's
interviewing me and I control the timing around this and I say the right thing, then I can
rebuild trust versus the public, the citizen, the user, the customer, they will decide when
they're ready to give their trust to you. Now that's not to say like in the period in between
that you shouldn't be trying to earn that trust and continuously earn that trust. Of course.
But I think what frustrates people is this idea, well, I made the apology, or I put out that
statement, or I did that press interview, or we made those changes to our terms and conditions,
or we paid that fine, or we fired the CEO, or whatever the thing is.
Like, I've paid the price now, so why aren't we through it?
Why isn't trust back?
Well, people aren't ready to give it back to you because you haven't demonstrated that you deserve it.
So that's the first thing, the timescale and the control:
often the person who's lost trust is different from the people who are giving their trust.
And we underestimate that.
The second thing is that when you look at particularly companies, but this applies to politicians
and even us as individuals, right, in relationships, when something goes wrong, what do we point to?
We point to capability problems.
We point to functional glitches, systems, bugs, design errors, algorithms,
engines, faulty products, whatever it is. Look at it in banking. Look at what's going on with Boeing
right now. Even WeWork with the business model; Facebook, the system, right? So the solutions
become capability fixes. If you look at whatever Facebook brings out, people almost laugh at it,
right? Because they're all capability fixes: we'll increase security measures, we'll change transparency
around ads. None of it is about character. And until people address the character side of the
equation, and the character side has to lead, particularly during a crisis, and these, you know,
through the lens of trust, you're really talking about integrity and empathy. They're the
traits that people are looking for. You have to put those front and center, because if you're
focusing on safety and product reliability and the competence and you can rely on this thing,
versus the character,
you're sort of going to the how versus why this happened.
And so you don't have any reassurance
that it's not going to happen again.
So if you were Mark Zuckerberg,
what would you do that would be a demonstration
of having the character that is worth the re-earning of our trust?
Sort of a contradiction in that question
because it wouldn't be a strategic decision about figuring out what I could say.
The premise is I would be the kind of
person who's coming from, like, some kind of deep humility. And I'm not calculating the humility.
I am coming from the place of I screwed up. You've hit the nail on the head. And this is why it's
so difficult now for them to recover because everything seems calculated, right? Everything seems like
a group of people in a room making a decision. Like even, I think the announcement was made
today where, you know, it's 'from Facebook' or 'by Facebook' that's appearing now on WhatsApp
and Instagram, tying the Facebook brand in now. It's at the bottom of your feed.
No, I didn't see that.
Yeah, it's just got it.
Like, get rid of it.
It's interference, right?
And now you can see, I can imagine the conversation.
They've hired a branding agency, right?
And people don't realize that WhatsApp and Instagram are part of our ecosystem and we keep
being hit with this.
So why don't we attach the Facebook brand to the WhatsApp brand, right?
But the intention still feels about them versus what really is in the best interests of users.
So I think for the leaders of Facebook, the
only thing that is going to work now is a very, very grand gesture around their
intentions and motives, to genuinely demonstrate that their intentions are in the best
interests of users. And that has to lie around the business model. I think anything else,
anything else, to be honest, just is a waste of time. It's not going to move people on.
Well, you know how much I agree with you. That's why we came up with, you know, this sort of
description that sometimes listening to tech leadership is like watching a hostage in a hostage
video. Like the things that they're saying don't make any sense until you see the gunman holding a
gun to their head from offstage. And the business model is that gun. And you're like, oh, that's why
they're acting so crazy and saying all that gibberish. And I want to be clear that, you know, I've met
many tech leaders and I don't think they're bad people. I think they are trapped. And I think the things
they are told internally that make complete sense internally don't work externally. You know,
I sat on a panel with Ruth Porat, who's the CFO of Alphabet, the holding company of Google.
And I remember this moment where she said that there's no trust issues with Google.
People perform trillions of searches on our platform every single day.
And my mouth like nearly dropped.
But then I realized like everything that's put in front of her, the indicators, the barometers, like the way they measure trust is in the same way that they measure
growth and profits and money. And so I think often the internal narrative doesn't help their
decisions and how they really need to behave, given the way the external world actually perceives what
is going on. Totally. Yeah. I mean, you come up with a narrative to tell your employees that,
you know, Facebook will say, well, we're just like a post office. We're just delivering messages
to people. You don't blame the post office if a bunch of bad people start sending bad messages
through the thing. We're just a neutral platform. Like all that is so, it sounds so convincing. And the
problem is when you're in that environment, you're living inside of the filter bubble. I mean,
we think filter bubbles are bad outside Facebook. They're actually much worse on the inside of many
of these corporations, as you said, and there's no incentive to talk about things, you know, humbly.
I don't know if you know this, Rachel. I used to be a tech entrepreneur. I had a small
startup company, and Google had acquired us, but it actually came through kind of a failure.
Like, the company was kind of dead. We had kind of stopped growing. There wasn't much we could do.
And here I was, I was 26, 27 years old.
I had 11, 12 people working for us.
And when you're running a startup and you've raised venture capital and you've got millions of dollars and you've got employees and their families relying on you and you've got to be successful, like if you doubt what you're doing, where can you safely express that doubt?
You know, I say this because as a tech founder, you're not often able to go anywhere, right?
You can talk to your co-founders about some things, but you can't fully doubt the whole thing.
You can talk to your board maybe a little bit, but they need you to succeed.
You can talk to your family, and they'll be there for you emotionally, but they won't understand the issues.
And so there's no safe place to go, if you think about it, where you can actually completely epistemically doubt the foundation of what you're doing, whether it's even good at all, right?
And I say this because we started this group called Doubt Club, where my founder friends and I
would, there were only a few of us, it was under hyper-secrecy, and I can talk about it now,
you know, six, seven years later, but we gathered in a room and went around in a circle
under complete confidentiality and expressed, you know, what are our doubts about our
companies, our missions, and then just in our lives. It's like a support group. And, you know,
it's remarkable how helpful people found it to be, because you got to actually think the thoughts
that, you know, were kind of boiling in your head, but you couldn't kind of go there.
And it led to several of the people either getting into talent acquisitions or abandoning their
companies or projects or doing something else.
And everybody found it very helpful.
But what I worry about is at this level, at this scale, where $500 billion of Facebook stock
and a trillion dollars of Google's market value are reliant on not asking those questions,
it's too costly.
And your cognitive dissonance will settle in:
you're not going to want to be thinking those thoughts in the future, so don't think about them now.
Yeah, and it's actually related to something Marc Benioff talked about.
As a leader in a tech company, there's always this push and pull tension between
sort of these three layers of trust that we always need to be thinking about. The first
is trust in yourself:
do you genuinely believe that you're making good, ethical, the right decisions? Then trust in
others, and then the trust other people have in you.
And to your point, I think for many
leaders, once you lose that faith in yourself, where does that take you? How do things start
to unravel? There isn't enough permission to actually say, I don't know. I don't know. It's very
interesting. It's like we're all obsessed with the trust crisis on the outside, but no one talks
about the trust crisis on the inside. Yeah, that's what I was getting at. Like, what is the
crisis of faith and confidence? I haven't seen that piece yet. Like, I've never seen anyone
sort of beautifully write that.
You know, you asked me what I would do if I was Zuckerberg. I would let someone in at that level,
into my deepest fears, the things I'm really struggling with, the things I know, the things I don't know,
things I thought I used to believe, things I no longer believe, things people tell me, things that I want to believe are true.
I'd really talk about that in a dialogue.
What does trust look like in the 21st century?
Are there any bright spots of hope, ways that you think this
is turning around? I do. I mean, this sounds very conceptual, but it is actually really
reassuring that trust isn't destroyed. You know, I found it really reassuring to think of trust
like energy, that it continually changes form. And I think it will find a new form. And that's what
we're living through now, that I think will take parts of the institutional world and parts of the
distributed world, and there will be examples where these two things come
together in media and in finance systems and in voting systems and in science and education
and knowledge. And that's what I remain optimistic about: how do you take these
older forms of faith in institutions and systems and merge them with what technology inherently
wants to do? And that's where I think the bright spots will live.
Rachel, it's been so great having you on the podcast.
Thank you so much for coming.
Pleasure.
Take care.
We are moving to a low trust world, right?
Which means that we're going to value face-to-face interactions more
because those are one of the few things you're going to be able to trust.
So that means for technologists, if you want to get ahead of this wave,
the thing to do is figure out how to make experiences
that get people off screens, which are low-trust environments,
and into real-life situations with people that you know,
which are high-trust situations.
Yes. Overall, I mean, so much of the humane technology conversation
and movement is kind of a back-to-the-land movement for the human psyche.
We already know how to do high-trust things.
It's in person.
It just so happens that rebuilding trust nicely corresponds
with the things that would also rebuild the social fabric.
Your undivided attention is produced by the Center for Humane Technology.
Our executive producer is Dan Kedmi.
Our associate producer is Natalie Jones.
Noor al-Samari helped with fact-checking. Original music and sound design by Ryan and Hayes Holiday.
Special thanks to the whole Center for Humane Technology team for making this podcast possible.
A very special thanks to the generous lead supporters of our work at the Center for Humane Technology,
including the Omidyar Network, the Gerald Schwartz and Heather Reisman Foundation,
the Patrick J. McGovern Foundation,
the Evolve Foundation, Craig Newmark Philanthropies, and the Knight Foundation, among many others.
Huge thanks from all of us.