Your Undivided Attention - With Great Power Comes... No Responsibility? — with Yaël Eisenstat
Episode Date: June 25, 2019
Aza sits down with Yael Eisenstat, a former CIA officer and a former advisor at the White House. When Yael noticed that Americans were having a harder and harder time finding common ground, she shifted her work from counter-extremism abroad to advising technology companies in the U.S. She believed as danger at home increased, her public sector experience could help fill a gap in Silicon Valley’s talent pool and chip away at the ways tech was contributing to polarization and election hacking. But when she joined Facebook in June 2018, things didn’t go as planned. Yael shares the lessons she learned and her perspective on government’s role in regulating tech, and Aza and Tristan raise questions about our relationships with these companies and the balance of power.
Transcript
I don't really care about rank.
I don't really care about getting up the ladder.
My goal was never to be CEO of Facebook.
In June 2018, Yael Eisenstadt,
coming from a career in the CIA fighting extremism in Eastern Africa,
and then serving as a national security advisor to Vice President Joe Biden at the White House,
accepted a job offer from Facebook.
They had hired her to help the company uphold the integrity of democratic elections worldwide.
I care about the mission.
I care about what is wrong in the world,
and I care about how I can help fix it.
And the point of this title, this shiny title, Head of Global Elections Integrity Ops,
is because that is what I was being asked to come do.
But almost immediately after she walked in the door, she ran into problems.
Day one was orientation.
Day two, my very first meeting with my manager.
First thing she let me know was, I'm changing your title.
Your title is now manager.
And then within a bit of time, it became crystal clear that she said
and I remain the single-threaded owner of elections work.
Before she even had a chance to get started,
the power and authority Yael had been promised was stripped away.
Had they ever taken the position seriously in the first place?
You can't say that I had made mistakes yet.
They can say that I didn't understand their business or whatever.
This was day two.
There are people from government who've gone into Facebook
and have really interesting roles.
For me, however, I was never empowered to actually do any
real work there.
Yael's story is bigger than her.
It's a glimpse of how things operate
at Facebook, whose voices are
heard, and what kind of change the company
is really prepared to make,
what kinds of problems they take seriously
or don't.
Today on the show, how did a small
Jewish-American woman in her 20s,
with fair skin and a pixie cut,
end up on the ground in Kenya and along the
east coast of Africa fighting extremism?
And what got her from there to a job
as national security advisor,
and then to Facebook with a promised mandate of protecting global election integrity,
only to leave six months later with a view that these companies cannot regulate themselves.
Finally, and most importantly, what are some solutions?
How can policy and government protections force the realignment of big technology's interests with our values
and the future of the democratic experiment?
The day we're releasing this episode, June 25th,
Tristan is testifying in the U.S. Senate in a hearing on Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms.
The big tech platforms can already predict us better than we think possible.
Many people think that Facebook listens to their conversations via their phones' microphones
because the targeted ads are just too on point.
Using our own data and the creepily accurate predictions their data voodoo dolls of us make,
the platform's asymmetric power over us will only grow.
From a regulatory standpoint, how can we protect against this growing power over?
In the second half of the show, Tristan and I talk through an emerging new framework using an old and enshrined concept, the fiduciary argument, for how to characterize our relationship with platforms that have an increasing dominance as the sense-making and decision-making GPSes of our lives.
I'm Tristan Harris. I'm Aza Raskin, and this is Your Undivided Attention.
Yael, it's such a pleasure to be talking with you.
I know you because you're the policy advisor for humane tech,
but as I've got to know you, that's possibly the least interesting of the roles you have had.
You've worked in some of the most dangerous and most beautiful places in the world.
You've worked in the White House, alongside some of the world's most powerful people
as national security advisor to Joe Biden.
In your 20s, you joined the CIA, going straight to the hotbed of extremism.
You went to Somalia and all along the eastern coast of Africa, sitting face to face with the other, getting to know them, their families, their communities, turning them from others into friends.
Then you made a huge change.
You came home and you changed careers.
You went from government and then you went to the private sector and then to technology.
If I'm characterizing it right, realizing the danger at home was greater than the danger abroad,
and that's a profound conclusion to reach.
Most recently, you were the global head of election integrity at Facebook.
That lasted six months.
We'll be coming back to that too.
Yael, welcome to your undivided attention.
Thank you. I'm so happy to be here talking with you.
So why were you in Somalia? What compelled you to go? It seems like such an unlikely fit.
So, yeah, just to clarify a little bit, I spent a lot of time in Somali communities along the border, but in Kenya.
Back then, we actually didn't have a presence in Somalia.
I've always just been really curious and drawn to other cultures, to sort of the global landscape, to challenges, problems, beauty, everything around the world.
I mean, I grew up here in the Silicon Valley.
And as a teenager, I remember telling my parents this was in the 80s,
and I told them I didn't want to just be in this bubble where nobody really knew what was going on
outside of Palo Alto.
And so they let me go overseas as a 15-year-old for a year.
So this itch in me started very young.
But I was really drawn to Africa a lot during my college years.
Oddly enough, as a musician, I threw a guitar on my back and went off to West Africa and just kind of hung out and played music and got sick and all that fun stuff.
Incredible.
And then it turned more into a professional interest.
I was always drawn to foreign policy and foreign affairs, but really wanted to understand
deeply how we all interconnect, how, what are the human sides to all of these different
issues we're dealing with around the world, and how can I connect more deeply with people
around the world, both to better serve the United States, in our foreign policy, in our global
presence, and just also as a personal curiosity. So Africa was someplace mostly, honestly, through
music and art that first drew me in, and then it became more of a political and professional
interest. I joined the CIA before September 11th. So when I joined, it really was, again, as I said, with this just curiosity of the world, I wanted to work on foreign policy issues, wanted to spend time specifically working in Africa.
And then after September 11th, of course, it went down a different path a little bit.
We did wake up that day knowing the world had profoundly changed, knowing that our careers had profoundly changed, and just wanting to play our part in helping figure out how to both prevent something like that from happening again, but how to also make the world a safer place in general.
And so then what did extremism feel like on the ground?
So I spent a few years living in Kenya from 2004 to 2006.
and amongst a variety of my roles, one of them was that I really was the one in charge of some of the
kind of counterterrorism work. Kenya had been the site of a few major attacks. Our embassy had been
blown up in Nairobi in 1998. There had been these attacks against a hotel and an airline
in Mombasa a few years later. So it really was this country that was at the center of some of our
major efforts. One of the things in my purview was to also be our representative to Northeastern Province, and those are the communities where a lot of the Somali
and a lot of the more Muslim populations lived.
And I just made a choice.
The way I looked at it was, how can I spend real time talking to and getting to know
people in communities that could be really vulnerable to being exploited and really vulnerable
to outside influence, and show them that, hey, you might have never actually met an American. You might not know much about
us. I'm here. I'm happy to talk. I want to get to know you. You get to know us. But really spent
hours like drinking tea with sheikhs and imams and engaging with youth groups and women's groups
and just really having lots of dialogues to understand what people's concerns are. But
the most important part was how can we really make sure that people understand us as who we are, as opposed to just what they see on TV or what they
might be hearing from somebody who might be trying to exploit them.
So I want to zoom out now.
You're coming out of government.
You could do pretty much anything.
And you decide to take a job at Facebook.
So it was within the business integrity part of Facebook.
And the title was the head of global elections integrity ops.
It's working in the belly of the beast.
You were there from June 2018 to November 2018. When I think back to that moment, you know, the 2016 election with the Russian election
like psycho-hacking is lingering. We have the 2020 election coming up, and that's, of course,
looming even bigger now. You had a couple big democratic elections coming up around the world as you
joined. What was that like? But first, like, how did you make that decision? In 2013, I started
thinking that it was time for me to leave government. When I came back from Kenya, I had just
the amazing honor to work at the White House as one of Vice President Biden's National Security
Advisors. And after doing a job like that, I just knew that I didn't want to get absorbed back
into big bureaucracy afterwards. And it was a time where I really wanted to see what does the
private sector bring to bear on some of the same challenges. And so my first pivot actually
was funny. Somebody said, what would you want to do if you could do anything? And I said, well,
I'd love to go work for some big, bad corporation with a huge presence in the world, particularly in Africa, and help them figure out how to do it better.
My first job out, I ended up at ExxonMobil, working on their corporate social responsibility
or corporate citizenship issues.
I am one of those people who fundamentally believes if you have the opportunity to go into
an organization that is not disappearing, that's having a profound impact in the world,
and that it definitely could be steered into a good or a bad direction, the type of person I am,
I have to take that challenge, as opposed to standing from the outside and screaming about it.
I want to go in and at least see if there's something I can do.
So I spent two years at Exxon.
I moved to New York in 2015 and was doing some consulting work.
And I think for me, the pivotal moment where I started thinking about the tech industry was watching our presidential elections heating up.
And I was watching, particularly in the social media world, this polarizing effect that was happening to the point where nobody could talk to each other
anymore. And I had never really published anything publicly. I'm a former CIA officer. You don't
really go into media. You don't really put your name out there. But there was just, I don't
remember the exact thing that happened, but it was something in the elections that just pushed me
so far into being concerned about the polarization that was happening here and that was being
purposely exacerbated that I wrote this piece in Time magazine exploring why had it been easier
for me to sit down and have an open conversation with the suspected terrorist along the
Somalia border than it was for me to talk to an American on the opposite side of a hot button
issue. And what did I think that meant for the future of our country? What did I think this
polarization meant for our ability to actually tackle any of the huge challenges in the world
that we care about and that matter both for our security, for our democracy, for all the things
I care about. And in this piece, I started exploring that I wanted to see if there was a way
to take that same hearts and minds work I'd been doing overseas and bring it back here to
do at home. How do I help bring more civil discourse? That's where it started. How do I help foster civil discourse here in the U.S.?
Because, you know, before Facebook, it was the media with the cable news networks.
They started really, in my opinion, fomenting a lot of this.
One of the critiques we'll often hear is, but this is nothing new.
We've always had stuff like this.
What is your response?
Yes, there's always been multiple voices and there's always been fearmongering and that's
always been a part of our discourse.
But there were still actually guardrails around how you could talk about political issues. You know, the fairness doctrine, it was this FCC, Federal Communications Commission, policy that basically said you had to show both viewpoints of any political argument or
any political discussion. And we all used to sit around and either watch the CBS evening news or,
you know, the NBC evening news. And the fairness doctrine was what regulated how they could have
those conversations. We've since gotten rid of the fairness doctrine. Part of me wants to ask, is there a way to have a fairness doctrine 2.0?
But as I was thinking about this, it occurred to me,
even when I was just chatting with Tristan the other day
while we were going through all of these ideas.
And, you know, he talks a lot about the Saturday morning cartoons,
how we don't have that anymore.
And I kind of joked, and then I realized it's not a joke.
At first I said, yeah, we've lost Mr. Rogers.
But then I realized we really have.
In this 2019 landscape of what we see online, on social media, with the way, even on television, the most salacious content wins, Mr. Rogers would never succeed today. That is such a profoundly sad thought. Speaking as someone who grew up watching Mr. Rogers, that feeling that the show gave me sort of reminds me of what childhood was really about. Yeah.
So I started speaking out about these issues, and then there's the moment where I outed my CIA past.
That made everyone start calling me.
I start speaking out at tech and innovation festivals about civil discourse, about how to get off the platforms and back to speaking to each other, about what can the tech industry do to start reversing this course.
And I did this podcast interview and I was asked my thoughts about Facebook.
And in my long answer, I made the following statement.
I said, you know, I'm sure Mark Zuckerberg didn't set out to destroy democracy.
But I do question who he has at his decision-making table.
And I guarantee you it's not somebody with my background.
So I guess I kind of manifested it.
A few weeks later, a recruiter called, and we started talking.
And then the day that Mark Zuckerberg was testifying on the hill, they called me that day and said,
we're having, she actually said, we're having an emergency meeting about you.
We know we really need you and want you, please give us till the end of the day.
And they called me back. And listen, the joke about Facebook knowing more about us than we know about ourselves, well, it's not a joke.
Yeah, it can predict us better than we can do it for ourselves, yeah.
I mean, they came up with the title that spoke so much to the core of who I am that it would
be impossible for someone like me to say no to, to ask me to come help work on elections integrity issues. Fundamentally, I'm still a public servant at heart. I still care more than
anything about these effects on democracy, these effects on polarization, on everything that's
happening in the U.S. And I negotiated a lot of, like, well, I need to know what this looks like and how it
works out, but it was impossible for me to say no to that offer. Yeah. I mean, Facebook is the
world's best at targeting. I've been hearing about Facebook hiring high-level government people. And my
skeptic side says, you know, that's a kind of impact washing. What was going through your head
there? What were your concerns going in? During some of the interviews and in the conversations
with the recruiter, I was very clear. You know, I actually laid it out. If you want somebody to do it
this way, then don't hire me. If you want someone to come in and really dig into how did we get
here and what are, and you know, kind of laid out some of my thoughts, then hire me. Like I was very
clear on who I was and how I would approach things.
What were some of those other questions that you were thinking about?
How did we get here? What else?
One of the questions I asked very clearly, I said, listen, I have been the outside the box
person brought in to help a company fix things before.
If you were bringing me in to be an outside the box thinker and to help you possibly
steer the ship in a different direction, I need to know that there's actual support for that
because you're setting me up to fail if you are bringing me in to bring a different
perspective, but then nobody actually wants to hear it.
I need to know that there's support from the top
on down to the bottom, that this is what people want.
So you've said what you want.
They've agreed to it.
You're ready to go in and make a change.
What was the difference between what happened
and what you were promised would happen?
You know, I'll actually just tell you what happened on day two,
and that will pretty much explain my entire experience there.
So I don't really care about rank.
I don't really care about getting up the ladder.
My goal was never to be CEO of Facebook.
I care about the mission.
I care about what is wrong in the world
and I care about how I can help fix it.
And the point of this title,
this shiny title, Head of Global Elections Integrity Ops,
is because that is what I was being asked to come do.
Day one was orientation, day two,
my very first meeting with my manager.
First thing she let me know was,
I'm changing your title.
Your title is now manager.
And then within a bit of time,
it became crystal clear that she said, and I remain the single-threaded owner of elections work.
Okay.
You can't say that I had made mistakes yet.
They can say that I didn't understand their business or whatever.
This was day two.
There are people from government who've gone into Facebook and have really interesting roles.
For me, however, I was never empowered to actually do any real work there.
A lot of the people I worked with were really excited I was there and were really hungry
for me to contribute to the conversation.
Higher up, I was never allowed to.
I mean, Yael, this seems so exceptionally profound.
Because what we're talking about here, as we said,
this is the hearts and minds generator machine.
We're talking about the integrity of the democratic experiment,
the entire world over,
and that they, in some sense, betrayed their promise to you.
They absolutely betrayed their promise to me.
Wow.
What do you take away from that?
You know, again, as I said, there's lots of things to take away.
As I mentioned, because I can only really speak to my experience as opposed to everybody
else's, you don't hire a former CIA officer and then ask them not to look under the
rugs.
There was a lot of this like, let's not look backwards, let's only look forwards.
You can't understand the problem if you're not willing to dig up the skeletons of how we got here. So that was one of the lessons. It was, just look forward. Like, let's move on.
What are we doing next? As opposed to, well, wait a minute. Fundamentally, how did we get here to
begin with? And first of all, the company's just gotten too big. Too many people are competing
to try to get to the top. And so some of that is just kind of that bureaucratic, messy,
middle management chaos. But the few things that I did try to do while I was there.
Yeah, what did you try to do and what were you not allowed to do?
You can make all the changes you want that are whack-a-mole changes, right?
This government is telling us X, Y, or Z, how do we reactively handle that?
It's more the proactive.
How do you, you know, a few things that we tried to do proactively were shut down.
And the broader...
Looking forward to problems that you thought were going to happen.
There was one very particular example that we tried as our team to put a plan together for, and it was just shut down.
And the questions back were, well, what's...
what's the prevalence, what's the scale?
And I kept saying, there's no prevalence right now.
My team is saying that we think this is what's going to happen the week of the midterms.
Let's build out a program to make sure it doesn't.
So first of all, talking about future threats and trying to build ways, to me, from my team, that just got shut down.
But more importantly, it was, I do remember I was in India, actually, with the team doing some research ahead of the Indian elections.
And I just asked the question of one of the people there who's been there a long time and is pretty senior at Facebook.
And I said, you know, you're doing great work trying to like figure this out around the world.
But have we ever sat back and asked the broader question of who do we as Facebook want to be in this space?
Like if we ever really sat back and had that tougher strategic conversation of who do we want to be?
And this person said, no, we haven't.
But what I meant by that is if you really want to address these issues, you have to make a fundamental decision at the leadership level of what is more important:
my so-called fiduciary responsibility to my shareholders,
Yeah.
Or my responsibility to the broader society.
Yeah.
And unfortunately, the one thing I don't remember ever being a part of a conversation there
was anything that actually said,
but our business model is the reason why this is all happening.
That is such a profound point.
There's this kind of denial, I think, going on.
We're making the world a better place.
Look, look, connection, connection.
Look at all these positives, and just then blinded by that goodness and unable to ask the really simple question of, like, given this business model, what has to happen? Oh, we all have to live in this sort of amygdala limbic world where we're our most aggressive selves.
So you've said some pretty profound things about Facebook that, at least for me, when I hear them, I'm like, oh yeah, I couldn't possibly trust them to fix this problem themselves. Why can you talk so freely about what went on at Facebook?
When I left, it's interesting, I was actually
sitting on my couch watching the blowback on the news from that New York Times piece
Delay, Deny and Deflect, the one about Facebook and a lot about Sheryl Sandberg. And I'm watching
the news and I get this email from this HR person at Facebook reminding me that I haven't
signed my paperwork yet. And what does that mean? That means that in order to get your
severance package and health insurance and all that fun stuff, they want you to sign a non-disparagement agreement. Listen, I don't need to go toxic just for a fun, toxic talking point about
Facebook to self-promote and get all of the news. To me, fundamentally fixing this is so important
for the future of everything I care about that I wrote back to them. I said, I won't be signing the
paperwork. Thank you. Left the severance on the table. And I need to be able to maintain my
ability to have my voice to use my experiences and my knowledge to try to help fix it, which is
what I'm trying, in part, to do now, working with you guys as well.
Yeah.
Yeah.
So just to put a dot on it, when employees leave these major Silicon Valley companies,
they almost are always offered, especially the higher up you are,
a severance package that says, like, in exchange for you, not saying anything disparaging
about us, we won't say anything disparaging about you.
You get this money and you walk.
you thought it was more important to keep your integrity than was to take that money.
That's what I just heard.
Absolutely.
Absolutely.
I worked very hard to claim my voice after leaving the CIA.
I worked very hard to get to the point where I was brave enough to speak out.
And I know that sounds weird, but yes, speaking out about your CIA past is a very uncomfortable thing to do.
And there was no way that I was, A, going to allow Facebook to be the one to silence my voice.
But B, and it's not like they offered me millions of dollars, I don't want to overstate the case, but what is happening with the social media industry right now and some of the issues that we'll get into and some of the things that I think need to be fixed are so fundamentally important to the future of this country and to the future of people I care about, that that was way more important.
But also my integrity is the number one thing that matters to me.
there was no way I could take that money and silence myself.
Yeah, that makes perfect sense.
I really want to get into, like, what are solutions here?
Yeah.
But is this why?
Like, why do we need government intervention?
There's this sort of salacious, self-dealing meme in Silicon Valley,
which is, oh, government's just too stupid to understand this.
So if they try to come in and regulate, it's just going to be a mess.
We don't want that.
We're going to break innovation.
Just like in any company, in any place in the world,
world, there are smart and not so smart people in government as well. The talking point that
the U.S. government is too stupid to figure this out and therefore leave us alone, who does that
talking point benefit? The talking point benefits the companies that don't want government to
regulate them. First of all, I'm not going to get into a whole Civics 101 speech here, but there's
lots of different parts of government, and I do think it's worthwhile recognizing the difference
between an elected official who may grandstand on a Senate or congressional hearing a bit
versus the civil servants who are working every day in government to actually protect citizens,
craft good policies, all of that.
I would consider myself as one of those people.
So I don't think I am so stupid that I can't figure this out.
But as long as we continue to erode trust in the ability of government to step in and handle any of this,
then it lends to that talking point of we here in private industry
or we here in the Silicon Valley are smarter.
What we are building is the backbone of the American economy.
At this point in time, I think many people have lost that moral high ground
of being able to say we're the ones who will fix it.
Government is the only thing the size of which can counteract tech,
and this is just another self-dealing meme that lets tech eat more and more and more of the world,
the public square, without any repercussions.
Right.
I would also offer, when you're talking about senators and congressmen and women during hearings,
don't forget that that same person is also dealing with what's going on in North Korea,
is also dealing with what's going on in Venezuela,
is also dealing with, you know, the manufacturing industry in the United States
and what the future of work looks like.
I mean, they're dealing with every single issue in the world.
Hey, this is Aza.
Yael's point here really resonated with us, so we're going to pause here and explore it more. Government officials are dealing with a lot. What do they and their staff members need from
us to be able to understand these issues better? Tristan and I have some ideas. What I find
interesting about those hearings when Mark Zuckerberg went in front of Congress is they were
five hours or something long. I mean, there was multiple sessions, hours and hours and hours
of questions. But what does popular culture remember about those hearings?
Senator, yes. There will always be a version of Facebook that is free. It is our mission to try to help connect everyone around the world and to bring the world closer together.
In order to do that, we believe that we need to offer a service that everyone can afford, and we're committed to doing that.
Well, if so, how do you sustain a business model in which users don't pay for your service?
Senator, we run ads.
I see. That's great.
And what people take away from that one memory is that Congress doesn't get it.
And we would never, therefore, trust them to regulate these companies.
And I think the point that Yael is making is that it's not about the five hours of testimony.
In the attention economy, it's a race to figure out what can I get people to remember and hold on to.
And I think that if I was Facebook, I don't think that they did this, but I would have wanted that question to happen because it forced people to have just one memory leaving it, which is that we shouldn't trust government to regulate.
And I think we have to examine that question because the fundamental thing here, you know, with our guiding philosophy at CHT is we have Paleolithic emotions, which are on a fixed clock rate. Our evolutionary instincts aren't changing. We have medieval institutions that get updates about every four years with some new people in it. And then we have godlike technology that's increasing at an accelerating rate. So just imagine a world where the clock rates of your car are getting exponentially faster while your steering wheel is still lagging behind every four years. Like, that doesn't work.
you're going to go off the cliff by default.
And so that's the issue is we have to align these clock rates
so that our Paleolithic instincts match up with upgrading,
you know, the frequency and wisdom of our medieval institutions,
upgrading with, you know, the slowing down probably of our godlike technology.
Because we don't want self-destructive godlike technology.
It is intrinsically self-terminating if we cannot align the clock rates
of the guiding and control mechanism with the speed and evolution of tech.
And I think that's why we have to refute this idea that the government can't regulate it.
We need government to regulate it.
What was powerful for me about that point that Yael made is, you know, I've caught myself thinking that set of thoughts.
I'm like, I don't really think government has what it takes to understand technology, especially as it's getting more and more complicated. So if they don't understand it, then I don't really think that I'd want them to regulate it, because they're going to mess things up. And I want to change my own internal memetics to being, ah, it's then my job as a
technologist to help upgrade the capacity of Congress, of our legislative system, whether it's
by writing articles or explanations or getting to know people, whatever it is.
Like, I should be asking the question, cool, how can I help?
Right.
The reason that people have so little faith is because they are dealing with more complexity,
more problems at tighter and tighter timescales,
it is understandable,
and with more political misincentives
and the whole thing and all that.
But what we want to add as a co-processor,
that's like something like an Office of Technology Assessment
would add like a moral co-processor
that can do faster updating on, you know,
here are the issues with technology,
and let's actually farm that out to some expertise
so that we can get some better ideas and policies
at a faster rate.
And we used to have an Office of Technology Assessment,
and we can bring that back.
And the whole point is, at an age of exponential tech, where the issues are only getting
crazier and more complex, we need to add some speed and wisdom to the oversight power
of government, one way or another.
And there are a few components that go into building that co-processor.
One is helped from people in the inside of tech companies, and another is from people
on the outside who work with government to get them up to speed.
Then there's the question of form.
How can we better match the clock rate of government with the clock rate of technology?
How do we as technologists expand our government's capacity?
Let's get back to Yael and hear what she says.
So what can we do?
Yeah, well, the biggest problem, the elephant in the room that we're not going to be able to
completely upend in the way we want is our capitalist system.
Sure.
This is the, oh, you're just talking about capitalism, so just replace capitalism.
Whenever people go down this route, and like, of course, that's a really interesting, big
conversation. But, you know, we're talking really here about the negative externalities of the technology we create polluting a public resource, the environment. So if we just replaced all of our gas and our polluting technologies, our extractive energy technologies, with regenerative technologies, solar and wind, and you kept capitalism the same, that would be a world that I would much rather live in. So we don't have to replace all of capitalism to make a change, to get to a world where we want to live. But at the same time,
The idea of unfettered capitalism matched with unfettered innovation is just not a sustainable
situation.
And so there's a number of things we need to do.
When you start to hear people go, well, data privacy isn't the most important thing, or, well, changing the business model isn't the most important thing, or breaking up Facebook or antitrust, these are all pieces of a larger puzzle, and every single one of these pieces matter.
So for my lens, to take a step back, the thing I care most about and the thing I look at is responsibility.
And so responsibility, accountability, liability, these are all sort of the same terms.
But, you know, government, we know what government's responsibility is.
And it doesn't mean that they always do it right.
And it doesn't mean that they're perfect at it.
But we know what their responsibility is.
Their responsibility is to protect the citizens of this country.
Their responsibility is to take care of the most downtrodden.
But really, the protection of our democracy, of our citizens is government's responsibility.
What nobody has defined yet or what I haven't heard a definition of
is what is the responsibility of these companies.
Whose responsibility is it when a real world situation happens off platform
that was enabled, exacerbated, or, you know,
happened because of something that happened on your platform?
And that's where I, that's one of the three pieces I really care about
is whose responsibility is it and how do we get to that?
And so one area of that,
which is debated a lot, is the idea of how do you define a Facebook's responsibility in some of this?
And one piece of that comes down to this idea of should we or should we not reform CDA 230,
which is the Communications Decency Act 230.
So this piece of legislation was written in 1996.
And so for listeners to know what that did in particular is that it meant that platforms were not responsible legally for the content that a user uploaded.
So they had no liability, because you don't know what the users are going to upload. And it was this deregulation that said platforms are not responsible that, as software ate the world, let deregulation eat the world.
So now, as you were saying, 1996, it's been a long time.
We've learned a lot since then.
A lot of the conversation around CDA 230 right now, even that starts to become polarizing, right?
It gets broken down to if you get rid of 230, then you are anti-free speech.
On the one hand, you have this argument of it'll kill the entire internet.
That's one big argument.
Another argument is, well, freedom of speech is more important.
So all of these arguments, it polarizes us even around the CDA 230 conversation.
You're either pro or against free speech.
You're either pro or against innovation.
And I don't think it's any of those things.
There's a nuance missing there.
This is the term that I've certainly been helping to champion, which is the freedom of speech is not the same thing as the freedom of
reach. And it's that nuance that gets confused. That means we always go down the path of,
ah, this is about content moderation, not about the systemic change that needs to happen.
That is my favorite line, because the way I look at it is, a better way to look at CDA 230 is
I actually don't want Facebook and Google and Twitter to be regulated the same way as the New York
Times. I actually think the New York Times is more responsible. I look at a Facebook or other
platforms and I say, as long as they're curating my content, right? They aren't just putting everything
in order of everything that's being posted in front of me.
They're curating what I'm seeing.
They're amplifying content.
They're doing it in order to keep our eyes on the screen,
which CHT talks about all the time.
And they're doing that in order to sell ads, right?
And my thing is they're not a publisher.
They're not a media company.
We need to figure out what they are and regulate them accordingly.
But instead of it being an all or nothing,
either they're media or they're not,
either they're protected by 230 or they're not.
I say figure out what they are, which is a digital curator or a digital amplifier or whatever
term you want.
And actually, that's more dangerous because they're the ones who not only decide what we see,
but they decide what they're going to amplify.
And figure out if you fit into that category of curation and amplification, and maybe
there's a threshold of how many users you have, then you need to be regulated as such.
So the other thing that they're allowed to hide behind, as long as 230 is not amended, is everything that I saw happening with a lot of the things that we're asking Facebook to deal with.
Like let's say there's a blackout during an election in a certain country and that country's laws say that you cannot show political advertising for two weeks before an election.
The answer was always you show us what's happening and we will take it down.
It's always pushing the responsibility onto somebody else.
And yes, they try to train the machines to take down certain content in advance.
But at the end of the day, there's no actual legal responsibility for what is on your platform.
And as long as there's no responsibility, they will continue to push that onto the users and say, well, we're doing everything we can.
But it's the user's responsibility to flag it to us.
At the end of the day, this falls on government as well.
I can be as angry as I want to be at some of the things that are happening in the social media world, but can I completely blame a company like a board at Facebook for being 100% committed to their business model and profit as long as that's not illegal, right?
As long as there haven't been rules written and they're not breaking the rules, then I can say, I hope your better self knows that there is a better way to affect society.
but those guardrails are not in place.
And so that is...
So what I'm hearing you say is that their behavior,
the behavior you saw directly at Facebook,
it's egregious, but it's not illegal.
That's right.
And until we shift, and the role of policy here is to shift the responsibility, closing the balance sheet of the externalities against society, companies are just going to continue doing that, regardless of who's in charge.
For sure.
And the externalities part is so critical to it, right? If we cannot outlaw this business model, which I hope there is a way
to someday outlaw this business model, but if we can't, then how do you make it so expensive that it is
no longer the smartest way to operate? And part of that is how do you quantify the externalities?
And this is a lot of what I know CHT has also been looking at, right? How do you decide,
how do you quantify the attention extraction, what effect that's having on public health,
what effect that's having on polarization, or even on productivity, which is something you can
quantify and then put in terms of GDP and then decide how to tax. I know tax is an evil, dirty word
in the Silicon Valley, but these externalities are affecting society. And as long as you can
figure out how to quantify that, you can hopefully make the business model unsustainable, but bigger
than that. And this is something that I know Tristan is starting to talk about quite a bit.
and you guys will certainly be talking about
as that fiduciary responsibility.
Aza again.
Okay, fiduciary responsibility
or a fiduciary relationship,
it's an old concept
and honestly one that I didn't know
before diving into this work.
It's a way the law in the U.S.
makes sense of relationships
where one party has asymmetric power over another.
I ask Tristan to explain.
There's constitutional law,
which defines a relationship
between individuals and government.
There's legislative law or contractual law,
which defines the right relationship
between individuals and each other in society.
And then there's fiduciary law,
which is between doctors and patients
and therapists and clients
that has to do with essentially protecting
the asymmetry of power.
I mean, just imagine a world where every single doctor,
if you live in the United States,
every doctor's business model
was to not give you the drugs
that would help you the most,
but just give you the drugs
they would make the most money from.
Imagine that world. Like, oh my god, that would be horrific. It'd be this sort of dystopia of hell, I mean, kind of a hell of health care. Or lawyers, where every single lawyer was like, oh, now you told me all that information, now I'm going to go sell it to the other lawyers, and I'm going to go manipulate you and go trade on Wall Street using all the financial details that I found about you. In order for services to be rendered in this context, like with a priest or lawyer, they have to collect information from the client that could be used to compromise the integrity of that client.
And the degree of that compromising information
is the degree to which it must not be
an equal contract relationship.
And the big deception in Silicon Valley
is that they are in an equal party's relationship.
We're just giving you what you want.
You clicked play.
You did this thing.
You scrolled.
You are an equal party in this relationship.
But that's missing in the first case,
the fact that there's 1,000 engineers
on the other side of the screen
with a huge amount of asymmetry of power
knowing what will persuade you to keep scrolling.
Or in the case of AI, an increasing level of predictive capacity.
So that asymmetry is growing because they can predict even more invisible features about you
that you don't know about yourself.
We say it's like Silicon Valley designs its products with behavioral economics,
which is to say with the economics of manipulation, changing choice architectures using that asymmetry.
And they defend themselves to Congress and governments using regular neoliberal economics
that humans are free, rational choosers, agents of their own design, making their own choices
throughout the world. So they're pretending that they're in this equal contract relationship
while actually being in an asymmetric relationship. Now when I say that, I don't want people to think
that, you know, we think this is like there's this diabolical manipulation happening. I think they
actually kind of, we have all collectively in Silicon Valley, slow walked ourselves into this
position of asymmetry without really realizing it. But now that we're here, there's a defense going on
where the last thing they would want is to be recognized for having this asymmetric duty of care relationship,
where they have to have a fiduciary duty of care, a caring relationship with the people that they're serving
because they have such asymmetric power over the other's weaknesses.
As a government person and as a policymaker, you know, you want to be thinking about this asymmetry of power.
Like imagine a world where priests are getting exponentially smarter like every passing minute and like the level of information they're doing.
And so you're trying to protect not just against today's level of asymmetry of what a priest knows about everyone in their town, but like the exponentiation of that asymmetry.
So it's very simple.
We have to go from a contract relationship, which has been false all along, we've been sold a bill of goods that's not true, to a fiduciary relationship.
Let's even just call that a caring relationship that puts your interests first.
And there's a professional standard and responsibility, where your license or your ability or capacity to provide that service gets taken away. You know, you have to have a responsibility
to the community that you are inside of and serving. And what's wrong with the technology
companies right now in the business model is it has none of that responsibility. And YouTube
is still recommending conspiracy theories and crazy stuff, and it hasn't fundamentally changed.
And so that's why we just need to just bite the bullet here and switch to a fiduciary model.
And that's the biggest, most powerful action that government can help make possible. And this is
actually being discussed right now in the UK with something called the duty of care. But
that's a little bit lighter and more ambiguous. I think we need something stronger. But this is
the kind of conversation that we really need to have is what are these companies and these products
in service to? And it's like what you talk about, Aza, like what is this technology for? Is it for
maximally manipulating the limits of the human nervous system with increasing asymmetry and asymmetric
power over the limits of our nervous system? Or is it for being in service of strengthening
our social fabric and strengthening our communities and strengthening the family and strengthening
democracy. We have to make this choice. And making this choice also means backing it up with
resources and regulations. Let's make sure we're at least making visible where the platforms'
business models are at odds with democratic values and our best interests. The point is this is a
systemic problem. As long as it isn't illegal or there aren't major fiscal repercussions, companies will
always be incentivized to trade what is right for what is effective, and their millions of
A/B tests will automatically and silently find all of our weak spots and choose against our values
in favor of engagement. The goal of policy is to find ways of making the externalities expensive
without legislating product decisions. I love your points about responsibility, because
right now the only way these companies have any incentive to deal with the problem is if someone
in civil society goes out of the way, often under-resourced, to do a whole bunch of research
to figure out where these externalities are, right? We only discover that YouTube is surrounding
people with this sort of stepping stone path towards, say, pedophilia because of outside
independent researchers. Right. So one of the things that I'm passionate about is the idea of amplification transparency. It's a small but I think very powerful first
step, which just says, hey, you know, platforms, you're not yet responsible for any of this stuff,
but at the very least, we should see how many times you've amplified or recommended a piece
of content, because then we can decide as civil society whether that fits our values or not.
And right now they're just hiding it all. It's as if there's a patient that has cancer,
and you really want to go in and you have to remove just the right pieces of cancer from the body
without destroying the whole thing.
But because we can't see,
it's like there's a blanket over the patient.
We're just like having to jab through and guess.
So amplification transparency,
which technically is pretty easy,
seems like the first way of opening up these platforms
so that they have to have accountability
to the rest of civil society.
Yeah, I mean, so that's a perfect example
of one of the puzzle pieces, right?
When I know that there'll, you'll say that
and there'll be somebody in the solution space
who will go, but that's not enough.
I agree.
It's not enough.
No, no. Of course it's not enough, but it is such an important piece of it. Even if we do want to get to the point of responsibility and how do we figure out whose responsibility it is, especially in the CDA 230 conversation, we have to know how these things are being amplified. We have to be able to see that. So I think that's an incredibly important part of a larger pieces of this puzzle of how are we going to even tackle this. But the other thing, and this sounds like a bit of a shift, but one of the things I find very funny, so we're sitting here in Sanford.
You have some of the most brilliant minds here in the Silicon Valley that build incredible
technologies, build incredible companies.
And what I find fascinating is how you can have the smartest people working on these
things, but as soon as there is a problem, oh, that's too hard to fix.
How many times have we heard Mark Zuckerberg or Sheryl Sandberg say, it's really hard?
We're sorry, we know we need to do better, but it's really hard.
And so this might sound like a harsh statement, but if it is so hard for you to figure out why certain content is going viral, the New Zealand attack is a perfect example.
If it is so hard for you to figure out why your algorithms are doing certain things, I would say shut it down.
I don't mean shut down Facebook.
Shut down your recommendation system and rebuild it or figure it out.
Because if you were smart enough to build this system, how are you not smart enough to be able to fix it?
And so it's just this weird cognitive dissonance for me when I keep hearing about how hard it is.
One of the steps you just offered is a perfectly viable step that nobody seems to want to do.
Let's make it more transparent about what these algorithms are doing, how the curation is happening, how the amplification is happening.
And if you don't understand it, shut down that recommendation system.
You know, that reminds me, like, what is the lesson of the paperclip maximizer, the AI where you give it an objective function.
It goes off and you say, make paper clips.
And so it just turns the universe into paper clips because that's all it knows.
It's be careful what you wish for, right?
They don't know how to turn it off or turn it down or fix it.
That's exactly the moment where you're like, oh, hey, the AI is starting to do that thing
we've always really worried about.
Maybe we should turn it off.
Yeah.
I mean, if I can tie it back a bit to my government experience, when you bring people around
the table to talk about their hopes, dreams, desires, no matter how politically different they are,
face-to-face, people generally find some sort of common humanity, right? At the end of the day,
they love their children, they love their siblings, they want a better world. They may not agree
on the approaches, but in general they find their humanity. And that was, as long as I was in government,
really like spending an incredible amount of time face-to-face with people trying to build
these bridges, I always found that to be true. A radical imam who had been preaching against the U.S. for years sat down and spent four hours
with me. It doesn't mean we are best friends at the end of the day, but we found a common
humanity. Even the people fighting on Facebook, if you were to take them off that platform and
sit them down in the same room, I really believe at the end of the day most people do want
to see the common humanity in people. And so I'm just bringing it back to that is why I am so
fundamentally concerned about the fact that these platforms are 100% unregulated.
They control the keys to our public square.
They control the keys to our deepest emotions.
They control the keys to how we are interacting with each other.
And there's no guardrails built in.
There's nobody who's saying, how do we slow this down?
So that we can make sure you are not completely destroying our ability to find common ground on anything
so that we can actually tackle the real challenges happening here.
This lack of responsibility.
Yeah.
That exists in this industry is something that I just find incredibly unacceptable.
So as a technologist, how do I go about helping build capacity for government?
That's such a great and interesting question.
I don't know the answer to this, but I want to figure out how to help overcome this unfortunate mentality of a lot of people, I'm not saying everybody, but a lot of people here, of don't trust the government, don't work with the government, the government's too stupid, or, you know, post-Snowden revelations, it's we can't trust the
government with anything, which I find funny because, again, Facebook still knows more about you
than the CIA and FBI ever will.
I mean, they do, right?
Yeah.
So not just because of what you post on Facebook, but because of how they're following your patterns
all over the internet.
Yeah, yeah, yeah, yeah, they can predict when you're pregnant, they can predict when you're depressed.
I want to talk to as many people in this industry as possible and show them, like, how can we get to the point
where you can trust your government a little bit more
to start figuring out how do we build this bridge
and do this together.
But also, I would really love to find a way
to inspire more people to actually think of government service.
We need some of these really bright technology minds
to work in government, to help government figure this out.
Every senator and congressman may not be a perfect,
well, a lot of them have no idea how the technology industry works.
But behind every senator and congressman
is a whole staff of people who are working really hard on these things.
And I know there are some organizations out there that are trying to get more technologists placed as fellows, for example, on the Hill.
I think Tech Congress is one that's doing that.
Things like that are so important.
And it's important for what everybody cares about because government does not want to stifle innovation.
Government does not want to destroy Silicon Valley.
Government wants Silicon Valley to help the United States be the most thriving country we can be.
And so pitting government and the Silicon Valley against each other at the end of the day is not benefiting any of us.
I love a lot of the work that CHT does to try to educate and inspire people within companies to also think more humanely about the products they're building.
That's a huge step.
I do think there needs to be accountability at the leadership level.
And how would you implement that really fast?
That's something that a board has to do.
And as long as a public board's only responsibility is the shareholder's bottom line,
then that's never going to benefit greater society.
Well, I shouldn't say never, it depends on the company.
But, you know, government has to step up and say what is happening in a company like Facebook
and some of the things happening at YouTube, these are no longer going to just have no guardrails.
That has to be regulated.
Yael, thank you so very, very much.
This has been fascinating.
Thank you.
It's been great chatting with you.
Next week on the show, we interview Guillaume Chaslot, an AI expert and former software engineer at YouTube.
The YouTube algorithm has 10 billion videos, or I don't know how many billion videos, and it chooses the 10 to show to you in front of your screen.
And then you have just a tiny little choice between those 10 to choose which one you want to see.
So 99.9999% of the choice is from an algorithm that you don't understand and you don't control.
Guillaume explains how on YouTube it's possible to start out watching kitten videos
and end up on flat earth conspiracies hours later.
And he'll tell us what YouTube could do to stop promoting this kind of algorithmic extremism.
Your undivided attention is produced by the Center for Humane Technology.
Our executive producer is Dan Kedmi, our associate producer is Natalie Jones. Original music by Ryan and Hayes Holiday. Henry Lerner helped with fact
checking. Special thanks to Abby Hall, Brooke Clinton, Randy Fernando, Colleen Hakes, David Jay,
and the whole Center for Humane Technology team for making this podcast possible.
And a very special thanks to our generous lead supporters of the Center for Humane Technology
who make all of our work possible, including the Gerald Schwartz and Heather Reesman Foundation,
the Omidyar Network, the Patrick J. McGovern Foundation, Craig Newmark Philanthropies, Knight Foundation, Evolve Foundation, and Ford Foundation, among many others.