Your Undivided Attention - Spotlight — The Facebook Files with Tristan Harris, Frank Luntz, and Daniel Schmachtenberger
Episode Date: September 21, 2021
On September 13th, the Wall Street Journal released The Facebook Files, an ongoing investigation of the extent to which Facebook's problems are meticulously known inside the company — all the way up... to Mark Zuckerberg. Pollster Frank Luntz invited Tristan Harris along with friend and mentor Daniel Schmachtenberger to discuss the implications in a live webinar. In this bonus episode of Your Undivided Attention, Tristan and Daniel amplify the scope of the public conversation about The Facebook Files beyond the platform, and into its business model, our regulatory structure, and human nature itself.
Transcript
Hey everyone, it's Tristan, and this is a bonus episode of Your Undivided Attention,
the podcast from the Center for Humane Technology.
As you may have heard, last week the Wall Street Journal released the Facebook files,
a huge investigation of the extent to which Facebook's problems are known inside the company,
all the way up to Mark Zuckerberg.
To respond to the story, I was invited by pollster Frank Luntz to talk about this on his weekly webinar,
together with my friend, Daniel Schmachtenberger.
Daniel is a founding member of the Consilience Project, which is aimed at improving public sense-making and dialogue.
And you may remember him from Your Undivided Attention Episode 36.
A problem well-stated is a problem half-solved.
Frank Luntz is a political and communications consultant, pollster, and pundit who deeply understands the hopes and fears of Americans.
So without further ado, here's my conversation hosted by Frank Luntz with Daniel Schmachtenberger about the Wall Street Journal's Facebook files.
As people get brought in to the Zoom, I don't know if I've ever done a Fridays with Frank that was more appropriate and more timely because of all that has happened in the last seven days.
Tristan Harris, congratulations on winning two Emmy Awards. You're the first personal friend I have that is actually a multiple Emmy Award winner. Good for you. You should feel very proud.
Thank you.
And you're already getting a comment from one of the people who is listening in.
And Daniel, what you are trying to achieve with a more constructive,
a more open, a more useful dialogue,
and the teaching of civility and decency in how we communicate in the public square
is something that we should all emulate.
I am a proponent of technology.
I am a supporter of it.
We're going to hear a lot of criticism today because of the problems.
I do want to open up saying that I believe in it,
believe in what it has done for us,
and in fact, I'm going to do something I've never done
on one of these Fridays with Frank,
which is I'm actually going to show some data
that we've not shown publicly until now.
We've been looking at technology
and how people react to it.
This is important.
We ask the question in the opposite way
that most people do. How would your life be different if you didn't have all that technology,
the stuff that we use every day, Google, Amazon, YouTube? And the public, by almost two to one,
say that the quality of their life would be better without that technology. However, we then wanted to
know whether technology has made their life easier or more difficult. And overwhelmingly, they say
that technology has made it much easier to keep in touch with people, as well as issues
that are important to you. Another example, it's given people more choices. It's actually made
their lives easier to consume because you get more services and more products. And again,
numbers are overwhelming. In terms of making shopping hassle-free: 63% easier, only 6% harder. We've got more of
these. Saving money on the things you buy: overwhelmingly easier. One more. Has it made it easier or
harder to get involved in politics by 47 to 10? They say it's made it easier. Again, I go back to
that very first statistic I showed you. Not easier, but better. The public has an issue with that.
So let me go to you, Tristan. And again, congratulations on your success. We've talked about this.
So you and I have known each other for a year and a half.
Our meeting was a chance encounter arranged by a friend of mine who said
I must sit down with you.
And I admit that I was going to not show up, or I was going to cancel the meeting.
And probably in the year 2020, you're the single most important person I met.
You look at that data.
You know how much people need and want and value technology,
but you also know the consequences.
What have you learned in the last seven days?
The Wall Street Journal has been pummeling Facebook
and really shining a bright light on social media.
What have you learned over the last week
that would be helpful for all the people
who are on this Zoom?
Thank you, Frank.
And yeah, really pleasure to be here with both you and Daniel.
So for those who don't know,
over the last seven days, the Wall Street Journal has released a new series called the Facebook Files.
This looks like it's the largest event since Cambridge Analytica in terms of revealing research showing that the company has been aware of harms across teenage mental health, increases in teen suicide, body image issues for teenagers, and the radicalization of political parties.
There's evidence that Facebook changed its ranking systems in a way that caused political parties to actually tell Facebook:
we know that you changed your algorithm, because we now have to publish 80% negative content about our opponents to even get the attention we used to.
We know that publishers had to learn to publish more negative content to get any attention.
I just really recommend that people check out the Facebook Files, because it's really the first time that there's
evidence of so many of the things, Frank, that you and I, because we've done one or two of
these before, you know, have been saying for a long time, and that we said in The Social
Dilemma. Yeah, for those who don't know, The Social Dilemma just also won a couple of Emmy
Awards. It came out a year ago; we just passed the one-year
anniversary. And really what The Social Dilemma is about, to answer your question, Frank,
is it's not about technology. It's about these certain kind of incentive systems that are built
into technology. So if you take a look at Facebook, TikTok, Snapchat, YouTube, what do they
have in common? They seem like they're different products. Like one is a video broadcasting
site, that's YouTube. The other is a social networking tweet site, Twitter. So they seem like
different categories. But their business models are all optimizing for the same thing, which is
whatever gets people's attention. And so I think that is the generator function of all the
harms because in the same way that a values blind economy that's counting GDP, well, war is good for
GDP, prostitution and drugs are good for GDP, human trafficking is good for GDP, in the same
way, things that are good for attention that are not things that we want, well, body image
issues that have kids basically, you know, endlessly looking at anorexia videos, that's really
good for keeping time spent up. Addiction is really good for keeping time spent up. Negativity and
outrage and things that go viral; as we said in The Social Dilemma, fake news
spreads six times faster than true news, because the speaker who can say anything that they want
to, unconstrained, meaning they can lie, is going to do better than a person who has to wait
and say, well, what's actually true? The unconstrained actor is going to win. So per your slides,
Frank, the thing here is not that it's about technology being good or bad. It's about the kind
of technology and incentives that we bind to the technology. And the business model of maximizing
engagement, what we found out in the Wall Street Journal articles. And I could run through some
of the things that we found, but basically Facebook knew, for example, that they were increasing
some of the negativity in society. And they had research showing that they knew that. But Zuckerberg
didn't want to change the ranking algorithms of Facebook, because
it was going to hurt engagement. And now you could say he's just greedy, or he just wants
the profits or he just needs to keep his share price up. He also is bound because he set up
a set of incentives. All of his employees, all those people at Facebook, most of them are
incentivized by how much they can get engagement up. So all throughout the company, imagine you
have a bonus structure where everyone's salaries and paychecks come in through maximizing
engagement. But then you find out that, let's say, 50% of that engagement is causing
genocides in Ethiopia, is causing body image issues in kids. You can't say we need to halve our
engagement, because now all your employees are going to leave because they won't be able to get
their benefits. You'd actually be going against the incentive structure you set up for your own employees.
So what is it? I want to know what this is causing, and I'm going to add a little bit of pressure
on you, which is that we have two members of the Judiciary Committee. By the way, I'm in Belfast.
I'm actually here in the conflict capital of the globe.
And that's why I'm so happy, Daniel, that you're involved.
But I've just got one more for you.
You've got two members of the Judiciary Committee.
Say that fast five times.
They have to deal with this.
What should they know?
If I gave you 30 seconds, what should they know that you know about what's happening?
Well, I think Daniel, in a moment, will
help elevate the conversation to what kind of change is needed, because unfortunately, while I wish
that there were, you know, a couple of laws or bills that we could pass to get to some better state,
the challenge is that this is now baked into the infrastructure that we use; it's now
the fabric of our democracy. And virality, the thing that is causing some of this, I think of it this way:
everyone's now familiar with the idea of a lab leak in Wuhan, the Wuhan Institute of Virology,
that was potentially doing gain-of-function research on which viruses can go viral.
Well, people now know what R0, R-naught, is: the idea of something that can go viral, and how many people
does it infect. The purpose of Facebook is to be the Zuckerberg Institute of Virology.
The purpose is to create and allow for things to go viral across the world and be spread to
millions of people, and to literally make the R0 as high as possible.
We want it to infect as many people and spread to as many people, because that makes engagement go up.
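To make the R0 analogy concrete, here is a minimal sketch of how reach compounds once each share produces more than one further share. The numbers and the function are purely illustrative, not drawn from any platform's actual systems:

```python
# A toy model of virality: R0 is roughly how many new people each exposed
# person passes a post along to. Above 1.0 it compounds; below 1.0 it dies out.

def total_reach(r0: float, generations: int, seed: int = 1) -> int:
    """Sum the audience across sharing generations for a given R0."""
    reached = float(seed)
    current = float(seed)
    for _ in range(generations):
        current *= r0        # each sharing generation multiplies by R0
        reached += current
    return round(reached)

for r0 in (0.9, 1.5, 3.0):
    print(f"R0 = {r0}: ~{total_reach(r0, generations=10):,} people after 10 hops")
# R0 = 0.9 fizzles out (~7 people); R0 = 3.0 explodes (~88,573 people).
```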
And that's the core thing. So you say, what's the law that we can pass? What's the issue that we can change? It's not going to be as simple as that. I think we have to change the nature of it. We can't have it be the Zuckerberg Institute of Virology. It has to turn into something safer.
So I will warn you that there is a Facebook executive on this conversation. So don't be surprised if they challenge you. Harry Clark, who is one of the best minds in communication and public relations,
he's asking as part of the Q&A, one more, and this will go to both of you, why not just
boycott Facebook? Why is that not a strategy you're considering? And I don't want to, I don't want
to sandbag you. Someone from Facebook is going to hear this. Why not boycott Facebook? He wants
to know.
Well, I mean, for people to boycott Facebook, there would have to be meaningful alternatives that
don't have the same problem. I mean, TikTok basically has the same problem. YouTube has
many of the same problems. Snapchat has different, but some of the same problems. So boycotting
and then going, there's nowhere safe to go. That's one of the issues. And the second issue is that
you can't really boycott it when your life depends on it. So one of the problems that's actually
in the article about teenage girls is that you can't actually say, it's not an individual choice
to say, I don't want to use these things because I'm going to ostracize myself. And if all my
friends are still on it, you'd have to get the entire world to boycott it together and move to
something else because it's fundamentally been baked into our lives. Small businesses have to use it
to advertise. How else are they going to reach their users and their customers? They own
the capacity to reach people. If we want this video to reach as many people as possible, we probably
want to post it not on some random tiny video site that no one's going to click on, but you want to
post it on the one that gets as many views and likes, et cetera. So you're going to post it on Facebook
and you're going to post it on YouTube.
So they have a monopoly on reach,
which makes it very hard for people to boycott it
and say, let's go somewhere else.
Daniel, even though you're involved in this issue,
I look at you as being essential to public discourse
because of how you're looking at it.
You're looking for solutions.
You're looking for results.
You're one of the strongest thought leaders
in how we talk to each other now
in society.
I'll ask you the question I asked Tristan, would you recommend a boycott, knowing that there's a
Facebook person on this conversation?
And do you have any solutions to the problems that Tristan has raised?
I think it's kind of like what Tristan said: there's a monopoly, but not a monopoly
in the sense of a government contract monopoly; it's a network dynamics monopoly.
Network dynamics create natural monopolies: as you get increasing returns the more people
that are in a network, you fundamentally have to engage with that thing, because there's
something like exclusive value offered there. If somebody decides they're not going to sell on
Amazon and they have a small business, they just can't compete with the ones that are doing that,
or similarly if they're advertising on Facebook. So one of the things is, when we built the laws around
monopoly and antitrust, network dynamics didn't exist yet; those laws were built before the internet and
those dynamics. So we actually have to take the emergence of the internet and the emergence of
network dynamics and Metcalfe's law, and say we have to rethink this. Monopoly didn't
just mean crony capitalist government contracting; it means...
Can you, for those of us who
only went to the University of Pennsylvania, can you dumb it down just a little bit so we understand
what you're talking about?
Nobody wants to use 20 different social networks and have to remember all the logins and find
some friends in one place and some friends in another place, just like they don't want to
use 20 different kinds of currencies.
So if there's a currency that everyone accepts, that currency kind of gets a monopoly value.
If there is a network that everybody's on and you can see your friends from high school and
your family and the news and all the things you're interested in, with one login, like those
stats you showed, people said it was easier and made their life
worse. Like everyone who has conveniences where they don't exercise and don't do the things that
actually strengthen them, or read or study, there's a lot of things that make life easier and
worse. And so where something has a network dynamic, the more people that engage with it,
the more value it has. Because now everybody's producing content. Everyone I want to find is on there.
The AI will curate all the content to show me exactly what I want to see, but which part of me wants
to see it? Well, the AI is going to optimize based on my behavior: how long I
spend on site and how many things I like and comment on and share. And it happens that the things
that appeal to my existing biases and increase my sense of certainty in an uncertain world,
and the things that scare me and create emotional responses that make me less clear
about the fact that I don't want to be on Facebook and want to go do other stuff with my life, and the things
that reinforce tribal identities, maximize time on site and engagement.
So it's one of those things where you can manufacture demand from the supply side and then say,
we're just giving people what they want, but you're appealing to the weakest, lowest angels of
people's nature and then doing so with radical asymmetries of power.
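Daniel's point about network dynamics tracks what is often called Metcalfe's law, which he mentions above: the value of a network grows roughly with the square of its users, because that is how the number of possible connections grows. A small illustration, with made-up user counts:

```python
# Metcalfe's law, roughly: n users can form n*(n-1)/2 pairwise connections,
# so perceived value grows ~n^2 while the cost of joining stays flat.
# The user counts below are purely illustrative.

def possible_connections(n_users: int) -> int:
    """Number of distinct pairs among n users."""
    return n_users * (n_users - 1) // 2

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} users -> {possible_connections(n):,} possible connections")
# 1,000x the users yields ~1,000,000x the connections, which is why
# "everyone is already there" behaves like a natural monopoly.
```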
So Richard Dreyfuss, who's always been a friend of these Fridays with Frank, and whose brain is
incredible, asks:
is there proof that social media is leading to incivility, leading to anger?
We may think it, but Daniel, is there proof of this?
Well, this is what you were talking to Tristan about regarding what the Wall Street Journal has been showing this week.
And there are obviously previous cases.
What's coming out this week, and what will continue to come out as more information is released, is
stuff that Tristan and Jaron Lanier have been saying would happen for nine years, because the business model guarantees it.
Now there is increasing proof in the form of hard internal documents and disclosure.
But for anyone who's been kind of paying attention, the business model of maximizing people's time on site and maximizing engagement, combined with the technology of behavioral modification AIs, was bound to
be antithetical to democracy and antithetical to health.
So Tristan can give the proof, but for the people who've been paying attention,
it's kind of like saying,
is there proof that deforestation is happening?
And as soon as you're looking at the financial incentive to cut down the trees in an area
where the trees are worth less alive than dead,
you're kind of like, it's going to happen.
Right.
And in the same way that a tree's worth more dead than alive and a whale's worth more dead than alive,
in this case, if our attention is more easily captured with outrage, that will be the profitable
model. Us being happy or civil, talking to each other off screens and not on screens,
is not profitable for any of the social media companies. And specifically, some of the data,
and again, I recommend people check it out, I think it's the third article that the Wall Street Journal
released. They talk about how, actually, due to some of my own work, Facebook changed its core
metric. It used to be maximizing for time spent. I was part of a movement called Time Well Spent;
that was my first TED Talk. Facebook decided, actually Mark Zuckerberg wrote in his January
2018 post, his yearly goal, his new goal for the year was to take Facebook in the direction
of time well spent, not time spent. He took my words. Then he said, we're going to change the
way we measure success at Facebook. We're going to use something called meaningful social
interactions, MSI. And this Wall Street Journal article, I recommend everybody reads it,
showed how meaningful social interactions worked: they assigned different point values.
So for example, a post would get one point if it had a like.
It got five points if it got a reaction or a reshare without any text.
It got 15 points for what they call a non-significant comment.
And then it got 30 points for significant comments and significant reshares.
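To see why a weighting like this surfaces arguments, here is a toy scoring function using the point values as reported by the Journal; the real ranking system is of course far more complex than this sketch:

```python
# A minimal sketch of "meaningful social interactions" (MSI) scoring, using
# the point values described above. It shows why a post that sparks a long
# comment war outranks a post that is merely widely liked.

MSI_WEIGHTS = {
    "like": 1,
    "reaction_or_textless_reshare": 5,
    "nonsignificant_comment": 15,
    "significant_comment_or_reshare": 30,
}

def msi_score(interactions: dict) -> int:
    """Weighted engagement score for one post."""
    return sum(MSI_WEIGHTS[kind] * count for kind, count in interactions.items())

calm_post = {"like": 200}                                       # widely liked
argument = {"like": 10, "significant_comment_or_reshare": 40}   # comment war
print(msi_score(calm_post))  # 200
print(msi_score(argument))   # 10 + 1200 = 1210, ranked far higher
```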
What that really meant was the more long comment threads an article created,
which is to say more arguments, the more of those things got boosted to the top.
So whenever there was an argument, it was like, hey, let's put that front and
center in everyone's feed, and then do that in a decentralized way for the entire world all at once,
for two billion people. And when you do that, you basically highlight divisiveness and disagreement and
incivility, which is the thing, Frank, that you're trying to fight. You and I were just
at the Milken conference, and there's a lot of people who are funding things like, hey, how do we
do America in One Room? How do we fund depolarization for
the country with hundreds of millions of dollars? And let's have people together physically in rooms talking to each other. That's
great. But how's that going to compare to the four hours a day people spend seeing incivility
every single day? And if 90% of people became civil, but only 10% are left that are uncivil,
then what do Twitter and Facebook show you? Well, they only show you all the bad faith,
uncivil people. So that keeps just completely, you know, blasting over and plastering your whole feed.
And so it continues to look like the world is uncivil, even if many people are starting to get better.
We cannot have that system with democracy, period. Open societies cannot allow this situation to be.
And Daniel, I'd love for you to speak about that, because the reason
why I wanted to have Daniel here, Frank, is I think this isn't just about less toxic social media.
It's not, how do we just rein it in? Let's take the reins, and if we just move it five degrees this way,
we would suddenly have a better democracy plus, you know, Facebook. We have to look at a deeper problem
statement there to get to where we want to go.
Okay, so I'm going to ask both of you,
and you can go in either order: solve it. By the way, we've got a lot of parents on this,
and I'm going to ask you in a moment to scare the living hell out of them,
give me your most frightening conclusion based on all the research you've done on young people.
But before I do, we're adults, do either of you have an actual solution that you're going to be presenting to Congress?
Either of you.
Daniel, do you want to try to describe?
I can say some things that would be
directionally right. So we read all the documents around the founding of this country and know that
the idea of universal public education and an adequate fourth estate were considered prerequisite
institutions for democracy to function. The people had to be educated and they had to have
access to the information to participate in governance. There's a very deep question of...
By the way, hold on one second.
Those people who are watching, because you're actually, we're 25 minutes into this,
and I can see the number of participants is actually growing.
I'm going to focus on kids in about five minutes.
So if you guys want to send out an email, want to send out a text to people saying,
tune in about five minutes, we're going to specifically focus on what social media is doing to your children.
So I suggest you stay on this.
Daniel, please continue.
So you can actually see the critical
role of the fourth estate following the printing press; it's been well analyzed, the role that
the printing press had in the formation of democracy. We don't need a small educated nobility who rules
everybody, because everyone can have access to textbooks and newspapers. They can be educated and
we can come to a town hall and participate in our own governance. But this was based on the
idea that we could get something like fair and independent news and all read the same thing and
then be able to have an educated discussion about it. So when you have an internet where there's
radically more information than anyone could begin to parse what information you see ends up being
determined by curation processes. I'm not going to see all the videos. I'm not going to see all the
news. I'm not going to see all the posts. And so it's not like we respond as rational actors to the
best information. We respond to whatever YouTube's algorithms and Facebook's algorithms put in front of me.
And they put it in front of me based on a business model that's maximizing time on site based on
engagement and it happens to be that that which appeals to my existing biases and emotions maximizes
time on site. So someone on the far right and the far left when they're looking at their newsfeed
and how they're coming to understand the world might see nothing in common, and yet it's
representing the world to them. So you have to say: democracy doesn't exist without a fourth estate and
people having a shared sense of what base reality is, and the internet, specifically the
curation-based internet, has destroyed the fourth estate irrevocably. So how do you remake a democracy
in the post-internet, network age? Because people can't do shared choice-making if they don't have a basis
for shared sense-making of what's going on.
Yes, answer that question. Then how do you do it?
It's the right question, but I'm pushing you now. How do you do it?
Well, you can see that China
decided, well, let's control our internet so it doesn't have radically divisive ideas that end up turning people
against being good citizens.
And you can see that there's an effectiveness in that,
but it's antithetical to the idea of an open society.
So you either keep an open society with these type of network dynamics
and it just becomes increasingly chaotic and fails,
or you try to apply the China model.
Those are currently the only two possibilities.
And what we want is how do you have something like open speech,
but with this degree of radical amplification possibility,
that doesn't become total chaos. And you have to look at what is the incentive for the
amplification. If there is a tool that can curate it and make stuff radically more amplified,
what is the incentive guiding it? And so let's say, for instance, Facebook is the most powerful
behavioral modification machine in the history of the world; it can gather micro-targeted
information on people and then specifically put information in front of them to control their
behavior for advertisers. The people it's gathering information about and influencing are not the
customer; it's gathering privileged information about people to then sell to the customer,
who is the advertiser. This should be a breach of a fiduciary contract, where you're not allowed
to gather privileged information about someone and then use it against them. If the user was the
customer rather than the advertiser being the customer, and as a result, the optimization algorithm
was not to sell people ads or to maximize time to sell them ads, but was to find the metrics that
actually correspond to people's real quality of life and the AIs were oriented to that,
we might start to get somewhere. But that's the beginning of a radically different business model.
An ad-based business model with AI-controlled behavioral mechanics will break democracies. They don't
go together.
Tristan, I know you agree with this, but can you explain it to someone who only
went to Penn? Daniel, half of Congress is not going to understand this. They still
call the tape recorder a machine, or sorry, a microphone: speak into the machine. No, it's actually
a microphone. They don't even know what the internet is. Tristan, go ahead.
Well, I think the thing
that Daniel's saying is about an advertising-supported... imagine, Frank, I put a brain implant in you.
And I actually talked to the guys at Neuralink once about this, right? Imagine Neuralink, Elon Musk's
Neuralink project. I'm going to put a brain implant in your brain. It's going to shape your thoughts. It
can give you thinking superpowers. But let's imagine that brain implant in your brain,
that's going to intimately shape every thought that you have from the moment you wake up
to the moment you go to bed and your dreams, that someone attached the advertising business model
to that Neuralink brain implant. So now you start having thoughts that you didn't even intend to have,
and it's actually in control. When I say it that way,
it should be immediately clear: maybe we can have brain implants, but we certainly would never allow
brain implants with an advertising-based business model. What Daniel is saying is
that we cannot have democracy where the primary brain implant of that democracy is an advertising-based
business model. When Daniel says fiduciary, what he's referring to is a brain implant
that would have your best interest at heart, just like a doctor, theoretically, is supposed to
have your best interest at heart. And a psychotherapist: you're going to tell the psychotherapist
all this privileged information that's deep in your psyche, so they have to have your best
interest at heart. What we're saying is such a deep change: we have to have technology that's
humane with our best interests at heart. Now, the reason that's such an uncomfortable conversation
is that, I believe, Facebook's stock price has not moved that much this week, despite
these awful revelations. And it's worth about a trillion dollars. And that trillion-dollar valuation
comes from the advertising-based business model. So it's as if we had an entire industry of
psychotherapy that was based on a manipulative business model that was worth a trillion dollars,
but now we have to switch to what does it look like to be in the interests of people. And the
question is that's a very dramatic economic change. So I think it's more that our brain wants
to shy away from that because it's so uncomfortable that we'd have to make a change as deep as that.
That's one of the big things that has to change.
I want you guys to know that one of the comments
is maybe we need more people like you in Congress; that it's not your fault that you're speaking
the truth, it's their fault for not understanding it. And actually, that's not such a bad
idea. Let me ask you.
I wanted to speak to what you said earlier about what people in Congress would understand.
If because of who the people in government are or the structure of government, if it cannot
understand the nature of the issues it's supposed to regulate, and particularly as technology
is evolving rapidly faster than the people who are in there are able to understand the consequences
of, then it will just break.
If the regulatory apparatus can't understand the effects of what it needs to regulate,
it will just break. And this is the key thing that we're talking about is we're right now talking
about the case of social media tech following a business model. But we could also be talking about
CRISPR, and tabletop CRISPR emerging, where we're getting very, very close to a cheap ability to make
bioweapons available to everybody. And with regard to AI and generative text AI, we're getting very,
very close to the ability to make content in your voice saying anything, which anyone can do,
and flood the internet with information that passes the Turing test. And Elon said this.
He said, if we wait to regulate AI and the other technologies that operate this quickly, AI
specifically is what he was talking about, until after the effects have been seen, for as long as we did with
cigarettes or DDT, it's way too late. The effects will have been irreversible. So when we say
exponential tech, what we mean is exponentially faster to scale, exponentially larger effects that can
happen from exponentially smaller groups of people. It doesn't take state actors like it did to make
nukes, to make AI weapons, bioweapons, CRISPR weapons. And so the big question becomes:
in the presence of the speed and scale of emerging technologies, our processes
of governance are just inadequate. They're too slow. They are too divided. And this is why China has
done a good job of saying, no, we actually have to control these technologies, otherwise they'll
break the country, and of figuring out how to do it. But it's in a particular direction. If we want something like
an open society in the presence of exponential tech, how do we make a regulatory apparatus
capable of regulating what it needs to, in time and ahead of time, that is aligned with the civil
values of an open society? That's the central question of our time, I believe.
I want a simple
number, a number. Daniel, what percent of Congress, the House and the Senate, is really intellectually
not ready to tackle this issue? What percent?
I don't know. I don't know them well enough.
What would you guess?
I would trust your guess over mine on this.
Okay, Tristan, you've been testifying, so I'm not letting you out of this. What percent really shouldn't be regulating this because they do not know it?
I think it's very understandable why people are skeptical of Congress to
regulate this effectively when they hear senators ask Mark Zuckerberg questions like,
how do you make money? And he responds famously, Senator, we sell ads. If I was Zuckerberg,
I would have paid for that moment to happen. And maybe I did because it generates the impression
forever in people's minds, it sticks, Frank, in your language, into the minds of the public
that government cannot regulate this, right? So I've created the outcome that I want. And someone
I noticed in the chat was talking about, do social media companies own the members of Congress?
Well, let's say that this narrative gets really strong. Well, it's a trillion dollar market
cap corporation. It's not very hard to start buying off any of the members of Congress that get
critical. And a couple will make some public statements. But really what we're trying to do
here and what Daniel's really saying is that this is existential for our society continuing
to work. This will break our society. We've been saying that for eight years. The Social
Dilemma says that; it's clearer than ever. On January 6th, I had text messages from people, Joe Rogan,
saying, oh my God, I thought of The Social Dilemma, you know, it's so eerie, all this stuff is coming true.
You know, people may remember in the film the guy who invented Facebook's business
model, Tim Kendall. He was asked on camera, you know, what are you worried about as the consequence of
this? And I think he was recorded saying this in 2019 or 2018. And he said: civil war. And I remember
that I think Netflix wanted that to not be in the film, because it sounded like it was too
aggressive a statement to make. It sounded like that wasn't where we were. That was before the
pandemic. And when people saw January 6th and they see the results and people should read this
Wall Street Journal article about the outrage economy and how it drove up basically more insanity,
it makes perfect sense. And the point Daniel's trying to make is that we're asking
the question: if we don't want to beat China by becoming China,
then we have to develop an open society alternative to exponential technology that does not result in catastrophe and chaos.
And we're spending a lot of time in Washington, D.C., and part of the reason, Frank, I wanted to do this with your audience,
because I know you have a lot of listeners that are in D.C. and that are curious, is that, you know,
we're trying to see who resonates with us and wants to have a serious conversation about the more comprehensive change that's needed.
But I'm being asked again and again, both by people who are texting me and emailing me,
and in this comment right here: everybody says, we've got to do something, but I don't trust Washington to do it.
What's the answer?
The direct language: I don't want Washington making the decision for me or my children about what we're going to see.
Well, I can speak to this. One of the things we've seen during COVID is
the breakdown of the sense of a shared, trustable authority or institution.
And when it comes to news media and even when it comes to scientists and public intellectuals
weighing in on the science, there's just radical polarization of what would be a trusted authority.
And so, yeah, people rightly don't trust Washington, but they would also rightly not trust
private companies that have interests that are exactly opposite of their own and
radical asymmetries of data manipulation capability. So the question becomes, when you have technology
that is this asymmetrically powerful, who could you trust to govern it, given humanity's track record
with power? And yet, like, whoever had the cannons in the past is nothing like who has the
AIs and all the world's information and the behavioral dynamics on your children. And so, as we follow
an exponential curve of power, there's this core governance question: we've never done all that
good a job of being great stewards of power, and now we have radically increasing exponential power.
How do we govern it? And the whole thing with 'we don't trust Washington' is: Washington was never
supposed to be a thing separate from of, for, and by the people governance. It was supposed to be
that the state was given the ability to regulate predatory market interests while still allowing
healthy market interests to ensure the values of the people that were encoded into law could be
implemented with a monopoly of violence. But the state could only check the market if the people
check the state. And everything you can read by the founding fathers, and the
Declaration and the Constitution, was all about the people's ability to oversee and check the
state and be engaged in governance. When the people stop checking the government, the government
gets captured by the market, and you get short-termism, crony capitalism, regulatory capture,
and those types of things. So ultimately, there is a cultural revolution that has to happen, of people
who get committed to actually being able to make sense of reality together and make effective choices
together and participate with the remaking of 21st century governance, 21st century democracy,
republic, open society, whatever you want to call it, and the creation of institutions that
actually can be trusted because of the right types of checks and balances on power and oversight process
with this technology.
Okay, I'm going to read a question from Eric Schwartz.
Due to forceful regulation, banks are obligated to know their customers and there's no room
for anonymity.
Why shouldn't social media be governed the same way?
If there was no anonymity, and the platforms were obliged to know the identity of all participants,
would that help, either of you?
I'll just respond to something there.
We've been saying, also for about five years, that just like banking has
know-your-customer laws and so on, KYC, there is eventually going to need to be that
with social media and online publishing, because of what Daniel was talking about with GPT-3.
So for those who don't know, when you say GPT-3 or deepfakes, people have heard these terms,
right? I'll just quickly say what this is. Daniel and I were actually just with one of the
top AI research people in the entire world. And they were saying how much this field has progressed.
You can basically say, write me a novel in the voice of James Joyce about the topic of,
you know, democracy or something like that. And it would write you, you know, a hundred-page
book. I could say, hey, write me an article about why the COVID vaccine is dangerous,
with lots of charts and graphs, and actually criticize, with the names of the logical fallacies
people are using, why they're wrong about the vaccine and why they shouldn't be
trusted. And it'll actually generate like a hundred-page research paper with charts and graphs
that it'll take biostatisticians like a year to actually parse out what's right or what's wrong
about. That's the capacity to instantly flood the internet with information. And by the way, for those
who don't believe me, do a Google search for OpenAI. I don't know what they call this video.
It can actually do programming. So you can actually tell this GPT-3 AI, hey, write me a computer
program, a game where there's a moving asteroid; whenever I hit the left and right
arrow keys, it goes back and forth; make the asteroid bigger. I say all that, and it writes the
entire code of the video game for me. I'm just saying in natural language what I want. So in the
next election, Frank, we're moving to a world, and this is not sci-fi. People might hear this
as alarmism or moral panic, and frankly, for whoever believes that this is moral panic, listen to the fact
that we were saying these things eight years ago and all of them have come true. We're about to
hit a world where, in the next election, the midterm elections... For those who don't remember, in
2016 there was this popping up of fake local news websites, the Denver Post, the Cincinnati
Herald; I don't know the names of the fake ones, you can basically make them up. With GPT-3, you can also
say spin up a local news website with the fonts and everything and the big sections at the top.
It'll generate the entire website and it looks perfect. It looks totally indistinguishable.
Then generate lots of articles about why the other side is untrustworthy and he beat his dog
and whatever. And it'll actually just generate thousands of these websites. We're getting so
close to that being possible. The reason I'm saying this is to answer Eric Schwartz's question
about why we need to know our customers, why we need verified identity. Because if we don't have
the sense that someone who generated this comment or this post or this text is a human being
that's traceable to some kind of ID, we're not going to have a working open society. So table stakes
going into the future is we're going to have to have some kind of zero knowledge proof identity
and people who are following this closely know that that's one of the big changes that's going
to need to be made. When we talk about Congress regulating these issues, we're often talking about
looking backwards in time at the historical issues: how do we deal with these comment threads on
Facebook, or the stuff from, like, four years ago? We're missing the fact that what Daniel's saying
is the first derivative of how technology is constantly changing and generating new issues,
second and third order effects, faster than any of our governance processes can keep up. So what we really
need is a new kind of governance process. We need a Manhattan project for governing exponential
technologies that move faster. And I would actually say, you know, it's similar to the Einstein-Szilard
letter that was written to FDR in 1939 that said, if we don't do this and open
society values don't have the nuclear bomb, if Nazism gets the nuclear bomb or if communism gets
the nuclear bomb, they will run the future, because whoever wields the power of exponential
technologies will run the world. And we're at another choice point today, where we have to have
open societies consciously employ this technology and bind its predatory, negative aspects.
otherwise we're seeing what China is doing
and they're moving much faster.
Okay, well, I don't want us to become China.
And so there have been several comments about that
that just because China's doing it doesn't make it right,
they don't value freedom.
They don't value democracy.
They don't value the things that we insist on.
Agreed.
And that's why what Daniel was saying is actually...
Go ahead, Daniel, sorry.
I'd like to connect what Tristan is saying
to the question that you asked about rigorous identity.
So obviously being able to know,
was this a human that produced it
or was this an AI, would be pretty valuable.
But even just, is this the human that it seems like it is?
Or is this a fake sock-puppet account that is part of a state actor propaganda farm?
That's pretty valuable, because we forensically find those types of bot farms and fake actor farms all the time.
And what they can do is use Facebook's split-testing algorithms to see what is stickiest for certain
populations and continue to modify the content they produce, even without AI, to push vulnerable
populations. Now the AI just gets to take that to an exponential level. So obviously rigorous identity would be
valuable. But then this question comes back again; it's like saying regulation would be valuable:
who do we trust to hold the rigorous identity associated with all of someone's online behaviors,
including through changes in government, which will never forget it, where now some despotic
government comes in in the future, or someone that I didn't vote for or don't like, and now that is
an unerasable memory? So we can see both sides here. On one side, we're like, okay, we actually want
anonymity because we don't trust anybody. On the other side, the anonymity makes it to where there's
no possibility for justice or knowing what is true or not true. This is a hard issue. And it
corresponds to something else. As Tristan was mentioning about the arms race for
nukes: it's pretty hard to make nukes. Enriching uranium is difficult. The precision
engineering that was needed for the rockets was hard. It's not hard to make drone weapons anymore,
home-based drones with homemade bombs on them. It's not hard to take papers that are written
about the cutting edge of AI and reverse engineer them and make crypto cyber weapons and AI type
weapons. The thing about exponential tech is the idea of decentralizing tech; we have this
kind of romantic Silicon Valley idea that this means democratized power for everyone, but it also
means catastrophe weapons for everyone. And when you have catastrophe weapons for everybody,
and it's non-state actors, and there's no way to even be able to visualize it,
you can't have mutually assured destruction. You can't create equilibrium. So then the only
other answer so far has been: okay, well, either catastrophe weapons for everyone,
if you want something like freedom, or ubiquitous surveillance.
Ubiquitous surveillance is an answer: if we know what everyone's doing in their basement,
nobody can do work on AI weapons. But those two answers are both so terrible, we need a better answer.
So in the presence of decentralizing exponential power,
how do you not have ubiquitous surveillance and not have centralized access to that
personalized data and yet still have something that can create anti-fragility in the presence
of that much power that even small actor groups could engage with?
There are some answers that neither just allow the chaos to continue nor become
oppressive. But we'd say civilizations typically teeter between the two: a civilization
is, how do we get a lot of people to participate in some way that creates order?
We can get the order through imposition, and it becomes increasingly oppressive, or we fail to get
the order and it becomes chaotic; it fails on either side. The idea of democracy is that we
can have emergent order rather than imposed because we can all make sense of the world together
and make sense of each other's values and have some basis for shared choice making. This is
ultimately cultural.
I promised to focus on kids, and I've actually got a young person who's
watching this right now. I'm going to read her question: I'm a senior in high school,
inspired after watching The Social Dilemma. I'm spending a year researching practical ways for
young people to limit their use of social media. I'm curious to know your view on the
following: how do the tactics to curb social media use differ by youth age groups,
specifically preteens, teens, and college? What advice, what guidance would you give her as she
seeks to understand the difference between those three age groups?
Can you say the age groups again? Preteen, teen, and college?
Preteen, teen, and college.
I mean, I will say that
I don't consider myself an expert on the developmental psychology of children at these different
ages. What I can speak to is: so long as the business model of TikTok and Snapchat and
Instagram is, I have to race down your brainstem and create artificial social obligation and
artificial social reciprocity, so that you feel the pressure of getting likes, if the other guy gets them
and you don't, that thing is just not compatible with children's development. It turns kids
into validation-seeking machines, and it creates social pressure and anxiety that's simply not,
it's frankly, it's not good for any of those populations, right? It's not good. I don't feel good.
None of us feel good when we feel like, you know, Frank, I'm sure you get this all the time.
So if you get a comment on Twitter, you post to Twitter a lot, I follow you, and you get 100 comments on a post that you make.
And 99 of the comments are positive, and one of the comments is negative.
Where does your attention go?
Of course.
But you've got 99 to 1 positive.
Our brain is not neutral.
Our brains focus where? On the negative.
And do you think you're alone in this experience?
Or do you think that everyone feels it?
And do you think that children feel it more than you or less than you?
More than you.
So what we have is essentially a system that creates overwhelming pressure and
negativity and the conditions for, just, not psychological health. Daniel has this nice line about,
how do you measure the health of a society? It's a hard thing to measure
the social health of a society, but you could measure it as the inverse of the amount of
addiction in society. And Daniel, I don't know if you want to riff on that, but I don't have an answer to your question.
I would like to answer her question.
Go for it.
Because the thing Tristan is saying relates to
the question someone asked about boycotts earlier. When you have a trillion dollar organization
with two billion people's behavioral data run by AIs, and then you have a person, just a little
person, the asymmetry of that info war, because the person wants to control their own thoughts
and behavior, but Facebook wants to control their thoughts and behavior. But it is a many orders of
magnitude asymmetric info war to control their thoughts and behavior. That's such a problem that you
can't just say, well, let's leave it up to the individual person. It's like saying let's leave it up to
individual people turning their lights off as the solution to climate change and environmental
destruction. It's like, no, that just really doesn't work. This needs solutions that are at the
scale of where they're coming from. And shifting the burden of responsibility from the juggernauts
to the individual is just a bad move, both ethically and
in terms of effectiveness. So that's what Tristan's trying to get at. But we have this person here who's
asking a question about what she can actually do. So I want to speak to it.
And by the way, this has been your focus now since you all created the film. You got people agitated. You got people
focused. Now you've got to start to deliver for them. What would you say to her?
I'm not going to make a
distinction between the three age groups. I'm going to say what's true in general. And then the developmental
application of it is something that you should totally work on. People are more susceptible
to addictive things, what we'd call hypernormal stimuli, stimuli that give more dopamine
or hit faster, whether it is sugar or drugs or social-media-type stimulation or porn. They're
more susceptible to that when they're in a hyponormal environment, meaning they're less
fulfilled than in an evolutionary environment. In an evolutionary environment, humans evolved to be part of a
tribe where you were having a lot of rich social interactions, a lot of embodied movement, a lot of time
in nature, you know, those types of things. So the lonelier people are, the more susceptible
they're going to be to Facebook and Instagram, the less creatively fulfilled they are. If you can find
groups of friends, groups of people, and create richer possibilities of offline engagement,
it's the thing that'll help them the most. If they can have a place where there's drum circles
every Friday night and everybody's dancing and having drum circles, if there's musical things
where they're getting together and having kind of jazz fun stuff, if there's craft places,
if there's places where they get to do circling, meaningful, like, sharing about what's
going on for them, if people feel a richer fullness and other opportunities, they will be less
susceptible to that. Then you can actually start to have not only are there more fulfilling other
opportunities, but there's actually social connectivity and status associated with something else.
We've got seven minutes to go, and there are more people on this now than there were after 10 minutes.
I've never had that happen in any of these. Jared Carney asks: the primary issue for me is not
manipulation, although it's a big issue; it's surveillance. If Facebook's business model is predicated on
conflict plus knowing everything, in trying to steer commerce, how do you counteract that?
I mean, this is really the topic of Shoshana Zuboff's book, Surveillance Capitalism:
what happens when you control this much of people's lives. Let's say someone builds
an alternative social platform funded by venture capitalists. There are some investors, I'm sure,
on this call. Many people might be wondering, well, what if we funded some alternative social network,
because we want to do it in a way that doesn't capture people's data? Well, that will be funded
by venture capital. Venture capital expects, you know, big multiples of returns. There's actually no
way to exit that except by getting acquired by one of the existing big surveillance capitalism
companies, whether it's Google or Facebook, et cetera. You'd have to do it in some way where it's
an independent thing, something more like a Wikipedia or a blockchain type project where the data
is kept separate somehow. But you're not going to be able to transition these existing platforms.
Daniel, do you want to jump in?
You would need something that is like the
equivalent of HIPAA, which is the medical privacy regulations. I have a
medical file so that if I go into an ICU somewhere, they can pull up what I'm allergic
to and those types of things. It's very sensitive information about me, everything
that's in my lifelong medical file. That information could be sold
to drug companies; it could be sold to lots of places that would be interested in doing stuff
with it. It's really, really important that it isn't sold. So there is state-based governance
of that data and how that data can be shared and not shared. And so when you're saying
manipulation is second to surveillance: the purpose of surveillance is manipulation, right? The gathering
of the data is to use the data for a purpose. Whether it's to affect how you vote, or to
affect your market behavior, or to affect how you affect the cultural zeitgeist that will affect the
market behavior and voting of others, or to be able to arrest you for things that we decide later
are seditious, or whatever: ultimately it's the control of behavior that we're gathering the
data for. And so how do we create safety of the data? And this is where the question has been: do we
trust Facebook with it? No. Do we trust Washington with it? No. Do we want there to be a place where there
is data, so that we know that it's a person and not a bot putting the information out? Yes.
So do we need to create a better checks and balances and oversight process to ensure
trustworthiness?
Yes.
Do technologies like blockchain do part of the thing where you can do provenance on data
and see how it moves and have things where the data only becomes released if certain
flags happen?
Otherwise, it's in a decentralized rather than the centralized database.
There's partial solutions there.
And we could work on full solutions.
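As a toy illustration of the provenance idea Daniel gestures at here: a hash-chained log records who touched a piece of data and for what purpose, and any later tampering with the history breaks the chain. This is a sketch of the general technique only, not a description of any real platform's system, and the actors and purposes below are hypothetical:

```python
# A hash-chained provenance log: each entry commits to the previous one,
# so editing any past entry invalidates everything recorded after it.
import hashlib
import json
import time

def entry_hash(entry: dict) -> str:
    """Stable hash of one log entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class ProvenanceLog:
    def __init__(self) -> None:
        self.entries = []

    def record(self, actor: str, action: str, purpose: str) -> None:
        self.entries.append({
            "actor": actor,
            "action": action,
            "purpose": purpose,
            "ts": time.time(),
            "prev": entry_hash(self.entries[-1]) if self.entries else None,
        })

    def verify(self) -> bool:
        """Recompute the chain; any edited entry breaks every later link."""
        return all(
            cur["prev"] == entry_hash(prev)
            for prev, cur in zip(self.entries, self.entries[1:])
        )

log = ProvenanceLog()
log.record("user123", "granted_access", "personalized learning")          # hypothetical
log.record("service_a", "read_profile", "curate feed per user settings")  # hypothetical
print(log.verify())  # True; altering any past field would make this False
```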
But like with the HIPAA kind of thing, with that data there's a Hippocratic
oath that is saying it is being gathered by a doctor who has serving your
interest, not the pharmaceutical company's or anybody else's interest, in mind. And we can see it's not
perfect. But imagine Facebook's business model shifted, where the user was the customer,
and/or, say, it got made into a state utility, where the interest of the
state, which would mean that people didn't take views in such antipathy to each other that it
was basically like second-order treason and sedition, where the interest of the integrity
of the state and the interest of the well-being of its citizens were actually what was directing
its AI and its use of that data, as opposed to advertisers. And there was the appropriate privacy
and protections on it. And I got to adjust the settings and say, I'm interested in learning these
things. I'm interested in being exposed to these other kinds of ideas. Here's what I want my time
on site to do. Of all the information, here's what I want curated for me.
Now we have a situation with the behavioral data that gets generated by the fact that I click on
stuff.
I can't have technology where I click on stuff and it doesn't gather behavioral data, but who is storing
it, for what purpose, and how it is being stored becomes really critical.
And it could be stored where the legal binding of it kept it from being used for purposes
other than the ones that I am actually consenting to.
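A minimal sketch of that purpose-binding idea, under assumed names (ConsentedStore is hypothetical): the behavioral data is stored together with the purposes the user consented to, and a query for any other purpose is refused.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentedStore:
    # Purposes the user has actually agreed to; everything else is denied.
    consented_purposes: set = field(default_factory=set)
    _events: list = field(default_factory=list)

    def record_click(self, item_id: str) -> None:
        # Behavioral data is generated by use; the question is how it may be used.
        self._events.append(item_id)

    def query(self, purpose: str) -> list:
        if purpose not in self.consented_purposes:
            raise PermissionError(f"no consent for purpose: {purpose}")
        return list(self._events)

store = ConsentedStore(consented_purposes={"personalized_learning"})
store.record_click("lesson-algebra-3")
print(store.query("personalized_learning"))  # allowed: user consented to this purpose
# store.query("ad_targeting")  # would raise PermissionError: never consented
```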
It's like personalized learning, Frank. I mean, think about things like Khan Academy. Yes, you're personalizing information. You're gathering lots of information about where people look like they're getting stuck or what lessons they might want to learn more of. But the purpose of Khan Academy isn't to manipulate you into clickbait and to make you hate the other political party. It's to help increase learning, right? That's the kind of thing we're talking about: personalization in the interest of helping society get wiser and more thoughtful, not in the direction of whatever gets their attention.
I wanted to answer the question, but you can't get this; this conversation doesn't happen. There is no news program that has a conversation like this. I'm embarrassed that I have to keep asking Daniel to simplify. I shouldn't have to do that.
Just as we said that the fourth estate is broken, obviously so is education, because the idea in the founding of this country, following the Enlightenment and modernity, was that everyone could be educated well enough to understand the issues around which law was going to be created that would affect and bind their lives. If I'm going to be bound by law, I need to understand the issues well enough to get a say in it. If the people are not educated well enough to have a say in the law that's going to bind their lives, it's not a democracy. It's just a story or a simulation of one, in which case monarchies might actually be better, without all the attention going to flipping back and forth every four years in internal divisiveness. If we really want it to be a democracy, an effective fourth estate and education are prerequisites, at the level needed for people to understand the issues well enough to participate.

Now we have to ask: using AI and attention tech and all of the modern technologies, could we make new, better educational systems and better participatory governance systems, where everybody can't fit in the town hall, but everybody can give input that these AIs can semantically parse to help create propositions that are not vested-interest-group propositions, but that ask: what would the best proposition that fulfills the interests of everyone look like? Then we get to work on proposition crafting. We could use these same technologies that are destroying open societies to build better ones. That, though, has to become the central imperative of the time. And it does require a new cultural enlightenment. It does require an educational rise to understand the issues well enough, or it just doesn't matter; it's over. If Congress and the public don't understand the issues well enough, then of course the corporate interests and the authoritarian nation-states employing exponential tech are just going to run things. See, both Facebook and Google on one side and China on the other are deploying exponential tech towards their own purposes, but neither of them is an open society. If the open societies are not also developing and deploying where the power is, then they will just lose. And what we're saying is that the open societies have to develop and deploy the full suite of modern technologies to create new, digital-era open societies that can protect against catastrophes but also protect against dystopias. And that has to be the central imperative.
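As a toy illustration of that proposition-crafting idea, here is a sketch that stands in for the semantic parsing Daniel imagines: it tallies which concerns are shared across the most citizen submissions, so drafting could start from common interests. A real system would use actual semantic models; the keyword counting here is only a placeholder, and all names are hypothetical.

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "we", "our", "to", "of", "and", "is", "for", "more", "need"}

def shared_themes(submissions: list, top_n: int = 3) -> list:
    """Return the themes mentioned by the most submissions (crude keyword proxy)."""
    counts = Counter()
    for text in submissions:
        # Deduplicate per submission so each person counts once per theme.
        words = {w.strip(".,").lower() for w in text.split()}
        counts.update(w for w in words if w not in STOPWORDS)
    return counts.most_common(top_n)

town_hall = [
    "We need safer roads and better schools.",
    "Better schools and affordable housing for our town.",
    "Affordable housing, and fix the roads.",
]
print(shared_themes(town_hall))  # e.g. [('schools', 2), ('roads', 2), ('housing', 2)]
```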
Okay, I've got one last central imperative, which is to keep people alive during this, hopefully at the back end of COVID. And I don't know of any example that frightens me more or angers me more than the crap that's put out against the vaccines. I go absolutely crazy. I read it, and I shouldn't, because it makes my head explode. Help me here. What can we do? We are coming down to the end of the vaccine process, and we are not going to hit herd immunity. The U.S. is not even in the top 15 on vaccines anymore because of the crap that's being generated in America from social media. How do we combat it when it comes to health? Because, Tristan, your point that they're going to be able to create these stories, these reports: I've read them. I've seen them. They look real. And I know they're not. How do we combat it right now? Because we don't even have a year to wait on this. We have to do this right now.
That's a big question. Daniel, is that you?

Sure. We're not going to; that just loses, in the same way that we're not going to prevent climate change from creating droughts and affecting areas of poverty that will create massive human migration. There are a lot of impending catastrophes that will just happen, and we'll just lose those.
We could have done a better job with this pandemic at a million points.
And so this is why we're thinking a little bit more long-range about what are the even worse catastrophic things that we possibly have time to address, and why are we doing so badly at all of them, and what would it take to do a better job at all of them? Because if we can't coordinate for climate change, or overfishing, or biodiversity, or nuclear de-proliferation, or stopping AI arms races, or stopping CRISPR arms races, or getting misinformation right, we just fail at everything. So ultimately we have to get better global coordination capacity for global-level coordination challenges. If we get that, we get all the other things; otherwise we don't.

And so you're talking here about a shared sense-making of what is true, what is base reality. But in a world where there's a breakdown of trust, and a breakdown of trust in what is legitimate authority, you will not get shared sense-making. So then you either have to become China and say, I don't care if you don't believe it, and if you think it's terrible, we're going to force it on you. Or you have to say, well, we're going to just fail. Or you have to say, oh, we actually have to be able to recreate legitimate authority, and not just people's faulty belief in it, but a good basis for it. How do we do that? How do we recreate a shared basis for sense-making? And not just that, but also the social contract and social solidarity, so that if someone thinks something different than me and they're a fellow countryman, I don't just instantly have antipathy for them. I try to steel-man rather than straw-man what might be true and what values might be held in their perspective. Because if I just go to culture war with my fellow countrymen, and China doesn't have that problem, but we get to amplify our antipathy towards each other using exponential information tech, then our country is just destroyed. It's over for open society.
There has to be a process of how do we do better sense-making, but also better understanding
of the partial truths and the values in each other's perspective so that we can find new
attractors together.
Okay, Daniel, if you're right, we're screwed.
Yeah.
If your conclusions are right, because we're not going to be able to do the things to
address the challenges that you lay before us.
It cannot, it will not happen.
Not in the time scale of the issue you're focused on.
For this time, yeah, we're screwed.
Tristan, you're going to have the last word.
You do not have to make people feel good.
I want you to end in 45 seconds with what you want people to take away from this conversation.
Don't screw it up.
I was just going to say that I think people believe trust can't be recovered from where we are. To say something very concrete: if people were to communicate in a way that reflected why people distrust the CDC, and if those institutions came out and said, yeah, we did flip-flop on this and this, people would feel like they're being told the truth with earnestness and sincerity, and not gaslit. I think the problem in communication is that when you communicate to large audiences and you feel like people aren't going to get it, you simplify the message, you say something that's not completely true, and you force it down people's throats, and those who know that that's happening get rebellious. We're not going to create a unified understanding until we come with sincerity. I think trust is sort of the fuel that undergirds all of this conversation, and sincerity and earnestness are the vehicle to reestablishing that trust. And I feel like that's what we're trying to do now.
And just to name one thing: Daniel and I and some others are really looking to see who in Washington, D.C., especially in the national security community, resonates with and understands what we've laid out. We're not trying to be pessimists; we're trying to be very clear about the space of problems and what it will take to actually look ahead, so we don't just have another COVID, and so that with climate change we don't mess that up too. We have a chance right now to not mess up some of the impending future-tech dooms that are coming, and we're looking for help. So if you're interested, you know, I hope we can all connect through Frank. And I really appreciate, Frank, your giving us the opportunity to have this conversation.
I'm happy to do more. I know we kept this to an hour, but I think it's incredibly important, and each of these issues goes very deep: the teenage mental health stuff, the polarization, how we deal with the exponential tech issues.
Well, this is incredible. And, as Harry Clark says, I need to find a way to have the two of you sit down with him. And you won't have to simplify it. Daniel, you can say it exactly as you said it right now, and he'll be with you every step of the way.
You guys are brilliant. You guys are the way it should be. Your parents, I'm sure, were or are proud of you. You got a great education. This was probably the most important session I've ever done, and I'm grateful to you all for not pulling any punches, for not dumbing things down. And the next time I go to sleep will probably be never, after all the things that you've just said.
So everyone, thank you.
We're not going to do this that often.
I'm only going to do it when it really matters, and this time it really matters.
So, Daniel, I hope people pay attention to you. Tristan, I hope you do another documentary and win another couple of Emmy Awards. You deserve them.
This session's done. And Heddle, do me a favor: post this on YouTube now, unedited and complete. Let's use Twitter to take a couple of segments from this. We're going to use social media the right way, to get your message out to as many people as possible.
Good afternoon.
Thank you.
It was an honor to be here.
Thank you.
Thanks, Frank.
Your Undivided Attention is produced by the Center for Humane Technology. Our executive producer is Stephanie Lepp. Our senior producer is Natalie Jones, and our associate producer is Noor Al-Samarrai. Dan Kedmi is our editor-at-large. Original music and sound design by Ryan and Hays Holladay, and a special thanks to the whole Center for Humane Technology team for making this podcast possible. Thank you.
