Your Undivided Attention - Feed Drop: AI Doomsday with Kara Swisher
Episode Date: June 2, 2023

There's really no one better than veteran tech journalist Kara Swisher at challenging people to articulate their thinking. Tristan Harris recently sat down with her for a wide-ranging interview on AI risk. She even pressed Tristan on whether he is a doomsday prepper. It was so great, we wanted to share it with you here. The interview was originally on Kara's podcast ON with Kara Swisher. If you like it and want to hear more of Kara's interviews with folks like Sam Altman, Reid Hoffman, and others, you can find more episodes of ON with Kara Swisher here: https://link.chtbl.com/_XTWwg3k

RECOMMENDED YUA EPISODES
AI Myths and Misconceptions
The AI Dilemma
The Three Rules of Humane Tech

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Transcript
Hey everyone, it's Tristan.
Recently, I sat down with veteran tech journalist Kara Swisher for her podcast On with
Kara Swisher.
And we thought we'd share that interview directly here with you on Your Undivided
Attention.
Because as you'll hear, you know, there's really no one better than Kara to challenge people
to articulate, you know, what's really going on about a situation.
A lot of people have called Aza and I fearmongers or doomsayers.
And the point of our AI dilemma presentation is not to sow fear or doom.
it's to say, we have to honestly assess the risks
so that we can choose to take the actions that are needed
to avoid those risks.
And I think this interview did a really great job
of distilling a lot of our current thinking on AI
since the space is moving incredibly fast.
So if you're a new listener
or you want to send this to friends, family,
or broader network, it's a great way into the AI topic.
And if you like it and you want to hear more of Kara's interviews
with folks like Sam Altman, Reid Hoffman, and others,
go to wherever you're listening to this podcast
and search for On with Kara Swisher.
And now, over to Kara.
Welcome, Tristan.
Now, you and I met, let's go back a little bit when you were concerned about social media.
I think it was one of the first interviews you did.
It was 2016, 2017.
Right.
I think it was right after Trump had gotten elected.
That's correct.
And I was really choosing to come out and say, you know...
We were both in a little booth at Stanford.
I remember that.
The Stanford radio.
Yeah.
Yeah.
But talk about for people who don't know you.
Both of us are probably seen as irritants or, um,
Cassandras, I guess —
well, she was right, but whatever — John the Baptist,
any of those precursors. He lost his head, okay?
Talk about what got you concerned in the first place,
just very briefly for people to understand.
Yeah, so I guess for people who don't know my background,
I was a tech entrepreneur,
I had a tiny company called Apture,
we got talent acquired by Google.
In college, I was part of a class
at the Stanford Persuasive Technology Lab,
a behavior design class,
and studying the field of social psychology,
persuasion and technology. How does technology persuade people's attitudes, beliefs, and
behaviors? And then I saw how those techniques were the mechanisms of the arms race to
engage people with attention. Because how do I get your attention? I'm better at pulling
on a string in the human brain and the human mind. And so I became a design ethicist at Google
after releasing a presentation inside the company in 2013 saying that we were stewarding the collective
consciousness of humanity. We were rewiring the flows of attention, the flows of information,
the flows of relationships.
I sort of said, you know, I'm really worried about this.
I actually thought the presentation was going to get me fired.
And instead, I became a design ethicist, getting to study this.
So they gave you a job for these worries, right?
It's better than me leaving and doing something else.
Right.
But I tried to change Google from the inside for three years before leaving.
When you look back on that, I think they wanted to have you there.
You're kind of like a house pet, right?
But you know what I mean?
Like, oh, we've got to design.
Hopefully I'm a friendly house pet.
Yeah, I know.
But they don't like the house pets that bite.
And you started to bite.
Yeah, well, I think, you know, it's funny now because if you look at when we get to AI,
which we're going to get to later, people who started AI companies actually started with the notion
of we can do tremendous damage of what we've created.
There's a whole field of AI safety and AI risk.
Now, imagine if when we created social media companies, Mark Zuckerberg and Jack Dorsey and all these guys said,
we can wreck society.
We need to have a whole field of social media safety, social media risk.
And they had actually had safety teams from the very beginning figuring this out.
They hated when you brought up negative things.
Yeah, they hated it. They denied that there was even any issue. And it was hard to see the issue. And we had to fight for the idea, you and I, that there were these major issues. Addiction, polarization, narcissism, validation seeking, sexualization of kids, online harassment, bullying. These are all digital fallout of the race to the bottom of the brain stem for attention. The race to be more and more aggressive about attention.
So I was frustrated that especially Facebook, because I had more contact with that company, wasn't going to do more. And that people were in denial about it.
It goes back to the Upton Sinclair quote: you can't get someone to question something when their salary
depends on them not seeing it. Yeah. And their boss, their boss who runs everything. He really was,
it's like a brick wall on that. We're a neutral mirror for society. We're just showing you the
unfortunate facts about how your society already feels and works. Yeah, I kept saying finish college,
you'll understand. You might want to take World War II, maybe, throw in some Vietnam War and perhaps,
you know, go back to World War I because it's all, like, you know, recent history.
So a couple of months ago, you and Aza released
a presentation that I went to here in Washington
called the AI Dilemma, laying out the fears.
You know, I think there's a proclivity to say, calm down,
don't be so Terminator.
And there's a proclivity to say, don't be so sunshine, right?
That is, focus not on the existential fears,
but the current ones we can work on.
Now, some of the people that have been working on it
feel like you can't guess what it's going to do
at this point.
And that when you get overly dramatic, it's a real problem.
Do you — yours was pretty dramatic when you were doing it in front of a group of
Washington people.
When you say you can't guess what it's going to do, what are you saying?
What's going to happen with this?
We don't know.
So let's deal with our current fears versus our supposed fears.
Yeah, I disagree.
First of all, there's a whole bunch of harms with AI.
And all the stuff around bias and fairness and automating job applications and police
algorithms and loans.
And those issues are super important.
and they affect kind of, you know, the safety of society as it exists.
I think the things that we're worried about are the ways that the deployment of AI
can undermine the container of society working at all.
You know, cyber attacks that can break critical infrastructure, water systems, nuclear systems,
you know, the ability to undermine democracy at scale.
You know, in Silicon Valley, it's common for AI researchers to ask each other what they're,
it's called P-Doom, the probability of doom.
Explain what P-Doom is calculating and tell me what's your P-Doom.
So I don't know if I have a P-Doom.
I would say that we, and you were sort of, I want to make sure I go back to the thing you were saying earlier,
can we predict what's going to happen.
I would say we can predict what's going to happen.
And I don't mean that it's doom.
What I mean is that a race dynamic where if I don't deploy my AI system as fast as the other guy,
I'm going to lose to the guy that is deploying super fast.
So if Google, for example...
That's internally capitalist companies and then also other countries.
Yes, exactly.
And that's just a multipolar trap, a classic race to the cliff.
And so Google, for example, had been holding back many advanced AI capabilities in a lab,
not deploying them because they thought they were not safe.
When Microsoft and OpenAI hit the starting gun and said in November,
we're going to launch ChatGPT, and then boom, we're going to integrate that into Bing
and actually make this the way — you know, we're going to make Google dance, as Satya Nadella said.
That hit the starting gun on a pace of market competition.
Right, they have to.
Then now everybody is going, we have to.
And we have to, what?
We have to unsafely, recklessly deploy this as fast as possible.
So that we're out front.
Like, my Google just asked me to write an email.
They usually want to finish sentences.
Now they're like, can I write this email for you?
I was like, go fuck yourself.
No, I don't want you to.
Right.
Well, and then Slack has to integrate their thing and integrate a chatbot,
and then Snapchat integrates its My AI bot into the way that it works.
Spotify.
I have TikTok and I haven't even seen the Spotify one.
I mean, the point is — this is what I mean by
you can predict the future — what you can predict is that everyone that can will integrate
AI in a dominating way. In the case of the race for engagement in AI, it's the race
to intimacy. Who can have the dominant relationship slot in your life? If Snapchat AI has a
relationship with a 13-year-old that they have for four years, are they going to switch to TikTok
or the next AI when it comes out? No, because they've already built up a relationship with that one.
Unless AI is everywhere, and then you have lots of relationships like you do in life.
But what they'll be incentivized to do is to deepen
that relationship, to personalize it, to know everything about you, and to really care
about you — don't leave me now. I mean, even Facebook did that when you wanted to delete your
account in 2016, they would say, do you really want to leave? And they would literally put up
photos of the five friends, and they would calculate which of the photos, which five friends
could I show you that would most dissuade you from doing that. And so now we're going to see
more and more sophisticated versions of those kinds of things. But that race to intimacy,
that race to become that slot in your life, the race to deploy, the race to therefore move
recklessly, those are all predictable factors. So just to be clear, because you're sort of challenging
me, you know, can we predict where this is going? And the point is we can predict that it was
going to go so recklessly and go so quickly, because we're also deploying this faster than
we deployed any other technology in history. So the most consequential technology,
the most powerful technology we have ever deployed, and we're deploying it faster than any
other one in history. So for example, it took Facebook four and a half years to get to 100 million
users. It took TikTok nine months. It took ChatGPT, I believe, two months. And they have the app now.
But in the presentation, in that vein, you cite a study where 50% of AI researchers say that their P-Doom is 10% or higher.
But it's based on a non-peer-reviewed survey, on a single question in the survey; they had only about 150 responses.
Should we be swayed by that data, that they're worried?
Because there is that ongoing theory that the people who make this are worried, the cooks are worried about what they're making.
Yeah, so one critique of that survey is it's somehow all about AI hype, that the people who are answering the survey are people inside the companies who want to hype the capabilities so that
they get more funding and that everybody thinks it's bigger than it actually is.
Sure.
But the people who answered that survey were machine learning researchers who actually publish papers and conferences.
They're the people who actually know this stuff the best.
Sure.
If you go inside the industry and talk to the people who build the stuff, the concern is much higher than that survey suggests.
Again, this is why we're doing this, right.
Oh, I was at a dinner party years ago with top people.
Like, I was sort of like, huh, that's interesting.
They were very top people.
I mean, don't just trust a survey.
There's a document of all the quotes from the founders of AI companies over all the years, saying things like,
we're going to wipe out — you know, there's a strong chance we wipe out humanity, we'll probably go extinct.
They're not talking about jobs.
They're talking about a whole bunch of other scenarios.
So don't let one survey be the thing.
We're just trying to, you know, take one data point.
People are worried.
People are deeply worried.
Yeah.
You use the metaphor of a golem.
Explain the golem.
So the reason that we actually came up with that phrase to describe it is that people have often said,
and this is pre-GPT4 coming out, like why are we suddenly so worried about AI?
AI has existed for 20 years.
We haven't freaked out about it until now.
Siri still mispronounces my name
and Google Maps still, you know,
pronounces the street address that I live on wrong,
and why are we suddenly so worried about AI?
And so one of the things that in our own work
in trying to figure out how we would explain this to people
was sort of realizing that we needed to tell the part of the story
that in 2017, AI changed
because a new type of AI, a new class of AI, came out
called transformers — it's 200 lines of code,
it's based in deep learning.
That technology created this brand new explosive wave of AI
that is based on generative large language multimodal models —
G-L-L-M-M.
We said, how can we differentiate this new sort of era of AI that we're in
from the past so that people understand why this curve is so explosive and vertical?
Right.
And so we said, okay, let's give that a name so that people can track it better —
as public communicators, Aza and I care deeply about precise communication.
So we just said, let's call them golem-class AIs.
And a golem, of course, is the famous...
The Jewish myth of an inanimate object that then is sort of...
gains animate capabilities.
And that's one of the other factors about generative large language models: as you
pump them with more information and more compute and you train them, they actually
gain new capabilities that the engineers themselves didn't program into them.
Right, that they're learning.
They learn things.
Now, let me be clear, you do not believe these are sentient.
No, and this has nothing to do with...
Make that clear.
They're not humans.
There's this fascinating tendency when human beings think about this, where they get obsessed
with the question of whether they can...
think.
Sci-fi.
Sci-fi.
That's why.
Yeah.
But it actually kind of demonstrates a predisposition of humans.
So imagine Neanderthals are making Homo sapiens in a lab and they become obsessed with the
question when it comes out, when this thing is more intelligent, is it going to be sentient
like Neanderthals?
It's just a bias of how our brains work.
Right.
When really the way, what really matters is can you anticipate the capabilities of something
that's smarter than you?
So imagine you're a Neanderthal.
You're living in a Neanderthal brain.
You can't think about humans, once they pop out, inventing computers,
inventing energy,
inventing oil-based hydrocarbon economies,
inventing, you know, language.
Right.
So we don't know,
which you're essentially saying we don't know.
It's inconceivable what it is,
but it's not sentient.
And I think that's,
because then we attribute emotions to it.
Right.
Well, maybe eventually those questions will matter,
but they're just not the questions that matter now.
The question of whether or not it is sentient —
it doesn't have to be.
No.
There's enormous dangers that can just emerge
from just growing these capabilities
and entangling this new alien intelligence
with society faster than we actually know what's there.
Alien is an interesting word that you use, because it's one that Elon Musk used many years ago.
He said they treat us like aliens would treat a house cat.
But then he changed it to we're an anthill and they're making a highway.
They don't really, they're not mad at us.
No.
They don't care.
No, they're just doing things from their perspective that makes sense.
Makes sense.
But just like, by the way, just like social media was.
Social media was doing.
So, social media — let me argue that AI might have already taken control of humanity
in the form of first contact with AI,
which is social media.
What are all of us running around the world doing every day?
What are all of our political fears?
What are all of our elections?
They're all driven by social media.
We've been in the social media AI, like, brain implant for 10 years.
We don't need Elon Musk's brain implant.
We already have one.
It's called social media.
It's been feeding us our worldviews and the umwelts
that define how we see reality for 10 years.
And the noisiest people, yeah.
And that has warped our collective consciousness.
And so are you free if all the information you've ever been looking at
has already been determined by an AI for the last 10 years?
And you're running confirmation bias on a stack of stuff
that has been pre-selected from the outrage selection feed
of Twitter and the rest of it.
And so you could argue that AI has already taken over society
in a subtle way.
I don't mean taken over in the sense that its values are driving us,
but in the sense that, you know, just like we don't have regular chickens
anymore, we have the kind of chickens that have been domesticated
for their, you know, their meat.
We don't have regular cows.
We have the kind of cows that have been domesticated for their milk and their meat.
We don't have regular humans anymore.
We have AI engagement optimized humans.
So one of the things you did, you and Aza did, was you made a lot of news when you tested Snapchat's AI.
It's called My AI. Posing as a 13-year-old, you got it to give advice on how to set the mood for sex with a 35-year-old.
Since then, they've fixed it.
They think they've fixed it.
Aza tested it a few days ago.
It still happens.
It's suggesting you bring candles for your first romantic time — with a 13-year-old, with a 38- or 41-year-old, I think it was.
So it doesn't say a couple of the suggestions, but it still does say
some of those things. And you can still get it to do those things. By the way, I've gotten emails from
parents since we gave that presentation, and their kids have independently found it doing things
like that. Doing things like that. So it's still not fixed. They just can't anticipate all the
problems. Well, it's actually even worse than that. It's just important for listeners to know,
just to be fair to Snapchat, they actually did not roll that My AI bot out to all of its —
I can't remember if it's 700 million — users. They didn't roll it out to all their users. They
rolled it out to only paid subscribers at first, which is something like 2 to 3 million users.
but of course
just two weeks ago
or something like that
they released it to all their users
why do they do that
because they're in a race
to dominate that intimate spot in your life
everyone wants to be the Scarlett Johansson
"Her" AI bot in your ear
you both signed a letter calling for the six-month pause
on giant AI experiments
Elon did too.
Elon Musk did too.
It's unfortunate that that letter
got defined by Elon's participation in that,
because he looked like he was doing his own business.
Well, later, obviously, he then also
started his own AI company,
and so obviously that delegitimized it.
Yeah, he also laughed and said he knew it would be futile to sign it.
So why make that?
Many people think it was a futile effort.
Well, these are separate topics.
I want to make sure we really slow down and actually distinguish here.
The founders of the field of machine learning started that, you know,
helped sign that letter.
Steve Wozniak signed the letter.
The co-founder of Siri signed the letter.
Andrew Yang, et cetera, all of us at the Center for Humane Technology.
That letter is because the Overton window of society,
how unsafe and dangerous this is, was not well known.
The purpose of that letter was to make it very well known
that this field is much more dangerous
than what people understand.
And I think there is a legitimate, we know the Future of Life Institute folks
who were really kind of spearheading the letter.
There was a lot of debate about what is the appropriate time to call for a slowdown.
And by the way, I think slowdown is also badly named in retrospect.
I think something like redirection of all the energy of those labs
into safety work and safety research and guardrails.
So imagine it's six months of — instead of an AI winter — an AI harvest,
an AI summer, when you harvest the benefits that you have,
and you build understanding of what the capabilities are inside of everything that's been released.
Did you imagine this was going to happen, that they would go, oh, yes, oh, yes, I see your point.
Well, you know, being connected to the team that did it
and kind of being privy to some of the internal conversations,
I think people were all surprised how many incredible people did sign the letter.
They did, yeah.
Many people sign the letter.
It's funny that people look at it and maybe say, this is futile,
but it's like saying, you know, just because something is hard
doesn't mean it shouldn't be the intention.
And one of the interesting things is that if you talk to an engineer and you say,
oh, like, we're going to build this AGI thing.
They're like, oh, that sounds really hard.
But it's like that we're so compelled by the idea of building these AI systems,
these AGI systems, a God that I could talk to,
that they say, I don't care how hard it is.
And so they keep racing towards it.
And it's been, you know, whatever — 50, 100 years
that people have been working on this.
In other words, we don't say because something's hard,
we shouldn't keep going and try to build it anyway.
Whereas if I say coordination is hard for the whole world,
people say, oh, let's just throw up our hands and say it's never going to happen.
We need to get good at coordination.
All of our world's problems are coordination problems.
Right, we do it with nuclear energy.
We do it with a lot of things.
We're able to limit nukes to put a pin on it, though.
If I said, you know, it's inevitable that all countries are going to get nukes.
Let's not do anything about it.
In fact, let's just let every country pursue it and just like not do anything.
We probably wouldn't be here today.
A lot of people had to be very concerned about it and move into action to say something different needs to happen.
But people could see it — a nuclear war, we saw it happen with the atom bombs.
So tell me, give me your best case against a pause.
And one of the more compelling criticisms is the U.S. is going to fall behind China.
This is something I heard from Mark Zuckerberg about social media in general or tech in general.
Which is interesting.
China — oh, they use the same China argument every time.
They like to drag it out.
But it's concerning, it is.
It absolutely is. China has shown itself to have very few governors.
The unregulated deployment of AI would be the reason we lose to China.
If worse actors do beat you in dominance in deploying AI — people with no morals, with no safety considerations, with no concerns, with different values for the future of the world kind of society, Chinese digital authoritarianism values or something like that, or Chinese Communist Party values — then we certainly wouldn't want to lose to that.
So I think not, if there was a sincere risk that that would happen,
there would be a good reason to say, let's not call for that.
But I would actually argue that the unregulated deployment of AI
is what is causing the West to lose to China.
Let me give you the example of social media.
Social media was the unregulated deployment of AI to society.
The breakdown of democracy's ability to coordinate
because we no longer have a shared...
That's really good for authoritarianism.
Why are democracies backsliding everywhere around the world all at once?
Barbara F. Walter wrote a book called How Civil Wars Start.
She talks about anocracies, democracies that are backsliding everywhere.
I'm not blaming it all on social media, but we're seeing it happen rapidly in all these countries
that have been governed by the information environment created by social media.
And if a society cannot coordinate, can it deal with poverty, can it deal with inequality, can it deal with climate change?
So we shot ourselves in the foot and now we're going for the arms.
Yeah.
That kind of thing.
I'm going to go to — I've interviewed you a number of times —
one we did in 2017, as I said,
before you and Aza founded the Center for Humane Technology.
Back then, you were focused on social media,
as we discussed earlier,
showing why revenue models built on monetizing our attention are bad for us,
because a lot of this is about monetization
and who's going to have the next intimate relationship,
which they've been trying to do forever in different ways,
through Siri and all kinds of different things.
But now they really want you to be theirs, essentially.
Let's play a clip from it.
Right now, essentially, you know, Apple, Google, and Facebook
are kind of like these private companies
who collectively are the urban planners
of a billion people's attentional landscape.
Right. That's a great way to put it.
We kind of all live in this invisible city.
Right. Which they created.
Which they created.
And there's — what's the question?
It's unlike a democracy, where you have some civic representation and you can say, well, who's the mayor?
And should there be a stoplight there?
Stoplight on our phone or blinker signals between the cars or these kinds of things.
We don't have any representation except if we don't use the product or don't buy it.
And that's not really representation because the city itself is.
So attention taxation without representation.
Maybe, yeah.
But so I think, you know, there's this question of how do we create that accountability loop?
You know, that was very well put.
And I took it further.
I said it's like The Purge.
They don't, they actually own the city and they don't do anything.
Oh, yeah.
We can't do anything.
And they won't do anything.
They have no stop signs.
They have no streets.
They have no sewage.
Everything else.
So I took your thought a step further.
Talk about AI firms becoming the new urban planners of, I guess, the attentional landscape.
Because that's what they want.
It's more than attention they want.
They want to own you, right?
I mean, it's what you're saying.
Well, so there's really, I want to separate between two different economies.
So there's the engagement economy, which is the race to dominate, own, and commodify human experience.
So that's the-social media.
Social media is the biggest player in that space.
But VR is in that space.
YouTube is in that space.
Netflix is in that space.
It's the race to say, look at me.
Look at me, all the things that construct your reality that determine from the moment you wake up and your eyes open to the moment your eyes close at the end of
the night. Who owns that space? Your attention. That's the engagement economy. That's the
attention economy. And there's specific actors in that space. AI will be applied to that
economy, just like AI will be applied to all sorts of other economies. Also, the cyber hacking
economy. AI will be applied to the battery, you know, storage economy. It's more like the internet.
Yeah. It's bigger. AI is a much bigger thing. So there's a subpart of the AI economy,
which is the engagement economy, and AI will supercharge the harms of social media there.
because before we had people A-B testing a handful of messages on social media
and figuring out like Cambridge Analytica, which one works best for each political tribe.
Now you're going to have AIs that do that, and there's a paper out called, I think it's called
silicon sampling.
So you can actually sample a virtual group. Like, instead of running Frank Luntz focus groups around
the world, you can kind of have a language model chatbot that you talk to, and it will
answer questions as if someone is a 35-year-old in Kansas City who has two kids,
and so you can run even perfect message testing.
Right, so you don't need to talk to people.
So you don't need to talk to people anymore.
You know what they're going to say.
You can do a million things like that.
And so the loneliness crisis that we see,
the mental health crisis that we see,
the sexualization of young kids that we see,
the online harassment situation that we see,
all that's just going to get supercharged with AI.
And the ability to create AlphaPersuade —
which is just like there was AlphaGo and Alpha Chess,
where the system is playing chess against itself
and kind of getting much, much better —
it's now going to be able to hyper-manipulate
you and hyper-persuade you.
So what you're talking about is social media as a lower being than AI.
AI powers everything.
Social media is one.
But we couldn't even regulate social media.
Is society aware of the need for regulation since we didn't do it for social media?
So the point we made in this AI dilemma presentation is that we were too late with social
media because we waited for it to entangle itself with journalism, with media, with
elections with business, because now businesses can only reach their consumers if they have
an Instagram page and use marketing on Facebook and Instagram and so on. Social media captured
too many of the fundamental life's organs of how our society works. And that's why it's been
very hard to regulate. I mean, you know, certain parties benefit, certain politicians benefit.
Can you regulate, would you want to ban TikTok if you're a politician or a party that's currently
winning a lot of elections by being really good at TikTok? Right. Right. So once things start
to entangle themselves, it's very hard to regulate them. There's too many vested interests.
With AI, we have not yet allowed this thing to roll out.
I mean, now it's obviously happening incredibly fast.
We gave the presentation a few months ago.
The whole point of it, before GPT-4, was that we need to act before this happens.
One good example of this happening in history was a treaty to ban blinding laser weapons from the battlefield before they were actually ever used.
To blind the soldiers.
To blind soldiers.
Yes, this would be a high-energy laser that has the capability —
you point it at someone
and it just blinds them.
But we're just like, you know what?
In the game of war, which is a ruthless game where you kill other human beings,
even as ruthless as that game is,
that is just — we don't want to allow that.
And even before it was ever deployed,
that was one of maybe the most optimistic examples
where humanity could sort of use our higher selves
to recognize that's a future game we don't want to play.
It goes into the killer robot part of the portion of the show, right?
Then there's the slaughter bots.
How do we ban autonomous weapons?
How do we ban recombinant DNA engineering
and human cloning, things like this?
And so this is another one of those situations
and we need to look to,
especially the example of the blinding laser weapons,
because that was in advance of the technology ever getting fully deployed.
Because a lot of the kind of guardrails that we're going to need internationally
are going to be saying no one would want that future race to happen.
So let's prevent that race.
Right. But that's nation states.
Now, AI, anybody could do it.
The same thing, CRISPR, though, they definitely,
scientists got together and had standards.
And this is much easier to be able to do what you want,
if we are all in a group together coordinating this.
So if I want to steelman the AI doomers and the P-Doomers
that have a really high number for that P-Doom number,
it's because it's so hard to prevent the proliferation
that many people think that we're doomed.
Just to be really clear on why that's also a very legitimate thing.
That is certainly...
That would be my biggest P-Doom.
This is too easy for lots of people.
So let's just hang there for a moment.
Just really recognize that.
That's not being a doomer.
That's just being an honest viewer of the risks.
Now, if something other than that is going to happen,
you could involve governments and law to say,
hey, we need to get maybe more restrictive about GitHub and Hugging Face and where these models go.
Maybe we need export controls.
There are people who are working on models of how do we — just like there are 3D-printed guns as a file.
You can't just send those around the open internet.
We put export controls on those kinds of things.
It's a dangerous kind of information.
So now imagine there's a new kind of information that's not a 3D-printed gun,
but it's like a 3D-printed gun that actually self-replicates and self-improves and gets into a bigger and bigger gun.
And builds itself.
that's a new class.
That's not just free speech.
The Founding Fathers couldn't anticipate
something that self-replicates
and self-improves being a class of speech.
That's not the kind of speech that they were trying to protect.
Part of what we need here are new legal categories
for these new kinds of speech.
Sam Altman, who runs OpenAI,
was on the Hill calling for AI regulation.
They all are.
You can't say you didn't warn them, right?
A lot of Tex-Eos have claimed they want regulation,
but they've also spent a lot of money previously
on stopping antitrust,
stopping algorithm of transparency,
stopping any privacy regulation.
Do you believe this class of CEOs?
Because a lot of them are saying,
this is dangerous.
Would you please regulate this?
Yeah, so you're pointing to what happened with social media,
which was that publicly they would say,
we need regulation, we would need regulation.
When you talk to the staffers...
They never said this is dangerous, we need, right?
He never said dangerous.
He says dangerous.
He says dangerous.
And I want a golf clap that, you know,
we always want to endorse and celebrate
when there is actually an honest recognition of the risks.
I mean, to Sam Altman's credit, he has been saying in public settings,
I think much to the chagrin of maybe his investors and other folks,
that there are existential risks here.
I mean, what CEO goes out there saying this could actually wipe out humanity
and not just because of jobs.
I mean, so we should celebrate that he's being honest about the risks.
We actually do need an honest conversation about it.
However, as you said, in the history of social media,
it is very easy to publicly advocate for regulation
and then your policy teams follow up with all the staffers
and then say, let me redline this,
redline that. That's never going to work. And they just sort of stall it so nothing actually
ever happens. I don't think it's that bad faith in this context. I do think that some kind
of regulation is needed. Sam Altman talked about GPU licensing, licensing of training
runs. If you're going to run a large frontier model, you're going to do a massive training run.
You've got to get a license to do that. You're building a — just like we have, the Wuhan Institute of
Virology was a biosafety level four lab doing advanced, you know, kind of gain of function
research. If you're building a level four lab, you need level four practices and responsibilities.
Even there, though, we know that that may not have been
enough, whatever the safety practices. We're now building AI systems that are super advanced. And the
question is, do we actually have the safety practices? Are we treating it like a top lab?
Well, the first thing is, are we treating it that way? And then the second is, do we even know
what would constitute safety? So this gets to the end question you're asking: can we
even do this safely? Is that even possible? Because think of AI as like a biosafety level 10 —
I'm inventing it right now — but a biosafety level 10 lab where I invent a pathogen that, the
second it's released, kills everyone instantly. Let's just imagine that that was actually
possible. Well, you might say, well, let's let people have that scientific capacity.
We want to just see. Is that even possible? We want to test it. So we can build a vaccine or
prevention systems against a pathogen that could kill everyone instantly. But the question is
to do that experimental research, what if there was, we didn't have biosafety level 10
practices? We only had biosafety level 10 dangerous capabilities. Would we want to pursue
biosafety level 10 labs? I think with AI, the deeper question is, with great power comes —
you cannot have the power of gods
without the wisdom, love, and prudence of gods.
And right now we are handing out
and democratizing godlike powers
without actually even knowing
what would constitute the love, prudence, and wisdom
that's needed for it.
And I think the story and the parable of the Lord of the Rings
is that there are some, you know,
why did they want to throw the ring into Mount Doom?
There's some kinds of powers that when you see them,
you say, if we're not actually wise enough to hold
this ring and put it on, we have to know
which rings we have to say, hey, let's collectively not put on that ring.
Right, I get that. I understand that.
One of the things is that when you
get this dramatic, like I said at the beginning, does that push people off? Like, a
pathogen — we get it. Like, we've just been through COVID, and that was bad enough. And there's
probably a pathogen that could kill people instantly. It's not how people think.
Yeah, well, let's actually just make that example real for a second, because that was
a hypothetical thing about the biosafety level 10 lab. Can AI accelerate the development of pathogens
and gain of function research and people tinkering with dangerous, lethal bio-weapons? Can it democratize
that? Can it make more people able to do that?
Can more people be able to make explosives with household
materials? Yes, we don't want
that. That's really dangerous. That's a very concrete thing.
That's not AI doomers. There's real concrete stuff
we have to respond to here.
We'll be back in a minute.
Tell me
something that AI could be good for,
Let's talk about that, because I think I'm a little less extreme than you.
There are, and I think at the beginning of the Internet, I was like, this could be great.
And, of course, then you saw them not worrying about the not-so-great.
And I think it's sort of that tools and weapons, speaking of which from Microsoft,
that was the Microsoft president, Brad Smith, talked about tools and weapons.
Some are a knife as a tool and a weapon.
So what is the tool part of this, that is a good thing?
So first of all, I think this is another one of those things, just like the "is the AI sentient" question —
when people hear me saying all this,
they think I don't hear, or don't know about,
or am not talking about all the positives
it can do. This is another
fallacy of how human brains work.
Just like we get obsessed with the question of whether it's sentient,
we get obsessed with one-sidedness —
like it has all the positives.
Just as fast as you can design cyber weapons
with AI — it accelerates the creation of that —
you can also identify all the vulnerabilities
in code, or many vulnerabilities in code.
You can invent cures to diseases.
You can invent new solutions for battery storage.
We're going to have, as I said in The Social Dilemma,
what's going to be confusing about this era
is it's simultaneous utopia and dystopia.
I can't think of so many good things about social media.
I couldn't.
I can think of dozens here, dozens here.
And there I was like, maybe we'll all get along and like each other better.
Social media is like increasing the flows of information.
People are able to maintain many more relationships,
old high school sweetheart.
Sure, but not like this.
This is gene folding.
This is drug discovery.
This is real movement
forward. Absolutely. But I'll tell a story. So the real confusing thing is, is it possible on the
current development path to get those goods without the bads? What if it was not possible? What if I
can only get that, you know, the synthetic biology capabilities that let me solve problems? But there
was no way to do it without also enabling bad guys to create this pathogen like you're talking about,
for example. So just to make it personal. My mother died of cancer.
And I, like any human being,
would do anything to have my mother still be here with me.
And if you told me that there was an AI
that was going to be able to discover a cure for my mother
that would have her still be with me today,
obviously I would want that cure.
If you told me that the only way for that cure to be developed
was to also unleash capabilities such that the world would get wrecked —
This is a dinner party, one of those dinner party questions.
Would you kill 100 million people to save?
But it's real.
Yeah.
I mean, I'm just saying there's certain domains where there's no way to do the one side without doing the other side.
And if you told me that, just really on a personal level, as much as I want my mom to be here today, I would not have made that trade.
Well, you're talking about an old Paul Virilio quote, which is you can't have a ship without a shipwreck or electricity without the electric chair.
We do that every day.
A car is, net, net cars have been great.
Net, they've been bad now, you know what I mean?
But if you have godlike powers that can kind of break society in much more
fundamental ways. So now again, we're talking about benefits that are literally godlike
inventing solutions for every problem. But if it also just undermines the existence of how life
can work. So that's your greatest worry, is this idea of reality fracturing in ways they're
impossible to get back? No, I mean, all of it together. If AI is unleashed and democratized
to everybody, no matter how high the tower of benefits that AI assembles, if it also simultaneously
crumbles the foundation of that tower, it won't really matter. What kind of society can receive
a cancer drug if no one knows what's true, there's cyberattacks everywhere, things are blowing up,
and there's pathogens that have locked down the world again. Think about how bad COVID was.
People forget, like, going through one pandemic, just one pandemic. Imagine that just happens
a few more times. Like that can quickly, we saw the edges of our supply chains. We saw how much
money had to be printed to keep the economy going. It's pretty easy to break society if you have
a few more of these things going. And so again, how will cancer drugs
sort of flow in that society that has kind of stopped working?
And I don't mean, again, AI doom, Eliezer Yudkowsky, AGI kills everybody in one instant.
I'm talking about dysfunction at a scale that is so much greater.
Are we getting closer to regulation?
Did you find those hearings?
Did you have any good takeaways from them?
And where is it going to go from here?
Who knows where it's going to go?
I didn't see all of the hearing.
I was happy to see a couple things, which were based on structural issues.
So one was actually the repeated discussion of multilateral
bodies. So something like
an IAEA, like the International Atomic Energy
Agency, something like that for AI
that's actually doing global monitoring
and regulation
of AI systems of large frontier
AI systems. I think, you know, Sam
was proposing that. That was repeated several times.
I was surprised to see that. I think that's actually great
because it is a global problem. What's the answer when we
develop nuclear weapons? Is it that Congress passes a law
to deal with nukes here? No. It's a
global coordination around how do we limit nukes to
nine countries? How do we make sure we don't do above-ground
nuclear testing? So I was happy to see that.
in the hearing. I was also happy to see
multiple members of Congress, including, I think it was
Lindsey Graham and the Republicans who are typically not
for new regulatory agencies,
but them saying they recognize
that we need one.
Because the system is — you know,
E.O. Wilson said we have Paleolithic emotions,
medieval institutions, and godlike tech.
Medieval institutions and medieval laws,
18th-century ideas, 19th-century
laws and ideas, don't match
21st-century issues. Like —
Larry Lessig has a paper out
about replicant speech. Should we protect the speech
of generative robots
the same way we protect free speech?
The Founding Fathers had totally different ideas
about what that was about.
No, we need to update those laws.
Part of our medieval institutions
are institutions that don't move as fast as the godlike tech.
So if a virus is moving at 21st century speeds
and your immune system is moving at 18th century speeds,
your immune system being regulation.
So do you have any hope for any significant legislation?
I mean, Vice President Harris met with them —
they're all meeting with everybody, for sure, and early compared to the other things.
I don't remember, Kara, but when we did that briefing in D.C.,
back here in whatever it was February or March,
we said one of the things we really want to happen
is for the White House to convene a gathering of all the CEOs.
And that I would have never thought would have ever happened,
and it did happen.
And they would have never thought there would be a hearing.
And they mentioned it at the G7 this week.
And they did it.
They mentioned it at the G7 this week.
So there's things that are moving.
I don't want people to just be optimistic, by the way.
There needs to be a massive effort
and coordinated response to make the right things happen here.
Right. Vice President Harris led that meeting
and told them they have ethical, moral,
and legal responsibility to ensure the safety and security of their products.
They certainly don't seem protected by Section 230.
They're probably not protected.
There is liability attached to some of this, which could be good.
That's good.
Is there any?
We talk to people inside the companies.
All we're trying to do is figure out what needs to happen,
and often the people inside the companies who work on safety teams will say,
like, I can't advocate for this publicly, but, you know, we need liability.
Because talking about responsibility and ethics just get bulldozed by incentives.
There needs to be liability that creates real guardrails.
Right.
Let's do a lightning round.
What you would say to the following people if they were here right now,
Sam Altman, CEO of Open AI.
What would you say to him, Tristan?
Gather all of the top leaders to negotiate a coordinated way to get this right.
Move at a pace that we can get this right, including working with the Chinese
and getting multilateral negotiations happening, and say that's what needs to happen.
It's not about what you do with your company and your safety practices and how much RLHF.
So multilateral.
Multi.
Coordination.
Satya Nadella and Sundar Pichai.
I'm going to mush them together.
Retract the arms race instead of saying let's make Google dance, which is what Satya Nadella said.
We have to find a way to move back
into a domain of advanced capabilities being held back.
Buying ourselves a little bit more time matters.
Yeah, well, they've been sick of being pantsed the entire last decade.
I think they want to do that in some fashion.
Reid Hoffman and Mustafa Suleyman, co-founders of
Inflection AI, which put out a chatbot this month.
I mean, honestly, it'd be the same things as with Sam. It's like,
everyone needs to work together to get this right.
We need to see this as dangerous for all of humanity, right?
This isn't just about the tech companies. This is all of us as human beings,
and there's dangerous outcomes that land for all of us.
What about Elon Musk?
He signed the AI pause letter and has been outspoken on the danger for years.
He was one of the earliest people that were talking about it along with Sam, as I recall a decade ago.
But he, of course, started his own company, xAI, where he wants to get to the truth AI, whatever that means.
We need to escape this logic of, I don't think the other guys are going to do it right.
So I'm going to therefore start my own thing to do it safely, which is how we got to the arms race that's now driving all the unsafety.
And so the logic of, I don't believe in the way the other guys are doing it —
and mostly for competitive reasons, probably, underneath the hood —
I'm doing my own thing.
That logic doesn't work.
He's very competitive.
Do you blame them personally for putting us at risk?
Or is it just one of these group things that everyone goes along?
So there's this really interesting dynamic where, when there is a race —
which all the problems are driven by — the race is: if I don't do the mining in that virgin place,
or if I don't do the deforestation, I just lose to the guy that will.
If I don't dump the chemicals and my competitors do.
Right, and I'll do it more safely.
So better me doing it than the other guy as long as I get more profit.
And so everyone has that self-reinforcing logic.
So there's races everywhere that are the real driver
of most of the issues that we're seeing.
And there's a temptation once we diagnose it as a race,
a bad race, to then absolve the companies of responsibility.
I think we have to do both.
Like there's both a race, and also Satya Nadella and Sam,
you know, helped accelerate that race
in a way that actually we weren't on a trajectory toward.
There were human choices involved at that moment in the timeline.
I talked to people who helped found some of the original AGI labs
early in the day. They said, you know, if we go back 15 years, they would have said, let's put
a ban on pursuing artificial general intelligence, building these large systems that ingest
the world's knowledge about everything. We don't need to do that. We should be building advanced
applied AI systems like Alpha Fold that says let's do specific targeted research domains and
applications. If we were living in that world, how different might we be? You know, we had three
rules of technology we put in that AI dilemma presentation. When you invent a new technology,
you create a new class of responsibilities. Second rule of technology, if the new technology
you invent confers power, it will start a race.
If I don't adopt the plow and start out-competing the other society,
I'll lose to the guy that does adopt the plow.
If I don't adopt social media to get more efficient, et cetera.
So it starts a race.
Third rule of technology, if you do not coordinate that race,
the race will end in tragedy.
We need to become a society that is incredibly good
at identifying bad games rather than bad guys.
Right now, we do have bad guys.
We have, again, CEOs that do bear some responsibility for some choices.
But right now we're always just, that drives up polarization
because you put all the energy into going after one CEO or one company
when we have to get good at slaying bad games.
Well, except wouldn't you agree that one of the reasons social media got so out of whack
was because of Mark Zuckerberg and his huge power?
Like, he had power over the biggest thing and just was both badly educated —
Mark Zuckerberg made a ton of bad decisions while denying many of the harms
most of the way through until just recently, including that it was a crazy idea
that fake news had anything to do with the election.
You know, later they found the, you know, the Russia stuff was, oh, this is all overblown.
Which, you know, I understand there's the Trump Russia stuff, which is, there may have been overblown stuff there.
But the Facebook content, they said, oh, it didn't really reach that many people.
And it ended up reaching 150 million Americans.
No, I get it.
Facebook's own research said that 64% of extreme.
Yeah, exactly.
We could go on it forever about that.
Geoffrey Hinton, who is known as one of the godfathers of AI, not the only one, has recently been sounding the alarm.
Do you think others will follow suit?
That was a big deal when he did that.
It really was.
I was very aware of him in AI.
Do you think it'll change the direction,
or is he just Robert Oppenheimer saying,
I am become death?
You know, one of the things that struck me both,
you know, I came out too, right?
I was an early person coming out,
and I've seen the effects of insiders coming out.
Frances Haugen, the Facebook whistleblower,
is a close friend of mine,
and, you know, her coming out made a really big difference.
The social dilemma, I know, impacted her.
It legitimized for many people inside the companies
that they felt like something was wrong
and now many more people came out
I think the more people come out —
the big names come out, the Geoff Hintons come out —
it actually makes more people question.
Just, I think, a few days ago,
there was a street protest
outside of DeepMind's headquarters in London
saying we need to pause AI.
I don't know if you saw that.
No, it's comparable to climate change
in a lot of ways
There are real people inside their own companies
that are saying there's a problem here
which is why it's really important
that when the people who are making something
who know it most intimately are saying there's a real problem here
when the head product guy at Twitter says,
you know, I don't let my own kids use social media.
That's all you need to know about whether something is good or safe.
So one of the things, there's some proposals you brought up,
there's one based on work by Taiwan's Digital Minister,
who's so creative, where a hundred regular people get in a room
with AI experts and they come out with a proposal.
That's an interesting one.
You came up with one: having a national televised discussion.
Major AI labs, lead safety actors, and other civic actors
talk on TV. That's hard, because then you get — on one hand, I could see that working but not
working. Yeah, that has to be done carefully. Let me explain the Taiwan one really quickly.
Okay. So let's imagine, there's kind of two attractors for where the world is going right now.
One attractor is, I trust everyone to do the right thing and I'm going to distribute God-like
AI powers, superhuman powers to everyone. Everyone can build bio-weapons. Everyone can make generative
media, find loopholes in law, manipulate religions, do fake everything. That world lands in
continual chaos and catastrophe because it's just basically I'm handing everyone the power to do
anything. Oh, yeah. Everyone has superpowers, yeah. Right. So that's one outcome. That's one
attractor. Think of it like a 3D field, and it's kind of like sucking the world into one
gravity well. It's just continual catastrophes. But go ahead. Yeah. The other side is
dystopia, which is instead of trusting everyone to do the right thing with these superhuman
powers, I don't trust anyone to do the right thing. So I create this sort of dystopian state
that sort of has surveillance and monitors everyone. That's kind of the Chinese digital authoritarianism
outcome. That's the other deep attractor for the world, given this new kind of tech that's
entering into the world. So the world is currently moving towards both of those, and actually,
the more frequently the continual catastrophes happen, the more it's going to drive us
towards the direction of the dystopia. So in both cases, we're getting a self-reinforcing
loop. So the reason I mentioned Taiwan is what we need is a middle way or third
attractor, which is what has the values of an open society, a democratic society, in which
people have freedom. But instead of naively trusting everyone to do the right thing, instead of also
not trusting anyone to do the right thing, we have what's called warranted trust. So think of it as
a loop. Technology, to the degree it impacts society, has to constitute a wiser, more responsible,
more enlightened culture. A more enlightened culture supports stronger, upgraded institutions.
Those upgraded institutions set the right kind of regulatory guardrails, et cetera, for better
technology that then is in a loop with constituting better culture. That's the upward spiral. We are
currently living in the downward spiral.
Technology decoheres culture, drives outrage,
loneliness.
That incoherent culture can't support any institutional responses to anything.
That incapacitated dysfunctional set of institutions
doesn't regulate technology, which allows the downward spiral to continue.
The upward spiral is what we need to get to.
And the third way, what Taiwan is doing
is actually proving that you can use technology in a way that gets you the upward spiral.
Audrey Tang's work is showing that you can use AI to find unlikely consensus across groups.
There's only so many people that can fit into that town hall
and get mad at each other?
What if she creates a digitally augmented process
where people put in all their ideas and opinions about AI
and we can actually use AI to find the coherence,
the shared areas of agreement?
That we all share.
And do that even faster than we could do without the tech.
So this is not techno-utopianism,
it's techno-realism of applying the AI
to get a faster OODA loop,
a faster observe, orient, decide, and act loop
so that the institutions are moving as fast
as the evolutionary pace of technology.
And she's got the best, closest example to that.
And that's kind of part of what a third attractor needs to identify.
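Since the mechanism is only described at a high level here, the following is a minimal, illustrative sketch of the kind of bridging-consensus computation performed by tools like Pol.is, which Audrey Tang's work draws on. This is not her actual system; the vote matrix, the two-group clustering, and the scoring rule are assumptions made purely for illustration: group participants by how they voted, then surface the statements that every group tends to agree with.

```python
# Illustrative sketch only -- not Pol.is or any real deployment.
# Assumption: participants vote agree (+1), disagree (-1), or pass (0)
# on short statements about AI policy (the statements are hypothetical).

import numpy as np
from sklearn.cluster import KMeans

# Rows = participants, columns = statements.
votes = np.array([
    [ 1,  1, -1,  1],
    [ 1,  1, -1,  1],
    [ 1,  1,  0,  1],
    [-1,  1,  1,  1],
    [-1,  1,  1,  0],
    [-1,  1,  1,  1],
])
statements = [
    "Pause the largest AI training runs",
    "Require pre-deployment safety testing",
    "Open-source all frontier models",
    "Fund public-interest AI research",
]

# 1. Cluster participants into opinion groups based on how they voted.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

# 2. Score each statement by its *minimum* agreement rate across groups:
#    a statement only scores high if every group tends to agree with it.
for j, text in enumerate(statements):
    per_group = [(votes[groups == g, j] == 1).mean() for g in np.unique(groups)]
    print(f"{min(per_group):.2f}  {text}")
```

The point is only to make "finding the coherence" concrete: a statement is scored by its minimum agreement across opinion groups, so divisive statements sink and shared ground rises to the top.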
Right, where people feel that they've been heard
and at the same time don't feel the need to scream.
Right.
Which is absolutely true.
She's really quite something.
Having a national debate about it,
I think people will just take away whatever they want from it.
Yeah, let me explain that, though,
which was that it's modeled after the film The Day After.
So in the previous era of a new technology that had the power to...
I was there in college when that happened.
In college when it came out? I was not born yet, but...
Let me just explain.
This is a movie about the nuclear bomb blowing up,
and they convened groups all over the country to talk about it,
watch the movie, and then discuss it.
And it really was terrifying at the time.
But we were all joined together in a way we're not anymore.
I can't even imagine that happening right now.
It was a made-for-TV movie commissioned by ABC
where the director Nicholas Meyer,
who also directed Star Trek II: The Wrath of Khan,
and some other great films.
They put together this film that was basically noticing
that nuclear war, the possibility of it,
existed in a kind of a repressed place inside the human mind.
No one wanted to think about this thing
that was ever present. That actually was a real possibility
because it was the active Cold War and it was increasing
and escalating with Reagan and
Gorbachev. So they decided, let's make
a film. It became the most-watched
made-for-TV film in all of
TV history. 100 million Americans
tuned in, I think it was 1983,
and watched it at once. They had a whole
PR campaign, put your kids to bed early,
which actually increased the number of people who
didn't watch it with their kids.
Reagan's biographer, several years
later, said that Reagan got depressed for weeks.
He watched it in the White House film
studio. And when the Reykjavik accords happened, because they actually, I should mention, they
aired the film The Day After in the Soviet Union a few years later, in 1987. And it scared basically
the bejesus out of both the Russians and the U.S. Yeah, it was quite something at the time.
And it made visible and visceral the repressed idea of what we were actually facing. We
actually had the power to destroy ourselves. And it made that visible and visceral for the first
time. And the important point that we mentioned in this AI Dilemma talk that we put online is that after
this, you know, one-and-a-half-hour, whatever it was, film, they aired a one-hour debate where they
had Carl Sagan and, you know, Henry Kissinger and Brent Scowcroft and Elie Wiesel, you know,
who survived the Holocaust, to really debate, like, what we were facing. And that was a democratic way
of saying we don't want five people at the Department of Defense in Russia and the U.S. deciding
whether humanity exists tomorrow or not. Yeah. And similarly, I think we need that kind of debate.
So that's the idea. I don't know about a TV broadcast. We need that kind of thing. Honestly, I don't. I think
everyone is so...
What's interesting is
that was very effective.
That's an interesting thing
to talk about, The Day After,
because it did scare the bejesus out of people.
Watching Jason Robards
disintegrate in real time
was disturbing.
But there was nothing like that,
and now there is a lot like that,
right? Everybody is constantly hit
with information every day.
We didn't...
it was unique,
because we used to have
a commonality that we don't have.
So you have gone on
Glenn Beck's podcast,
God save you,
Brian Kilmeade's podcast.
We did a lot of media
across the board.
Exactly. Do they react differently to your message than progressive audiences?
No.
Because, again, they can split it. Like, progressives:
tech companies are bad.
Well, let me say it differently.
Conservatives, you know, surveillance and the deep state.
Well, exactly.
Social media got polarized.
So actually one of the reasons I'm doing a lot of media across the spectrum is I have a deep fear that this will get unnecessarily politicized.
We do not want that.
That would be the worst thing to have happen when there are deep risks for everybody.
It does not matter which political beliefs you hold.
This really should bring us together.
And so I try to do media across the spectrum
so that we can get universal consensus
that this is a risk to everyone and everything
and the values that we have
and people's ability to live in a future that we care about.
I do this because I really want to live in a future
where kids can be raised and we can live in a good world
as best as we can.
We're facing a lot of dark outcomes.
There's a spectrum of those dark outcomes.
Let's live on the lighter side of that spectrum
rather than the darkest side
or maybe the lights go out.
So one last question.
How do you think the media has been covering it?
Because there is a pressure
if you cover it too negatively,
it's like, oh, come on, don't you see the better... you know, are you missing the bigger picture?
And I know from my personal experience, I'm so sick of being called a bummer or an irritant.
It gets exhausting.
But at the same time, you do want to see maybe this time we can do it better.
Give me hope here because I definitely feel the pressure not to be so negative.
And I still am.
I don't care.
And I think in the end, both of us were right back then, but it doesn't feel good being right.
Everything creates externalities, you know, effects that show up on other people's balance sheets.
If you're a doomer and you think you're just communicating honestly, but you end up terrifying people,
maybe some shooters come around and they start doing violent things because they've been, you know, terrorized by what you've shared.
I think about that a lot.
I think a lot about responsible communication.
So I think there's a really important thing here, which is that there's kind of three psychological places that I think people are landing.
The first is what we call pre-tragic.
I borrow this from a mentor, Daniel Schmachtenberger,
who we've done the Joe Rogan show with.
Pre-tragic is someone who actually doesn't want to look at the tragedy
of whether it's climate or some of the AI issues that are facing us
or social media having downsides, any issue where there's actually,
there is a tragedy, but we don't want to metabolize the tragedy,
so we stay in naive optimism.
We call this kind of person a pre-tragic person
because there's a kind of denial and repression of actual honest things that are facing us.
Because I want to believe, well, things always work out in the end.
Humanity always figures it out.
We muddle our way through.
Those things are partially true too, but let's be really clear about the rest.
Okay, so that's the pre-tragic.
Then there's the person who then stares at the tragedy, and then people tend to get stuck in tragedy.
You either get depressed or you become nihilistic or the other thing that can happen is you actually, it's too hard and you bounce back into pre-tragic.
You bounce back into, I'm going to just ignore that information, go back to my optimism, because it's just too hard to sit in the tragedy.
There's a third place to go, which we call post-tragic,
where you actually stare face-to-face with the actual constraints that are facing us,
which actually means accepting and grieving through some of the realities that we are facing.
I've done that work.
Personally, it's not about me.
I just mean that I think it's a very hard thing to do.
It's humanity's rite of passage.
You have to go through the dark night of the soul and be with that.
So you can be with the actual dimensions of the problems that we're dealing with.
Because then, when you do solutions on the other side of that,
when you're thinking about what do we do,
now you're honest about the space.
You're honest about what it would take to do something about it.
So you're not negative, but people will cast you as that.
So there's something called the pre/trans fallacy,
where someone who's post-tragic can sound like someone on the other side.
It can sound confusing.
So I can sound like a doomer, but really I'm trying to communicate clearly.
People often ask me, like, am I an optimist?
No.
Had to ask, had to ask.
You know, Sam Altman has his little home.
I know he does.
I know he does.
He wanted to ask me what my plan was, you know, just joking.
We're joking around about it.
I said, well, you're smaller than I am.
I'm going to beat you up and take your things and take your whole plan.
He's like, that's a good plan.
I go, it's an excellent plan.
Yeah.
I think I can take you if it came to that.
I think we need to get good at holding each other through to the post-tragic.
I don't know what that looks like, but I know that that's what guides.
me and what we're trying to do.
And if there's anything that I think I want to get even better at, it's that it's hard, once you
take people through all these things, to carry them through to the other side.
Right, because they get hopeless.
They get hopeless.
Yeah, you can be hopeful.
After that thing, I came back, I'm like, we are fucked.
Like, we were so, you know, after that thing.
And I thought, that's not going to go well because most people hide on Instagram or TikTok.
That doesn't feel good.
Let me run away from myself again.
Let me scroll a bunch of photos.
This is going to be a difficult.
time. The more we can go through and see the thing together, I think part of being post-tragic
is actually going through it with each other, like being there with each other as we go through
it. I'm not saying that just as a bullshit throwaway line. I really mean it. I think we need to be
there for each other. All right. Post-Tragic, hand-in-hand. Here we go, Tristan. Let's do it. Thank you.
Okay. Thanks, thanks. Today's show was produced by Nayeema Raza, Blakeney Schick,
Cristian Castro Roussel, and Megan Burney. Special thanks to Mary Mathis. Our engineers are
Fernando Arruda and Rick Kwan. Our theme music is by Trackademicks. If you're already
following this show, welcome to the world of post-tragedy. Hey, it could be worse. If not, it's a high
p(doom) for you. Go wherever you listen to podcasts, search for On with Kara Swisher, and hit follow.
Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network,
and us. We'll be back on Monday with more.
