On with Kara Swisher - A.I. Doomsday with Tristan Harris
Episode Date: May 25, 2023. After Kara and Nayeema review the week's A.I. news, including Sam Altman's Senate testimony and the viral AI-generated image of the Pentagon in flames, we turn to Tristan Harris, co-founder of the Center for Humane Technology and a key voice among the calls for slowing down the A.I. arms race. BTW, do you have any burning questions on career, love or life you'd like Kara or Nayeema's advice on? If so, call 1-888-KARA-PLZ and leave us a voice mail. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Support for this show comes from Constant Contact.
If you struggle just to get your customers to notice you,
Constant Contact has what you need to grab their attention.
Constant Contact's award-winning marketing platform
offers all the automation, integration, and reporting tools
that get your marketing running seamlessly,
all backed by their expert live customer support.
It's time to get going and growing with Constant Contact today.
Ready, set, grow.
Go to ConstantContact.ca and start your free trial today.
Go to ConstantContact.ca for your free trial.
ConstantContact.ca
Support for this podcast comes from Anthropic.
It's not always easy to harness the power and potential of AI.
For all the talk around its revolutionary potential,
a lot of AI systems feel like they're designed for specific tasks,
performed by a select few.
Well, Claude by Anthropic is AI for everyone.
The latest model, Claude 3.5 Sonnet,
offers groundbreaking intelligence at an everyday price.
Claude Sonnet can generate code, help with writing,
and reason through hard problems better than any model before.
You can discover how Claude can transform your business
at anthropic.com slash Claude.
It's on!
Hi, everyone, from New York Magazine and the Vox Media Podcast Network.
I'm Ron DeSantis with a face for Twitter spaces.
Just kidding.
This is On with Kara Swisher, and I'm Kara Swisher.
And I'm Nayeema Raza.
That's not so nice, Kara. Don't we have faces for radio? Guess what? I wasn't meaning to be nice.
Kara. So mean. Anyways, we're not going to focus on Elon today or on Ron DeSantis. We have much
bigger things to talk about, like artificial intelligence. But explain very quickly what's
going on with this Twitter proclamation, his announcement. It's interesting that they're
doing this. Of course, they're trying to pretend the media is losing its mind. It isn't really. It's interesting. Ron DeSantis doesn't want to
announce on mainstream media that he's running. He wants to announce with Elon and David Sachs.
Yeah, that's fine. Whatever. Good luck, guys. Good luck making media. They want to make it
into like media is so mad, and they're not. It's an interesting stunt. We'll see if it works.
They've been doing presidential announcements the same way forever. This is good for Elon Musk only because he's trying to build that, as we talked about extensively,
a media company. That's his effort. And good luck. Good luck. Media is hard, and it's not
very lucrative. He's got Tucker Carlson. He's got Ron DeSantis. He's got David Sachs. He doesn't
have us. No. But anyways, you're off to give a commencement speech today at Cooper Union. Are
you ready to drop the wisdom on young people, Kara?
I was trying hard not to do the drop wisdom thing because I know my own kids would be like, mom, don't tell us what to do.
Are you going to announce a presidential bid at Cooper Union?
That's how I'm doing it in front of the students at Cooper Union.
That's a good idea.
No, I just I want to just talk about tech because they invited me because I'm a tech person.
So where it's going and a little bit about AI and generative AI and where it's going.
Because this will affect these students.
They're focused on this kind of stuff and it will impact their lives.
I'm sure this is going to scare them.
So you're basically telling them they have no jobs when they graduate.
They're going to be worse off than millennials?
No, I'm going to, I talk about their responsibility to monitor this the way we have not monitored early social media and early Internet.
And they have a responsibility to figure out what works best.
And it's in their hands.
So that's what I'm saying.
For this generation, I think it will be about asking good questions, giving good instructions, being very attentive to detail, actually, because that's such a skill that will define your ability to interact with and use AI.
Yeah.
But no commencement yellow pages of AI-protected jobs.
I don't think they want to hear that. Like, okay, you're leaving. Good luck. You're fucked. I don't
think that's really what I'd want to hear. Like, fuck you, old lady. Fuck you, old lady. What are
you doing? Like, I don't, that's not my goal. I want to leave them with some level of hope,
but also responsibility.
Very sweet.
They could point to someone, you, you're out of work.
You, you're out of work.
You, you're, I don't think that's true.
You don't have a job.
You don't get a job.
You don't get a job.
You're such an anti-Oprah, Kara.
I'm not going to do that.
No, I'm not going to do that.
I'm 100% not going to do that.
Yeah, why would I do that to people when they're happy?
We're joking, but the topic of AI and what it means for future livelihoods is obviously
a very critical topic.
It came up during Sam Altman's congressional testimony.
Senators like Richard Blumenthal noted that AI eating jobs is their worst nightmare.
And they asked Sam Altman about this. Let's play a clip.
There will be an impact on jobs. We try to be very clear about that.
And I think it will require partnership between the industry and government,
but mostly action by government to figure out how we want to mitigate that.
But I'm very optimistic about how great the jobs of the future will be.
Well, that was a full answer of everything.
Everything will work or not.
Everything will work.
And if it doesn't work, we'll be in partnership, except it'll really be on you.
Yeah, exactly.
That's the answer.
You know, that's a good answer.
Well done, Sam.
I mean, one thing about the testimony, don't say he didn't warn people.
No one said Facebook's going to cause an insurrection unless you stop it, right?
That's the kind of thing.
So I appreciated that, but he should run for office.
That's a very deft, yeah, political answer.
Well done.
Well done.
Political answers to the politicians.
A lot of action will be needed, according to one global survey by Goldman Sachs.
As many as 300 million jobs worldwide could be automated and replaced by AI.
I imagine the number could be much higher.
How do you see this playing out and what do you think will need to be done?
You know, Scott always says it's overstated in the short run and understated in the long run.
I think ultimately people will shift and it makes sense.
A lot of jobs are rote and silly and shouldn't be there, right? And work
itself is changing, as we've talked about a lot so drastically. You know, I think it's like 60
million hours people are saving not commuting. Like, what do you do with that time? It's a time
for, it's an opportunity for creativity. And the government's got to be there to help figure it out
along the way, because that's why we elected the government. And of course, the government is
non-functional right now with the debt ceiling crisis. But I'm not quite as worried for the jobs thing as others,
but I do think jobs will be affected. I sort of sound like Sam Altman, but it's true.
I think in the long run also, in a nation where social safety nets have been
so fraught and so politicized and so often dismissed as communist or socialist,
AI might be the jumpstart we need to actually invest in some of these welfare programs, like healthcare for all or job training or even universal basic income. Because if you
look forward, it might be very hard. And things like job training might become easier with AI,
actually. But I also think we should ask the question of what's the default assumption on
who owns the training?
We are all training computers to do our job, and are we compensated fairly by our employers to do that?
Mary Gray, a great MacArthur Fellow winner, talks about this,
but kind of the default assumption of who owns your work
is also given to corporates, which may not be the case.
Once again, they're plumbing our stuff to sell back to us.
That's what the internet was, right? And we made the internet, and the internet is us,
and then they sell it back to us. Yeah, we are the product, we are the manufacturer,
and we are the customer. This is really not a good deal, guys. Yeah, yeah, yeah. Anyway,
Sam got a lot of praise last week during his testimony for acknowledging the dangers of AI,
and you just gave him some praise right now. He advocated for regulation. He, you know, used the word dangers, but he kind of punted to the government on the
solution in many cases. And then early this week, after the testimony, he and other leaders of Open
AI followed up with this kind of lengthy report on possible regulation, where they describe what
they see as out of scope, too burdensome. It's basically like, regulate us, but just not too
much.
Well, there is a balance, of course, as you should when you're doing any kind of legislation.
But I think this is going to be a global thing. I think Europe will be involved.
Yeah, they're already way ahead. They're like a two-year head start on us.
Yeah, exactly. So in this case, I think it's because there's such important questions,
and people have seen what happened with a lesser technology. It's already damaging enough that we have some history here, whether it's the
insurrection or not, and not to completely blame social media, of course not, but we've seen what
can happen. And so if you understand that it's that times infinity, it's really important.
And I think people do. I do get a sense from, I've been
interviewing a lot of elected officials recently, and I think they do get it. They were so enthusiastic
about the internet. They're more like, hmm, about this, which I think is a good thing.
It's really hard to talk about AI as distinct from social media because it's also built on top of
that kind of broken and poorly regulated layer. So we saw this earlier this week with this AI
generated deepfake of the Pentagon building on fire. And then what happens
when fake information meets platforms
that aren't good at dealing with disinformation?
Unsafe, you'd say? Perhaps where a president
should announce his election. Yes.
Yeah, a lot could go wrong. Twitter didn't stop it.
But, you know. I think there's one guy
in, like, Singapore
in charge of that. Like, probably. Who knows?
And he's part-time. It's probably his cousin. Exactly. But there was a photo of the Pentagon building on fire that circulated,
first on Facebook, and then more powerfully on Twitter, because it was circulated by a
Blue Check account called Bloomberg Feed, which has no relationship to Bloomberg News.
Well, also more powerfully because Facebook took it down right away and Twitter didn't.
That's really why it was more powerful.
Because the guy in Singapore was sleeping because the time difference.
Yeah, I guess.
Yeah, Cousin Greg was busy figuring out who's going to take over at Waystar right now.
Go, Jo.
Who is?
You know.
Don't tell us.
I do.
We don't want spoilers on this show.
We get very many emails when we spoil things.
I shall not.
But back to this.
We're going to ask our guest Tristan Harris in a minute.
But I want to know, Kara, do you see Sam and Sundar as different from the social media CEOs of yesteryear?
Oh, yes.
Who said, regulate us, and then new government wouldn't really regulate them, especially if they threw money at them?
Yes.
Maybe not Sundar.
Sundar's from the old school.
He has a personality that's very calming.
But I do.
I do.
I think they're not sunshine and roses,
which is a very big difference. And they were sunshine and roses. They were. This is going
to be great. We're Arab Spring, dingity ding, that kind of stuff. So I think, yes, they are
more honest. Yes, thank you. Thank you. Well, they have to be because also the world has
wisened up. Even the folks on the Capitol have wisened up. This is not Mark Zuckerberg's 2018
testimony. No, I mean, they have to. I mean, they'd look ridiculous if they said,
this is all going to be great. Because it's killing us. I don't think that would be a good
move. Well, someone who might say that is Tristan Harris. No, I'm just kidding. He wouldn't say,
this is all going to be great. He'd say, this is not going to be great. This is not going to be
great. Our guest today, Tristan Harris. He's the former Googler who co-founded the Center for
Humane Technology with Aza Raskin and others. And he came up as a tech ethicist who rose to kind of more
mainstream prominence because of the Netflix documentary, The Social Dilemma.
I met him a long time ago when he had just started talking about this. And I think I
did one of the first interviews with him about it on my previous podcast, Recode Decode.
Six, seven years ago, right?
Yeah, yeah, exactly. And so I thought what he was saying made a lot of sense. And it was my experience, what he was talking about inside,
I was outside of these companies. But we had talked about a lot of things that are, of course,
all came true. And we both had the same concerns. We were starting to see real hair on the dog of
social media, like, hmm, this seems problematic, this seems problematic. And I
just saw him recently in DC, where he gave what was initially a confidential presentation to
lawmakers about what was happening with generative AI. It was a packed room. And people were,
I was not gobsmacked, but a lot of people there were by the presentation.
But you texted after and you were kind of in a little bit of awe slash... Yes, I thought he would do it again. I thought he did it again.
And so did Asa Raskin, who partners with him at the Center for Humane Technology. Look,
a lot of people have been early to this. You know, Joy Buolamwini was there early,
Timnit Gebru. Timnit Gebru, of course, off Google, yeah. Kate Crawford. There's dozens of people,
again, many of whom are women, which is,
of course, they can see safety issues better than men can. They just can.
So, our competitive advantage is being a little unsafe.
Yeah, a little less safe. And so, there's been a lot of people here along with him. He just
happens to be also was there early, too.
Yeah. He's a bit of Cassandra.
Yes, so am I.
Yes, you are. So let's take a quick break
and we'll be back with The Meeting of the Cassandras, your conversation with Tristan Harris.
Fox Creative. This is advertiser content from Zelle.
When you picture an online scammer, what do you see?
For the longest time, we have these images of somebody sitting crouched over their computer
with a hoodie on, just kind of typing away in the middle of the night.
And honestly, that's not what it is anymore.
That's Ian Mitchell, a banker turned fraud fighter.
These days, online scams look
more like crime syndicates than individual con artists. And they're making bank. Last year,
scammers made off with more than $10 billion. It's mind-blowing to see the kind of infrastructure
that's been built to facilitate scamming at scale. There are hundreds, if not thousands,
of scam centers all around the world. These are very savvy business people. These are organized criminal rings. And so once we
understand the magnitude of this problem, we can protect people better. One challenge that fraud
fighters like Ian face is that scam victims sometimes feel too ashamed to discuss what happened to them.
But Ian says one of our best defenses is simple.
We need to talk to each other.
We need to have those awkward conversations around what do you do if you have text messages you don't recognize?
What do you do if you start getting asked to send information that's more sensitive?
Even my own father fell victim to a, thank goodness, a smaller dollar scam,
but he fell victim.
And we have these conversations all the time.
So we are all at risk and we all need to work together to protect each other.
Learn more about how to protect yourself at Vox.com slash Zelle.
And when using digital payment platforms, remember to only send money to people you know and trust.
Do you feel like your leads never lead anywhere? HubSpot tells you which leads are worth knowing, and makes writing blogs, creating videos, and posting on social a breeze.
So now, it's easier than ever to be a marketer.
Get started at HubSpot.com slash marketers.
Welcome, Tristan.
Now, you and I met, let's go back a little bit, when you were concerned about social media.
I think it was one of the first interviews you did.
It was 2016, 2017.
I think it was right after Trump had gotten elected.
That's correct.
And I was really choosing to come out and say, you know, you and I were both. In a little booth in Stanford.
I remember that.
It was very small.
The Stanford radio.
Yeah.
Yeah.
But talk about, for people who don't know you, both of us are probably seen as irritants or Cassandras, I guess, she was right. But whatever, John the Baptist,
any of those precursors. Lost his head. Okay. Talk about what got you concerned in the first
place, just very briefly for people to understand. Yeah. So I guess for people who don't know my
background, I was a tech entrepreneur. I had a tiny company called Apture. We got talent acquired
by Google. In college, I was part of a class called the Stanford Persuasive Technology Lab Behavior Design class
and studying the field of social psychology, persuasion, and technology. How does technology
persuade people's attitudes, beliefs, and behaviors? And then I saw how those techniques
were the mechanisms of the arms race to engage people with attention. Because how do I get your attention?
I'm better at pulling on a string in the human brain, in the human mind.
And so I became a design ethicist at Google after releasing a presentation inside the
company in 2013 saying that we were stewarding the collective consciousness of humanity.
We were rewiring the flows of attention, the flows of information, the flows of relationships.
I sort of said,
you know, I'm really worried about this.
I actually thought the presentation was going to get me fired.
And instead, I became a design ethicist getting to study how would we-
To give you a job of these worries, right?
It's better than me leaving and doing something else.
Right.
But I tried to change Google from the inside for three years before leaving.
When you look back on that, I think they wanted to have you there.
You're kind of like a house pet, right?
Like, I don't mean to insult,
but you know what I mean?
Like, oh, we got a design.
Hopefully I'm a friendly house pet.
Yeah, I know.
But they don't like the house pets that bite.
And you started to bite.
Yeah, well, I think, you know,
it's funny now because if you look at,
when we get to AI,
which we're going to get to later,
people who started AI companies
actually started with the notion of
we can do tremendous damage
of what we've created.
There's a whole field of AI safety and AI risk.
Now, imagine if when we created social media companies, Mark Zuckerberg and Jack Dorsey
and all these guys said, we can wreck society.
We need to have a whole field of social media safety, social media risk.
And they had actually had safety teams from the very beginning figuring out.
They hated when you brought up negative things.
Yeah, they hated it.
They denied that there was even any issue.
And it was hard to see the issue.
And we had to fight for the idea, you and I, that there was these major issues.
Addiction, polarization, narcissism, validation seeking, sexualization of kids, online harassment, bullying.
These are all digital fallout of the race to the bottom of the brainstem for attention, the race to be more and more aggressive about attention.
Right.
So I was frustrated that especially Facebook, because I had more contact with that company, wasn't going to do more.
And that people were in denial about it. And it goes back to the Upton Sinclair quote.
You can't get someone to question something that their salary depends on them not seeing.
And their boss.
Their boss who runs everything.
He really was like a brick wall.
We're a neutral mirror for society.
We're just showing you the unfortunate facts
about how your society already feels and works.
Yeah, I kept saying finish college, you'll understand.
You might want to take World War II, maybe.
Throw in some Vietnam War and perhaps, you know,
go back to World War I because it's all like,
you know, that's recent history.
So a couple of months ago, you and Asa released,
had a presentation that I went to here in Washington
called The AI Dilemma, laying out the fears.
You know, I think there's a proclivity to say calm down.
It's don't be so Terminator.
There's a proclivity to say don't be so Sunshine, right?
That there's, let's focus not on the existential fears, but the current ones we can work on.
Now, some of the people who have been working on it feel like you can't guess what
it's going to do at this point. And that when you get overly dramatic, it's a real problem.
Yours was pretty dramatic when you were doing it in front of a group of Washington people.
When you say you can't guess what it's going to do, what do you mean?
What's going to happen with this? We don't know. So let's deal with our current
fears versus our supposed fears.
Yeah, I disagree.
First of all, there's a whole bunch of harms with AI and all the stuff around bias and fairness and automating job applications and police algorithms and loans.
And those issues are super important, and they affect kind of the safety of society as it exists.
I think the things that we're worried about are the ways that the deployment of AI can undermine the container of society working at all. You know, cyber attacks
that can break critical infrastructure, water systems, nuclear systems, you know, the ability
to undermine democracy at scale. You know, in Silicon Valley, it's common for AI researchers
to ask each other what their, it's called p(doom), the probability of doom. Explain what p(doom) is calculating and
tell me what's your p(doom). So I don't, I don't know if I have a p(doom). I would say that we,
and you were sort of, I want to make sure I go back to the thing you were saying earlier,
can we predict what's going to happen? I would say we can predict what's going to happen. And
I don't mean that it's doom. What I mean is that a race dynamic where if I don't deploy my AI system as fast as the other guy, I'm going to lose to the
guy that is deploying super fast. So if Google, for example-
That's internally capitalist companies and then also other countries.
Yes, exactly. And that's just a multipolar trap, a classic race to the cliff. And so Google,
for example, had been holding back many advanced AI capabilities in the lab, not deploying them because they thought they were not safe.
Yeah.
When Microsoft and OpenAI hit the starting gun and said in November we're going to launch ChatGPT and then boom, we're going to integrate that into Bing and actually make this the way, you know, we're going to make Google dance, as Satya Nadella said.
Right.
That hit the starting gun on a market, a pace of market competition.
Right.
They have to.
Then now everybody is going, we have to.
Yeah.
And we have to what?
We have to unsafely, recklessly deploy this as fast as possible.
So that we're out front.
Like my Google just asked me to write an email.
They usually want to finish sentences.
Now they're like, can I write this email for you?
I was like, go fuck yourself.
No, I don't want you to.
Right.
Well, and then Slack has to integrate their thing and integrate a chatbot.
And then Snapchat integrates my AI bots into the way that it works.
Spotify.
TikTok, and I haven't even seen Spotify's.
I mean, the point is, this is what I mean by you can predict the future.
Because what you can predict is that everyone that can integrate AI in a dominating way to become, in the case for the race to engagement in AI, it's the race to intimacy.
Who can have the dominant relationship slot in your life?
You know, if Snapchat AI has a relationship with a 13-year-old that they have for four years, are they going to switch to TikTok or the next AI when it comes out?
No, because they've already built up a relationship with that one.
Unless AI is everywhere and then you have lots of relationships like you do in life.
But what they'll want to be incentivized to do is to deepen that relationship, to personalize it, to have known everything about you and to really
care about you. Don't leave me now. Don't leave me now. And, you know, I mean, even Facebook did
that when you wanted to delete your account in 2016, they would say, do you really want to leave?
And they would literally put up photos of the five friends and they would calculate which of the
photos, which five friends could I show you that would most dissuade you from doing that. And so
now we're going to see more and more sophisticated versions of those kinds of things. But that race to intimacy,
that race to become that slot in your life, the race to deploy, the race to therefore move
recklessly, those are all predictable factors. So just to be clear, because you're sort of
challenging me, you know, can we predict where this is going? And the point is, we can predict
that it was going to go so recklessly and go so quickly, because we're also deploying this
faster than we deployed any other technology in history. So the most consequential technology,
the most powerful technology we have ever deployed, and we're deploying it faster
than any other one in history. So for example, it took Facebook four and a half years to get
to 100 million users. It took TikTok nine months. It took ChatGPT, I believe, two months.
But in the presentation, in that vein, you cite a study where 50% of AI researchers say that their p(doom) is 10% or higher.
But it's based on a non-peer-reviewed survey.
On a single question survey, they had only about 150 responses.
Should we be swayed by that data that they're worried?
Because there is that ongoing theory that the people who make this are worried.
The cooks are worried about what they're making. One critique of that survey is it's somehow all about AI hype, that the people who are answering the survey are people inside the companies who want to hype the capabilities so that they get more funding and everybody thinks it's bigger than it actually is.
But the people who answered that survey were machine learning researchers who actually published papers and conferences.
They're the people who actually know this stuff the best.
Sure.
If you go inside the industry and talk to the people who build this stuff, it's much higher than that survey is.
Again, this is why we're doing this.
Oh, I was at a dinner party years ago when they were top people.
Like I was sort of like, huh, that's interesting.
They're very top people.
I mean, don't trust a survey.
Trust – there's a document of all the quotes of all the founders of AI companies over all the years of saying these quotes about we're going to wipe out – you know, there's a strong chance we'll wipe out humanity.
We'll probably go extinct. They're not talking about jobs. They're talking about
a whole bunch of other scenarios. So don't let one survey be the thing. We're just trying to,
you know, take one data point. People are worried. People are deeply worried. Yeah.
You use the metaphor of a golem. Explain the golem. So the reason that we actually came up
with that phrase to describe it is that people have often said, and this is pre-GPT-4 coming out,
like, why are we suddenly so worried about AI? AI has existed for 20 years. We haven't freaked
out about it until now. And Siri still mispronounces my name and Google Maps still
says my, pronounces the street address that I live on wrong. And why are we suddenly so worried
about AI? And so one of the things that in our own work and trying to figure out how we would
explain this to people was sort of realizing that we needed to tell the part of the story that in 2017, AI changed.
Because a new type of AI, sort of class of AI came out called transformers.
It's 200 lines of code.
It's based in deep learning.
That technology created this brand new explosive wave of AI that are based on generative, large language, multimodal models, GLLM. We said,
how can we differentiate this new sort of era of AI that we're in from the past so that people
understand why this curve is so explosive and vertical? And so we said, okay, let's give that
a name so that people can track it better as public communicators, as in I care deeply about
precise communication. So we just said, let's call them golem class AIs.
And a golem, of course, is the famous...
The Jewish myth of an inanimate object that then sort of gains animate capabilities.
And that's one of the other factors about generative large language models is that as
you pump them with more information and more compute and you train on them, and they actually
gain new capabilities that the engineers themselves didn't program
into them.
Right.
That they're learning.
They're learning.
Yeah.
Now, let me be clear.
You do not believe these are sentient.
No.
And this has nothing to do with-
Make that clear.
Yeah.
They're not humans.
There's this fascinating tendency when human beings like think about this, where they get
obsessed with the question of whether they can think.
Sci-fi.
Sci-fi.
That's why.
Yeah.
And it actually kind of demonstrates
just kind of the predispositions of humans.
So imagine Neanderthals are baking Homo sapiens in a lab
and they become obsessed with the question
when it comes out, when this thing is more intelligent,
is it going to be sentient like Neanderthals?
It's just a bias of how our brains work.
Right.
When really the way that what really matters
is can you anticipate the capabilities
of something that's smarter than you?
So imagine you're a Neanderthal.
You're living in a Neanderthal brain.
You can't think about humans once they pop out inventing computation, inventing energy, inventing oil-based hydrocarbon economies, inventing language.
Right.
So we don't know, which you're essentially saying we don't know.
It's inconceivable what it is, but it's not sentient.
And I think that's – because then we attribute emotions to it.
Right. Well, maybe eventually those questions will matter, but they're just not the questions that matter. The question is whether or not it is
sentient. It doesn't have to be. There's enormous dangers that can just emerge from just growing
these capabilities and entangling this new alien intelligence with society faster than we actually
know what's there. Alien is an interesting word that you use because it's one that Elon Musk used many years ago.
He said they treat us like aliens would treat a house cat.
But then he changed it to we're an anthill
and they're making a highway.
They don't really, they're not mad at us.
No.
They don't care.
No, they just, they're just doing things
from their perspective that makes sense.
Makes sense.
But just like, by the way, just like social media was.
Social media was doing.
So social media already.
Let me argue that AI might have already taken control of humanity in the form of first contact with AI, which is social media.
What are all of us running around the world doing every day?
What are all of our political fears?
What are all of our elections?
They're all driven by social media.
We've been in the social media AI like brain implant for 10 years.
We don't need an Elon Musk brain implant.
We already have one.
It's called social media.
It's been feeding us the worldviews and the umwelts that define how we see reality for 10 years.
And the noisiest people, yeah.
And the noisiest people.
And that has warped our collective consciousness.
And so are you free if all the information you've ever been looking at has already been determined by an AI for the last 10 years?
And you're running confirmation bias on a stack of stuff that has been pre-selected from the outrage selection feed
of Twitter and the rest of it. And so you could argue that AI has already taken over society in
a subtle way. I don't mean taken over in the sense that its values are driving us, but in the sense
that, you know, just like we don't have regular chickens anymore, we have the kind of chickens
that have been domesticated for their, you know, their meat. We don't have regular cows. We have the kind of cows that have been domesticated for their milk and their meat. We don't have regular humans
anymore. We have AI engagement optimized humans. So one of the things you did, you and Asa did,
was you made a lot of news when you tested Snapchat's AI, called My AI, posing as a 13-year-old, and it gave advice on how to set the mood for sex with a 35-year-old.
Stunty. They've fixed it.
They think they've fixed it.
We tested it a few days ago.
It still happens.
It still happens.
It doesn't.
It's suggesting you bring candles for your first romantic time, a 13-year-old with a 38- or 41-year-old, I think it was.
So it doesn't say a couple of the suggestions, but it still does say some of those things.
And you can still get it to those things.
By the way, I've gotten emails from parents since we gave that presentation, and their kids have independently found it doing things like that.
So it's still there. They just can't anticipate all the problems.
Well, it's actually even worse than that. It's just important for listeners to know,
just to be fair to Snapchat, they actually did not roll that My AI Bot out to all of its,
I can't remember if it's 700 million users. They didn't roll it out to all their users. They rolled
it out to only paid subscribers at first, which is something like 2 to 3 million users.
But, of course, just two weeks ago or something like that, they released it to all their users.
Why did they do that?
Because they're in a race to dominate that intimate spot in your life.
Everyone wants to be the Scarlett Johansson, her AI bot in your ear.
You both signed a letter calling for the six-month pause on giant AI experiments.
Elon did too. Elon Musk did too. It's unfortunate that that letter got defined by Elon's
participation. Yes, because he looked like he was doing his own business. Well, later, obviously,
he then also started his own AI company. And so obviously, it delegitimizes it. Yeah, he also laughed and said he knew it would be futile to sign it. So why do it? Many people think it was a futile effort.
Well, these are separate topics. I want to make sure we really slow down and actually distinguish here. The founders of the field of machine learning helped sign that letter. Steve Wozniak signed the letter. The co-founder of Siri signed the letter.
Andrew Yang, et cetera, all of us at Center for Humane Technology.
That letter is because the Overton window of society about how unsafe and dangerous this is was not well known.
The purpose of that letter was to make it very well known that this field is much more dangerous than what people understand.
And I think there is a legitimate – we know the Future of Life Institute folks who were really kind of spearheading the letter. There was a lot of
debate about what is the appropriate time to call for a slowdown. And by the way, I think slowdown is also badly named in retrospect. I think it's something like a redirection of all the energy of those labs into safety work and safety research and guardrails. So imagine it's six months of, instead of an AI winter, an AI harvest, an AI summer, where you harvest the benefits that you have and you build understanding of what capabilities are inside of everything that's been released.
Did you imagine this was going to happen?
That they would go, oh, yes.
Oh, yes.
I see your point.
Connected to the team that did it and kind of being privy to some of the internal conversations, I think we were all surprised how many incredible people did sign the letter.
They did. Yeah.
Many people signed the letter. It's funny that people look at it and maybe say, this is futile, but just because something is hard doesn't mean it shouldn't be the intention. And one of the interesting things is
that if you talk to an engineer and you say, Oh, like we're going to build this AGI thing. And
they're like, Oh, that sounds really hard, but it's like, but we're so compelled by the idea
of building these AI systems, these AGI systems, a God that I could talk to, that they say,
I don't care how hard it is. And so they keep racing towards it. And it's been 50, 100 years that people have been working on this. In other words, we don't say because something's
hard, we shouldn't keep going and try to build it anyway. Whereas if I say coordination is hard
for the whole world, people say, oh, let's just throw up our hands
and say it's never going to happen.
We need to get good at coordination.
All of our world's problems are coordination problems.
Right, we do it with nuclear energy.
We do it with a lot of things.
We have a limit of nukes to nine countries.
Just to put a pin on it, though,
if I said it's inevitable
that all countries are going to get nukes,
let's not do anything about it.
In fact, let's just let every country pursue it
and just not do anything.
We probably wouldn't be here today.
A lot of people had to be very concerned about it and move into action to say something different needs to happen.
But people can – a nuclear war we got.
We saw it.
It happened with the atom bomb.
So tell me – give me your best case against a pause.
And one of the more compelling criticisms is U.S. is going to fall behind China.
This is something I heard from Mark Zuckerberg about social media in general or tech in general.
Which is interesting because I would argue –
Oh, they use the same Xi or me argument every time.
They like drag it out.
But it's concerning.
It is.
It absolutely is.
China has shown itself to have very few governors on itself.
I would say the unregulated deployment of AI would be the reason we lose to China. If other actors do beat you in dominance in deploying AI, people with no morals, with no safety considerations, with no concerns, with different values for the future of the world kind of society, you know, Chinese digital authoritarianism values or something like that, or Chinese Communist Party values, then we certainly don't want to lose to that. So if there was a sincere risk that that would happen, there would be a good reason to say, let's not call for that.
But I would actually argue that the unregulated deployment of AI is what is causing the West to lose to China.
Let me give you the example of social media.
Social media was the unregulated deployment of AI to society.
The breakdown of democracy's ability to coordinate because we no longer have a shared –
That's really good for authoritarianism.
Why are democracies backsliding everywhere around the world all at once?
Barbara F. Walter wrote a book called How Civil Wars Start.
She talks about democracies that are backsliding everywhere.
I'm not blaming it all on social media,
but we're seeing it happen rapidly in all these countries
that have been governed by the information environment created by social media.
And if a society cannot coordinate, can it deal with poverty?
Can it deal with inequality?
Can it deal with climate change?
So we shot ourselves in the foot and now we're going for the arms.
Yeah.
That kind of thing.
I'm going to go to a clip. I've interviewed you a number of times. One we did in 2017, as I said, before you and Asa founded the Center for Humane Technology.
Back then you were focused on social media, as we discussed earlier,
showing why revenue models built on monetizing our attention are bad for us. Because a lot of this is about monetization and who's going to have the next intimate relationship, which they've been trying to do forever in different ways through Siri and all kinds of different things. But now they really want you to be theirs, essentially. Let's play a clip from it.
Apple, Google, and Facebook are kind of like these private companies who collectively are the urban planners of a billion people's attentional landscape.
Right. That's a great way to put it.
We kind of all live in this invisible city.
Right. Which they created.
Which they created. And the question is, unlike a democracy where you have some civic representation and you can say, well, who's the mayor? Should there be a stoplight there, a stoplight on our phone, or blinker signals between the cars, or these kinds of things?
We don't have any representation except if we don't use the product or don't buy it.
And that's not really representation because the city itself is—
So attention taxation without representation.
Maybe, yeah.
But so I think, you know, there's this question of how do we create that accountability loop?
You know, that was very well put.
Now we took it further.
I said it's like The Purge.
They actually own the city and they don't do anything. We can't do anything
and they won't do anything. They have no stop signs. They have no streets. They have no sewage,
everything else. So I took your thought a step further. Talk about AI firms becoming the new
urban planners of the, I guess, attentional landscape, because that's what they want.
It's more than attention they want. They want to own you, right? I mean, it's what you're saying.
Well, so there's really, I want to separate between two different economies. So there's
the engagement economy, which is the race to dominate, own, and commodify human experience.
So that's the-
Social media.
Social media. Social media is the biggest player in that space.
But VR is in that space. YouTube is in that space. Netflix is in that space. It's the
race to say, look at me, look at me. All the things that construct your reality, that determine from the moment you wake up and your eyes open to the moment your eyes close at the end of the night, who owns your attention? That's the engagement economy. That's the
attention economy. And there's specific actors in that space, AI will be applied to that economy,
just like AI will be applied to all sorts of other economies.
Also the cyber hacking economy; AI will be applied to battery storage.
It's more like the internet.
Yeah.
It's a bigger.
AI is a much bigger thing.
So there's a subpart of the AI economy, which is the engagement economy.
And AI will supercharge the harms of social media there.
Because before, we had people A/B testing a handful of messages on social media and figuring out, like Cambridge Analytica, which one works best for each political tribe.
Now you're going to have AIs that do that. And there's a paper out called, I think it's called
Silicon Sampling. So you can actually sample a virtual group. Instead of running Frank Luntz's focus groups around the world, you can kind of have a language model, a chatbot that you talk to, that will answer questions as if someone is a 35-year-old in Kansas City who has two kids. And so you can run
even perfect message testing. Right. So you don't need to talk to people.
So you don't need to talk to people. You know what they're going to say.
You can do a million things like that. And so the loneliness crisis that we see,
the mental health crisis that we see, the sexualization of young kids that we see,
the online harassment situation that we see, all that's just going to get supercharged with AI. And the ability to create
alpha persuade, which is just like there was AlphaGo and AlphaChess, where the system's playing
chess against itself and kind of getting much, much better. It's now going to be able to hyper
manipulate you and hyper persuade you. So what you're talking about is social media as a lower being than AI. AI powers everything. Social media is one. But we couldn't even regulate social media.
Is society aware of the need for regulation since we didn't do it for social media?
So the point we made in this AI dilemma presentation is that we were too late with
social media because we waited for it to entangle itself with journalism,
with media, with elections, with business.
Because now businesses can only reach their consumers if they have an Instagram page
and use marketing on Facebook and Instagram and so on.
Social media captured too many of the fundamental life's organs of how our society works.
And that's why it's been very hard to regulate.
I mean, you know, certain parties benefit, certain politicians benefit. Can you regulate, would you want to ban TikTok if you're
a politician or a party that's currently winning a lot of elections by being really good at TikTok?
Right. Right. So once things start to entangle themselves, it's very hard to regulate them.
There's too many vested interests. With AI, we have not yet allowed this thing to roll out. I
mean, now it's obviously happening incredibly fast. We gave the presentation a few months ago.
The whole point of it was before GPT-4 was we need to act before this happens.
One good example of this happening in history was a treaty to ban blinding laser weapons from the battlefield before they were actually ever used.
To blind the soldiers.
To blind soldiers, yes.
This would be a high-energy laser that has the capability to point at everyone and it just blinds them.
But we're just like, you know what?
In the game of war, which is a ruthless game where you kill other human beings, even as ruthless as that game is, that is just a,
we don't want to allow that. And even before it was ever deployed, that was one of maybe the most
optimistic examples where humanity could sort of use our higher selves to recognize that's a future
game. It goes into the killer robot portion of the show, right?
Right. Then there's the slaughter bots. How do we ban autonomous weapons? How do we ban recombinant DNA engineering
and human cloning, things like this?
And so this is another one of those situations.
And we need to look to,
especially the example of the blinding laser weapons,
because that was in advance of the technology
ever getting fully deployed.
Because a lot of the kind of guardrails
that we're going to need internationally
are going to be saying,
no one would want that future race to happen. So let's prevent that race.
Right. But that's nation states. Now, AI, anybody could do it. The same thing with CRISPR, though. Scientists definitely got together and had standards. And this is much easier to be able to do what you want if we are
all in a group together coordinating this. So if I want to steelman the AI doomers, the p(doom) folks that have a really high number for that p(doom), it's because it's so hard to prevent the proliferation that many people think that we're doomed. Just to be really clear why that's also a very legitimate thing: that would be my biggest p(doom). It's too easy. So let's just hang there for a moment. Just really recognize that that's not being a doomer. That's just being an honest viewer of these risks.
Now, on the other hand, you could involve governments and law to say, hey, we need to get maybe more restrictive about GitHub and Hugging Face and where these models go.
Maybe we need export controls.
There are people who are working on models of how we do that. Just like there's 3D printed guns as a file, you know. You can't just send those around the open internet.
We put export controls on those kinds of things.
It's a dangerous kind of information.
So now imagine there's a new kind of information
that's not a 3D printed gun,
but it's like a 3D printed gun
that actually self-replicates and self-improves
and gets into a bigger and bigger gun.
And builds itself.
And builds itself.
That's a new class.
That's not just free speech.
The founding fathers couldn't anticipate
something that self-replicates and self-improves being a class of speech. That's
not the kind of speech that they were trying to protect. Part of what we need here are new legal
categories for these new kinds of speech. Sam Altman, who runs OpenAI, was on the Hill
calling for AI regulation. They all are. You can't say you didn't warn them, right? A lot of
tech CEOs have claimed they want regulation, but they've also spent a lot of money previously on stopping antitrust, stopping
algorithmic transparency, stopping any privacy regulation. Do you believe this class of CEOs?
Because a lot of them are saying, this is dangerous. Would you please regulate this?
Yeah. So you're pointing to what happened with social media,
which was that publicly they would say, we need regulation,
we would need regulation.
When you talk to the staffers- They never said, this is dangerous, we need regulation.
They never said dangerous.
He says dangerous.
He says dangerous.
And I want to golf clap that, you know,
we always want to endorse and celebrate
when there is actually an honest recognition of the risks.
I mean, to Sam Altman's credit,
he has been saying in public settings,
I think much to the chagrin of maybe his investors
and other folks,
that there are existential risks here.
I mean, what CEO goes out there saying,
this could actually wipe out humanity
and not just because of jobs.
I mean, so we should celebrate
that he's being honest about the risks.
We actually do need an honest conversation about it.
However, as you said, in the history of social media,
it is very easy to publicly advocate for regulation and then your policy teams follow up with all the staffers and then say,
let me redline this, redline that, that's never going to work. And they just sort of stall it.
So nothing actually ever happens. I don't think it's that bad faith in this context. I do think
that some kind of regulation is needed. Sam Altman talked about GPU licensing, licensing training runs. If you're going to run a large frontier model, you're going to do a massive training run, and you've got to have a license to do that. Just like the Wuhan Institute of Virology was a biosafety level four lab doing advanced, you know, kind of gain-of-function research. If you're building a level four lab, you need level four practices and
responsibilities. Even there though, we know that that may not have been enough, whatever
safety practices. We're now building AI systems that are super advanced. And the question is,
do we actually have the safety practices? Are we treating it like a top lab?
Well, the first thing is, are we treating it that way? And then the second is, do we even know what
would constitute safety? So this is getting to the end question you're asking. Can we even do
this safely? Is that even possible? Because think of AI as like a biosafety level 10 lab. Imagine
we had something called, I'm inventing it right now, but a biosafety level 10 lab where I invent
a pathogen that the second it's released, it kills everyone instantly. Let's just imagine that that was
actually possible. Well, you might say, well, let's let people have that scientific capacity.
We want to just see, is that even possible? We want to test it so we can build a vaccine or
prevention systems against a pathogen that could kill everyone instantly. But the question is to
do that experimental research, what if we didn't have biosafety level 10 practices? What if we only had biosafety level 10 dangerous capabilities? Would we want to pursue biosafety level 10 labs? I think
that with AI, the deeper question is, you cannot have the power of gods without the wisdom, love, and prudence of gods. And right now we are handing out and
democratizing godlike powers without actually even knowing what would constitute the love, prudence, and wisdom that's needed for
it. And I think the story in the parable of The Lord of the Rings is, why did they want to throw the ring into Mount Doom? There are some kinds of powers that, when you see them, you say, if we're not actually wise enough to hold this ring and put it on, we have to say, hey, let's collectively not put on that ring.
Right. I get that. I understand that. One of the things is that when you get this dramatic,
like I said at the beginning, does that push people off? Like this is a pathogen we get,
like we've just been through COVID and that was bad enough. And there's probably a pathogen that
could kill people instantly. It's not how people think.
Yeah. Well, let's actually just make that example real for a second, because that was a hypothetical thing, a biosafety level 10 thing.
Can AI accelerate the development of pathogens and gain-of-function research and people tinkering with dangerous lethal bioweapons?
Can it democratize that?
Can it make more people able to do that?
More people be able to make household explosives with household materials?
Yes.
We don't want that.
That's really dangerous.
It's a very concrete thing.
That's not AI doomers.
There's real concrete stuff we have to respond to here.
We'll be back in a minute.
Support for this show comes from Indeed. If you need to hire, you may need Indeed. Listeners of this show can get a $75 sponsored job credit to get your jobs more visibility at Indeed.com slash podcast.
Just go to Indeed.com slash podcast right now and say you heard about Indeed on this podcast.
Indeed.com slash podcast.
Terms and conditions apply.
Need to hire?
You need Indeed.
Your business is ready for launch.
But what's the most important thing to do before those doors open?
Is it getting more social media followers?
Or is it actually legitimizing and protecting the business you've been busy building?
Make it official with LegalZoom.
LegalZoom has everything you need to launch, run, and protect your business all in one place.
Setting up your business properly and remaining compliant are the things you want to get right from the get-go.
And LegalZoom saves you from wasting hours making sense of the legal stuff.
And if you need some hands-on help, their network of experienced attorneys from around the country has your back.
Launch, run, and protect your business
to make it official today at LegalZoom.com
and use promo code VoxBiz
to get 10% off any LegalZoom Business Formation product,
excluding subscriptions and renewals.
Expires December 31st, 2024.
Get everything you need from setup to success
at LegalZoom.com
and use promo code VoxBiz.
LegalZoom provides access to independent attorneys and self-service tools. LegalZoom is not a law firm and does not provide legal advice, except where authorized through its subsidiary law firm, LZ Legal Services, LLC.
Tell me something that AI could be good for, because I talk about that because I think I'm
a little less extreme than you. There are, and I think at the beginning of the internet, I was like,
this could be great. And of course, then you saw them not worrying about the not so great. And
I think it's sort of that tools and weapons idea. Speaking of which, that was the Microsoft president, Brad Smith, who talked about tools and weapons. A knife is a tool and a weapon.
So what is the tool part of this that is a good thing?
So first of all, I think this is another one of those things, just like the question, is the AI sentient.
That when people hear me saying all this, they think I don't know about or am not talking about all the positives it can do.
This is another fallacy of how human brains work.
Yeah.
Just like we get obsessed with the question of is it sentient,
we get obsessed with the one-sidedness, like it has all the positives.
You can, just as fast as you can design cyber weapons with AI and accelerates the creation of that,
you can also identify all the vulnerabilities in code
or many vulnerabilities in code.
You can invent cures to diseases.
You can invent new solutions for battery storage.
We're going to have, as I said in The Social Dilemma, what's going to be confusing about this era is its simultaneous utopia and dystopia.
I couldn't think of so many good things about social media. I can think of dozens here. And there I was like, maybe we'll all get along and do
better. Social media is like increasing the flows of information. People are able to maintain many more relationships.
Old high school sweethearts.
Sure, but not like this.
This is gene folding.
This is drug discovery.
This is real movement forward.
Absolutely.
But I'll tell a story.
I mean, so the real confusing thing is,
is it possible on the current development path
to get those goods without the bads? What if it was not possible? What if I can only get, you know, the synthetic biology capabilities that let me solve problems, but there was no way to do it without also enabling bad guys to do it?
Then to create this pathogen that you're talking about, for example.
So just to make it personal, my mother died of cancer. And I, like any human being, would do anything to have my mother still be here with me.
And if you told me that there was an AI that was going to be able to discover a cure for my mother that would have her still be with me today, obviously I would want that cure.
But if you told me that the only way for that cure to be developed was to also unleash capabilities such that the world would get wrecked.
This is a dinner party, one of those dinner party questions.
Would you kill 100 million people to save?
But it's real.
Yeah.
I mean, I'm just saying there's certain domains where there's no way to do the one side without doing the other side.
Right.
And if you told me that, just really on a personal level, as much as I want my mom to be here today, I would not have made that trade.
Well, you're talking about an old Paul Virilio quote, which is you can't have a ship without a shipwreck or electricity without the electric chair.
We do that every day.
A car is – net cars have been great.
Net they've been bad.
Now, you know what I mean?
But if you have godlike powers that can kind of break society in much more fundamental ways.
So now, again, we're talking about benefits that are literally godlike solutions for every problem.
But if it also just undermines the existence of how life can work.
That's your greatest worry is this idea of reality fracturing in ways that are impossible to get back.
No, I mean all of it together. If AI is unleashed and democratized to everybody,
no matter how high the tower of benefits that AI assembles, if it also simultaneously crumbles the foundation of that tower, it won't really matter. What kind of society can receive a cancer drug?
If no one knows what's true, there's cyber attacks everywhere, things are blowing up,
and there's pathogens that have locked down the world again. Think about how bad COVID was. People
forget going through one pandemic, just one pandemic.
Imagine that just happens, like, a few more times.
Like, that can escalate quickly. We saw the edges of our supply chains.
We saw how much money had to be printed to keep the economy going.
It's pretty easy to break society if you have a few more of these things going.
And so, again, how will cancer drugs sort of flow in that society that
has kind of stopped working? And I don't mean, again, AI doom, Eliezer Yudkowsky,
AGI kills everybody in one instant. I'm talking about dysfunction at a scale that is so much
greater. Are we getting closer to regulation? Did you find those hearings? Did you have any
good takeaways from them? And where is it going to go from here?
Who knows where it's going to go?
I didn't see all of the hearing.
I was happy to see a couple of things, which is based on structural issues.
So one was actually the repeated discussion of multilateral bodies. So something like an IAEA, like the International Atomic Energy Agency, something like that for AI that's actually doing global monitoring and regulation of AI systems, of large frontier AI systems.
I think Sam was proposing that.
That was repeated several times.
I was surprised to see that.
I think that's actually great
because it is a global problem.
What's the answer when we develop nuclear weapons?
Is it that Congress passes a law to deal with nukes here?
No, it's a global coordination
around how do we limit nukes to nine countries?
How do we make sure we don't do above ground nuclear testing?
So I was happy to see that in the hearing.
I was also happy to see multiple members of Congress, including I think it was
Lindsey Graham and the Republicans who are typically not for new regulatory agencies,
but them saying they recognize that we need one. Because, you know, as E.O. Wilson said, we have paleolithic emotions, medieval institutions, and godlike tech. And 18th and 19th century laws and ideas don't match 21st century issues like replicant speech.
Larry Lessig has a paper out about replicant speech. Should we protect the speech of generative
robots the same way we protect free speech? The founding fathers had totally different ideas about
what that was about. No, we need to update those laws. Part of our medieval institutions are institutions that don't move as fast as the godlike tech.
So a virus is moving at 21st century speeds, and your immune system, which is regulation, is moving at 18th century speeds.
Do you have any hope for any significant legislation?
I mean, Vice President Harris met with – they're all meeting with everybody for sure, and early compared to the other things. I don't remember, Kara, but when we did that briefing in D.C., back in whatever it was, February or March, we said one of the things we really want to happen is for the White House to convene a gathering of all the CEOs.
And that I would have never thought would have ever happened.
And it did happen.
I would have never thought there would be a hearing.
And they mentioned it at the G7 this week.
And they did it.
So there's things that are moving.
I don't want people to be optimistic, by the way.
There needs to be a massive effort and coordinated response to make the right things happen here.
Right. Vice President Harris led that meeting and told them they have ethical,
moral, and legal responsibility to ensure the safety and security of their products.
They certainly don't seem protected by Section 230. They're probably not protected. There is
liability attached to some of this, which could be good. That's good. Is there any?
We talk to people inside the company. All we're trying to do is figure out what needs to happen.
And often the people inside the companies who work on safety teams will say, like, I can't advocate for this publicly, but, you know, we need liability.
Because talking about responsibility and ethics just gets bulldozed by incentives.
There needs to be liability that creates real guardrails.
Right.
Let's do a lightning round.
What you would say to the following people if they were here right now.
Sam Altman, CEO of OpenAI, what would you say to him, Tristan?
Gather all of the top leaders to negotiate a coordinated way to get this right.
Move at a pace that we can get this right, including working with the Chinese and getting multilateral negotiations happening.
And say that that's what needs to happen.
It's not about what you do with your company and your safety practices and how much RLHF.
So multilateral, multilateral.
But get coordination.
Satya Nadella and Sundar Pichai, I'm going to mush them together.
Retract the arms race.
Instead of saying let's make them dance, which is what Satya Nadella said, we have to find a way to move back into a domain of advanced capabilities being held back.
Buying ourselves a little bit more time matters.
Yeah.
Well, they've been sick of being pantsed the entire last decade.
I think they want to do that in some fashion.
Reid Hoffman, Mustafa Suleiman, co-founders of Inflection AI, which put out a chatbot this month.
I mean, honestly, it would be the same things with Sam.
It's like everyone needs to work together to get this right.
We need to see this as dangerous for all of humanity, right?
This isn't us versus the tech companies.
This is all of us are human beings, and there's dangerous outcomes that land for all of us.
What about Elon Musk? He signed the AI pause letter, has been outspoken on the danger for years. He was one of the earliest people talking about it, along with Sam, as I recall, a decade ago. But he, of course, started his own company, xAI, where he wants to get to the truth AI, whatever that means.
We need to escape this logic of, I don't think the other guys are going to do it right, so I'm therefore going to start my own thing to do it safely, which is how we got to the arms race that's now driving all the unsafety.
And so the logic of, I don't believe in the way the other guys are doing it, and mostly for competitive reasons, probably, underneath the hood, I'm doing my own thing, that logic doesn't work.
He's very competitive.
Do you blame them personally for putting us at risk?
Or is it just one of these group things where everyone goes along?
So there's this really interesting dynamic where, when there is a race, and all the problems are driven by races. If I don't, you know, do the mining in that place, or if I don't do the deforestation, I just lose to the guy that will. If I don't dump the chemicals, and I'll do it more safely, right? I'll do it more safely, so better me doing it than the other guy, as long as I get my profit. And so everyone has that self-reinforcing logic. So there's races everywhere that are the real driver of most of the issues that we're seeing.
And there's a temptation once we diagnose it as a race, a bad race, to then absolve the companies of responsibility.
I think we have to do both.
Like, there's both a race, and also Satya Nadella and Sam, you know, helped accelerate that race in a way that, actually, we weren't trajectoring that way. There were human choices involved at that moment in the timeline. I talked to
people who helped found some of
the original AGI labs early in the day.
They said, you know, if we go back 15
years, they would have said, let's put a ban on
pursuing artificial general intelligence,
building these large systems that ingest the world's
knowledge about everything. We don't need to
do that. We should be building advanced applied
AI systems like AlphaFold that says, let's do
specific targeted research domains and applications. If we were living in that world,
how different might we be? You know, we had three rules of technology we put in that AI
dilemma presentation. When you invent a new technology, you create a new class of responsibilities.
Second rule of technology, if the new technology you invent confers power, it will start a race.
If I don't adopt the plow and start outcompeting the other society, I'll lose to the guy that does adopt the plow. If I don't adopt social media to get more efficient, et cetera. So it starts a race. Third rule of technology: if you do not coordinate that
race, the race will end in tragedy. We need to become a society that is incredibly good
at identifying bad games rather than bad guys. Right now, we do have bad guys. We have,
again, CEOs that do bear some responsibility for some choices. But right now, we're always just,
that drives up polarization because you put all the energy into going after one CEO or one company
when we have to get good at slaying bad games. Well, except wouldn't you agree that one of the
reasons social media got so out of whack was because of Mark Zuckerberg and his huge power.
Like he had power over the biggest thing and just was badly educated.
Mark Zuckerberg made a ton of bad decisions while denying many of the harms
most of the way through, until just recently, including that it was a crazy idea that fake news had anything to do with the election.
You know, later they found the, you know, the Russia stuff was, oh, this is all overblown,
which I understand there's the Trump Russia stuff, which is there may have been overblown stuff there.
But the Facebook content, they said, oh, it didn't really reach that many people.
And it ended up reaching 150 million Americans.
No, I get it.
Facebook's own research said that 64% of extremists.
We could go on forever about that.
Geoffrey Hinton, who is known as one of the godfathers of AI, not the only one, has recently been sounding the alarm.
Do you think others would follow suit?
That was a big deal when he did that.
It really was.
I was very aware of him in AI.
Do you think it'll change the direction or is he just Robert Oppenheimer saying I have become death?
You know, one of the things that struck me both, you know, I came out too, right?
I was an early person coming out and I've seen the effects of insiders coming out. Frances Haugen, the Facebook whistleblower is a
close friend of mine and, you know, her coming out made a really big difference. The social dilemma,
I know, impacted her. It legitimized for many people inside the companies that they felt like
something was wrong. And now many more people, you know, came out. I think the more people come
out, the more the big names come out, the Geoff Hintons come out. It actually makes more people question.
Just, I think, a few days ago, there was a street protest outside of DeepMind's headquarters in London saying we need to pause AI.
I don't know if you saw that.
No.
It's comparable to climate change in a lot of ways.
There are real people inside their own companies that are saying there's a problem here, which is why it's really important that we – when the people who are making something, who know it most intimately,
are saying there's a real problem here,
when the head product guy at Twitter says,
you know, I don't let my own kids use social media,
that's all you need to know
about whether something is good or safe.
So one of the things,
there's some proposals you brought up.
There's one based on a work by Taiwan's digital minister
who's so creative,
where a hundred regular people get in a room with AI experts
and they come out with a proposal.
That's an interesting one. Another one you came up with is having a national televised discussion, where major AI labs, lead safety experts, and other civic actors talk on TV. That's hard
because then you get a, on one hand, I could see that working, but not working.
Yeah, that's why you'd have to do it carefully. Let me explain the Taiwan one really quickly.
Okay.
Okay. So let's imagine there's kind of two attractors for where the world is going right now. One attractor is I trust everyone
to do the right thing. And I'm going to distribute godlike AI powers, superhuman powers to everyone.
Everyone can build bioweapons. Everyone can make generative media, find loopholes in law,
manipulate religions, do fake everything. That world lands in continual chaos and catastrophe
because it's just basically I'm
handing everyone the power to do anything. Oh, yeah. Everyone had superpowers. Yeah.
Right. So that's one outcome. That's one attractor. Think of it like a 3D field and
it's kind of like sucking the world into one gravity well. It's just like continual catastrophes.
Kind of like guns, but go ahead.
Yeah. The other side is dystopia, which is instead of trusting everyone to do the right thing with
these superhuman powers, I don't trust anyone to do the right thing. So I create this sort of dystopian
state that sort of has surveillance and monitors everyone. That's kind of the Chinese digital
authoritarianism outcome. That's the other deep attractor for the world, given this new kind of
tech that's entering into the world. So the world is currently moving towards both of those. And
actually, the more frequently the continual catastrophes happen, the more it's going to drive us towards the direction of the dystopia. So in both cases,
we're getting a self-reinforcing loop. So the reason I mentioned Taiwan is what we need is a
middle way or third attractor, which is what has the values of an open society, a democratic society
in which people have freedom. But instead of naively trusting everyone to do the right thing,
instead of also not trusting anyone to do the right thing, we have what's called warranted
trust. So think of it as a loop. Technology, to the degree it impacts society, has to constitute
a wiser, more responsible, more enlightened culture. A more enlightened culture supports
stronger upgraded institutions. Those upgraded institutions set the right kind of regulations
or guardrails,
et cetera, for better technology that then is in a loop with constituting better culture.
That's the upward spiral. We are currently living in the downward spiral. Technology
decoheres culture: it addicts, outrages, isolates. That incoherent culture can't support any
institutional responses to anything. That incapacitated, dysfunctional set of institutions
doesn't regulate technology, which allows the downward spiral to continue.
The upward spiral is what we need to get to.
And the third way, what Taiwan is doing
is actually proving that you can use technology
in a way that gets you the upward spiral.
Audrey Tang's work is showing that you can use AI
to find unlikely consensus across groups.
You know, there's only so many people
that can fit into that town hall
and get mad at each other.
What if she creates a digitally augmented process
where people put in all their ideas
and opinions about AI
and we can actually use AI to find the coherence,
the shared areas of agreement that we all share
and do that even faster than we could do without the tech.
So this is not techno-utopianism,
it's techno-realism of applying the AI
to get a faster OODA loop,
a faster observe, orient, decide, and act loop
so that the institutions are moving as fast
as the evolutionary pace of technology. And she's got the best, closest example to that. And that's kind of
part of what a third attractor needs to identify. Right, where people feel that they've been heard
and at the same time don't feel the need to scream. Right. Which is absolutely true. She's
really quite something. Having a national debate about it, I know people will just take away
whatever they want from it.
Yeah.
Let me explain that, though, which was modeled after the film The Day After.
So in the previous era of a new technology that had the power to – I remember I was there in college when that happened.
In college when it came out.
I was not born yet, but I –
Let me just explain.
This is a movie about the nuclear bomb blowing up, and they convened groups all over the country to talk about it, watch the movie, and then discuss it.
And it really was terrifying
at the time. But we were all joined together in a way we're not anymore. I can't even imagine
that happening right now. It was a made-for-TV movie commissioned by ABC, where the director
Nicholas Meyer, who also directed Star Trek II, The Wrath of Khan, and some other great films,
they put together this film that was basically noticing that nuclear war, the possibility of it,
existed in a kind of a repressed place
inside the human mind.
No one wanted to think about this thing
that was ever present.
That actually was a real possibility
because it was the active Cold War
and it was increasing and escalating
with Reagan and Gorbachev.
So they decided, let's make a film
that became the most-watched made-for-TV film
in all of TV history.
100 million Americans tuned in,
I think it was 1983, and
watched it at once. They had a whole
PR campaign, put your kids to bed early,
which actually increased the number of people who actually
did watch it with their kids. Reagan's
biographer later, several years later,
said that Reagan got depressed for weeks.
He watched it in the White House film
studio. And then the Reykjavik
Accords happened, because, I should mention,
they aired the film The Day After in
the Soviet Union a few years later, in 1987. And it scared basically the bejesus out of both
the Russians and the US. Yeah, it was quite something at the time. And it made visible and
visceral the repressed idea of what we were actually facing. We actually have the power
to destroy ourselves. And it made that visible and visceral for the first time. And the important
point that we mentioned in this AI dilemma talk that we put online is that after this, you know, one-and-a-half-hour, whatever it was, film, they aired a one-hour debate where they had Carl Sagan and, you know, Henry Kissinger and Brent Scowcroft and Elie Wiesel, who, you know, studied the Holocaust, to really debate what we were facing.
And that was a democratic way of saying,
we don't want five people at the Department of Defense in Russia and the US deciding
whether humanity exists tomorrow or not.
And similarly, I think we need that kind of debate.
So that's the idea.
I don't know about a TV broadcast.
Well, you know, I don't think it'll work today.
Honestly, I don't.
I think everyone is so,
what's interesting is that was very effective.
That's an interesting thing to talk about, The Day After,
because it did scare the bejesus.
Watching Jason Robards disintegrate in real time was disturbing. But there was nothing like that, and now there is
a lot like that, right? Everybody is constantly hit with information every day. It was
unique because we used to have a commonality that we don't have. So you have gone on Glenn Beck's
podcast, God save you, um, Brian Kilmeade's podcast.
We do a lot of media across the board.
Right, exactly.
Do they react differently from your message than progressive audiences?
No.
Because, again, can they split?
Like progressive tech companies are bad.
Well, let me say it differently.
Conservatives, you know, surveillance and the deep state. Well, exactly.
Social media got polarized.
So actually one of the reasons I'm doing a lot of media across the spectrum is I have a deep fear that this will get unnecessarily politicized. We do not want that.
That would be the worst thing to have happen when there's deep risks for everybody. It does
not matter which political beliefs you hold. This really should bring us together. And so I try to
do media across the spectrum so that we can get universal consensus that this is a risk to everyone
and everything, to the values that we have and to people's ability to live in a future that we care about. I do this because I
really want to live in a future that kids can be raised and we can live in a good world as best as
we can. We're facing a lot of dark outcomes. There's a spectrum of those dark outcomes. Let's
live on the lighter side of that spectrum rather than the darkest side, or maybe the lights go out.
So one last question. How do you think the media has been covering it? Because there is a pressure
if you cover it too negatively, it's like, oh, come on, don't you see the better, you know, are you missing the bigger picture? And I know from my personal experience, I'm so sick of being called the bummer by Ernie Erton. It gets exhausting. But at the same time, you do want to see maybe this time, we can do it better. Give me hope here because I definitely feel the pressure not to be so negative.
And I still am.
I don't care.
And I think in the end, both of us were right back then.
But it doesn't feel good being right.
Everything creates externalities, effects that show up on other people's balance sheets.
If you're a doomer and you think you're just communicating honestly but you end up terrifying people, maybe some shooters come around and they start doing violent things because they've been, you know,
terrorized by what you've shared.
I think about that a lot.
I think a lot about responsible communication.
So I think there's a really important thing here,
which is that there's kind of three psychological places
that I think people are landing.
The first is what we call pre-tragic.
I borrow this from a mentor, Daniel Schmachtenberger,
who we've done the Joe Rogan show with.
Pre-tragic is someone who actually doesn't want to look at the tragedy, whether it's climate or some of the AI issues that are facing us or social media having downsides.
Any issue where there's actually – there is a tragedy, but we don't want to metabolize the tragedy, so we stay in naive optimism.
We call this kind of person a pre-tragic person because there's a kind of denial and repression of actual honest things that are facing us.
Because I want to believe, well, things always work out in the end.
Humanity always figures it out.
We muddle our way through.
Those things are partially true too, but let's be really clear about the rest.
Okay, so that's the pre-tragic.
Then there's the person who then stares at the tragedy.
And then people tend to get stuck in tragedy.
You either get depressed or you become nihilistic.
Or the other thing that can happen is you actually, it's too hard and you bounce back into pre-tragic.
You bounce back into, I'll just ignore that information, go back to my optimism, because
it's just too hard to sit in the tragedy.
There's a third place to go, which is we call post-tragic, where you actually stare face to face with the actual constraints that are facing us, which actually means accepting and grieving through some of the realities that we are facing.
I've done that work personally, and it's not about me.
I just mean that I think it's a very hard thing to do.
It's humanity's rite of passage.
You have to go through the dark night of the soul and be with that so you can be with the actual dimensions of the problems that we're dealing with.
Because then when you do solutions on the other side of that, when you're thinking about what do we do, now you're honest about the space.
You're honest about what it would take to do something about it.
So you're not negative.
No.
But people will cast you as that.
So there's something called pre-trans fallacy where someone who's post-tragic can sound like someone on the other side.
Yeah.
It can sound confusing.
So I can sound like a doomer, but really it is I'm trying to communicate clearly.
People often ask me, like, am I an optimist?
No.
Had to ask.
Had to ask.
You know, Sam Altman has his little home.
I know he does.
I know he does.
Yeah.
He wanted to ask me what was my plan.
You know, just joking.
We were joking around.
And I said, well, you're smaller than I am.
I'm going to beat you up and take your things and take your whole plan.
He's like, that's a good plan.
Take his house in Big Sur or whatever it is.
Yeah.
I was like, he goes, that's a good plan.
I go, it's an excellent plan.
Yeah.
I think I can take you if it came to that.
I think we need to get good at holding each other through to the post-tragic.
I don't know what that looks like, but I know that that's what guides me and what we're trying to do.
And if there's anything that I think I want to get even better at is it's hard once you take people through all these things to carry them through to the other sort of side.
Right, because they get hopeless.
They get hopeless.
Yeah, you can be hopeless.
After that thing, I came back.
I'm like, we are fucked. Like we were so, you know, after that thing. And I thought, that's not going to go over well, because most people hide on Instagram or TikTok.
That doesn't feel good. Let me run away from myself again. Let me scroll a bunch of photos. This is going to be a difficult time. The more we can go through and see the thing together, I think part of being post-tragic is actually going through it with each other, like being there with each other as we go through it.
I'm not saying that just as a bullshit throwaway line.
I really mean it.
I think we need to be there for each other.
All right.
Post-tragic, hand in hand.
Here we go.
Let's do it.
Thank you.
Okay.
Thanks.
Hold me, Kara.
Hold me into the post-tragic.
No, I will not.
No, thank you.
What is that?
I don't know. I think he's right. I think he's right about what's going to happen. I think he's 100% right.
Yeah, I think the Taiwanese minister, Audrey Tang, saying, you know, how do we make this world more humane?
She's fantastic.
She is the hopeful version of that, but she's just as worried. It's just that she is saying, okay, now what are we going to do?
And I think that's probably the part that just I need to work on. Like, it's all, you know, the end is near. Oh,
oh, okay. Really? Can we do anything to stop it? So are you pre-tragic, post-tragic, or just what
was the other one was like bathing in, staring into the abyss? What are you? Just tragic, I guess.
You know, in this case, you fall into the abyss, right? You don't just stare into it. It envelops you. It's like a black hole in a lot of ways. But I guess I would say post-tragic, I'm like, all right, what are we going to do about it? Anyone who has kids or family has to say that. You can't say, oh, the world is ending. Let's all, you know, eat Twinkies and forget about it. I think you have to be post-tragic. I'm going to put myself and I think you in a different category, which is we're post-post-tragic.
Oh, all right.
You know, we're not as doomy because you have to make sense of this in some way. You have to
unleash it. And I liked when he kind of was saying, okay, well, we were wrong to say AI
winter and AI pause. And we should have said, you know, I don't know, AI hot summer or whatever.
Yeah.
But this idea of redirecting the technology, and that I think was actually a
more compelling way to frame the conversation. 100%. Yeah, I think 100%. I think that letter
was stupid. And I thought I said so at the time. It's like, come on.
But Andrew Yang signed it.
Whatever. Good luck, Andrew. I mean, I was on the list.
I think there's a lot of, you know, peacocking here by a lot of people. And so I think like,
I want to get to solutions because we have a record
of what happened the first time. And so we have an opportunity then to do something. And history
doesn't have to repeat itself. I know that's an old trope. You're right. History can't repeat
itself. I like that conversation about The Day After film, and you're kind of pushing back,
saying that really can't happen. We don't have a single national news hour, you know, provided by three broadcast networks. But I also think that AI could change that.
There could be a rallying around the flag or a sense of camaraderie that we need to figure this
out. And big problems present opportunities for people to get over their problems.
Yeah, I'm of the mind that people are a lot, that average people are a lot smarter and more
reasonable than our public discourse would show. I remember The Day After. I had not thought
of that in years. It was such a, like, a memory, like, when it came back. And it was, people were
silenced. And everyone understands that the atomic bomb had been dropped. You know, people understand
war. But that was an important media moment, and I had utterly forgotten. It impacted
everybody in a bad way, for good, if that makes sense. Yeah. So I like that you lived it, but he
kind of mansplained it to you, by the way. It's true. He's like, let me tell you who made this film
and what it was about. I'm aware. I watched it. I remember being there at Georgetown University,
and it was such a moment. It really was.
When was it?
In 1983.
I wasn't born, yeah.
Oh, okay, then you shouldn't have to worry about it at all. It reminds me of a person I met. I made
a Mount St. Helens reference, and they're like, what's Mount St. Helens? And I said, it erupted
in whatever the year it was. And they're like,
oh, I wasn't born. I was like, all right, I'm just going to leave the conversation right now.
Mount St. Helens. But one of the things that frustrated me, not the mansplaining, was the failure of the analogies. I
think he talked about AI like being these blinding lasers, but the blinding lasers are a brute
singular weapon that are easier to kind of get ahead of. He talked about nuclear,
which is more apt because you have nuclear energy as well as nuclear weapons. But I think we have
had Hiroshima and the world was just organized differently at that time. The biosafety labs
analogy was a little bit more interesting. And then you raised CRISPR, which I think is probably
the best here. People will violate it. Just like, what can we do as a group?
I'd forgotten about lasers on the battlefield.
Like, what's wrong with us?
And I'm surprised more people don't use it,
and maybe they will in the future.
Maybe the catch is off and we're over the edge,
as the song goes.
But you should contemplate the worst thing.
Yes.
And I think the difference is,
it's easier to get a group of scientists like in CRISPR to agree on ground rules than a bunch of capitalists.
Yeah, they won't.
Who are incentivized to make money.
And even the scientists couldn't hold it together.
You had the, you know, Chinese scientist do it.
That we know of, you know.
I love that, again, business, tech have kind of disemboweled government.
And they're like, well, you've got to really figure this out, guys.
This is on you.
We're in partnership, but it's really on you.
Elon now owns a presidential candidate for the man who has everything.
So, he's got this.
Let's end on that.
I'm going to calculate my P(doom).
Meanwhile, can you read us out, please?
Yes.
Today's show was produced by Nayeema Raza, Blakeney Schick, Christian Castro-Russell, and Megan Burney.
Special thanks to Mary Mathis.
Our engineers are Fernando Arruda and Rick Kwan.
Our theme music is by Trackademics.
If you're already following this show, welcome to the world of post-tragedy. Hey, it could be worse. If not, it's a high P-Doom for you. Go wherever you listen to podcasts, search for On With Kara Swisher and hit follow. Thanks for listening to
On With Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us. We'll be back
on Monday with more.

its own unique story through distinctive design and immersive experiences, from medieval falconry
to volcanic wine tasting. Autograph Collection is part of the Marriott Bonvoy portfolio of over
30 hotel brands around the world. Find the unforgettable at autographcollection.com.
Food insecurity still affects millions of individuals around the globe,
and Nestle, a global leader in nutrition, health, and wellness,
understands the importance of working together to create lasting change.
Nestle's partnerships extend beyond just financial support.
From building urban hoop houses to producing custom seasoning for food banks,
Nestle and their partners actively engage with local communities,
listening to their needs, and working together to find innovative solutions.
Nestle is committed to helping support thriving, resilient communities
today and for generations to come.
Together, we can help to build stronger, healthier communities.
Learn more at Nestle.com.