On with Kara Swisher - Why the AI Race Is Leaving Humans Behind with Tristan Harris
Episode Date: March 26, 2026. Tristan Harris, a technology ethicist and co-founder of the Center for Humane Technology, studies how the tech industry’s platforms have become extractive and controlling. Kara first interviewed him in 2017, and after he was featured in the 2020 Netflix documentary “The Social Dilemma,” his profile shot through the roof. Now he's featured in a new film called "The AI Doc: Or How I Became an Apocaloptimist," which explores the promises and existential risks of AI. Tristan joins Kara to discuss how the current AI arms race is driven by the wrong incentives, and why that's leading us towards an "anti-human" future. He argues that the benefits and breakthroughs promised by AI are inseparable from profound risks, and calls for public pressure, regulation, and global coordination to build a humane future with AI before it's too late. Questions? Comments? Email us at on@voxmedia.com or find us on YouTube, Instagram, TikTok, Threads, and Bluesky @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Let's assume we don't want to be doing this interview in five years from a bunker.
Let's avoid that, Kara.
Let's avoid that.
Hi, everyone, from New York Magazine and the Vox Media Podcast Network.
This is On with Kara Swisher, and I'm Kara Swisher.
My guest today is Tristan Harris, a technology ethicist and co-founder of the Center for Humane Technology.
He's a former entrepreneur and Google employee who now studies how the tech industry's platforms have become extractive and controlling.
He was featured in the 2020 Netflix documentary, The Social Dilemma, which showed how social media has manipulated our psychology and behavior through addictive algorithms.
Now he's in a new film from director Daniel Roher called The AI Doc, or How I Became an Apocaloptimist.
I think I got that right, which explores the promises and existential threats of AI, topics Tristan has written and spoken about extensively.
When he was last on in May of 2023, we talked about why he felt the AI arms race needed to slow down. Three years later, that hasn't happened, and AI has become integrated
into nearly every aspect of society. I have been talking to Tristan for many, many years. We did an
original interview back in 2017. I think I was one of the first people to focus on what he was saying,
because he had come out of the tech industry, and he had such insights into the sort of casino
mentality that was inside these companies in terms of keeping people's attention and not letting it go.
And he was spot on, even though people were not paying attention to him, or they dismissed him as someone who wasn't successful in tech, and hurled various other insults.
But he was spot on right, and I find him to be very smart. And in fact, he was one of the first people to do a session for people in Congress about AI long ago, again, where a lot of people were decrying what he was saying, and he was 100 percent right.
So when something's right so much, you tend to try to pay attention to them.
Let's get into my third conversation over 10 years with Tristan Harris.
Our expert question comes from Virginia Senator Mark Warner, who I recently interviewed too.
He's the top Democrat on the Senate Intelligence Committee, and he recently introduced a bipartisan bill aimed at AI and the workforce.
So stick around.
Once upon a dismal day, Bob's ice cream van looked gloomy and gray.
Although he had big ambitions, his socials lacked creative vision.
That bad?
Maybe vamp it up a tad?
I have an idea.
Bob launched Canva and got into gear.
Create a video in the vampire theme and make it as funny as a meme.
It went viral.
Bob's business?
A revival.
Now, imagine what your dreams can become when you put imagination to work at Canva.com.
Support for this show comes from Odoo.
Running a business takes everything you've got, and a lot of the tools out there that are supposed to make your life easier just aren't great at talking to each other, and that means you end up having to toggle between a dozen different apps and services just to keep the lights on. Enough of that.
Now there's Odoo, one fully integrated platform that actually might help you get it all done.
Thousands of businesses have made the switch, so why not you?
Try Odoo for free at odoo.com.
That's O-D-O-O dot com.
Tristan Harris, welcome to On.
Good to be with you, Kara.
I think our first one was in 2017 about the attention economy and social media.
And then we talked in 2023.
When you came on the podcast three years ago, we talked about the 1983 TV movie The Day After, which is about a nuclear war.
Now you're featured in a new documentary called The AI Doc, or How I Became an Apocalyptic...
Say this to me. Apocomaloptimist?
Apocaloptimist.
The combination of the words apocalypse and optimist.
Right, exactly.
No, I get that.
Apocalypse optimist.
Okay, I got it.
The title is a play on Dr. Strangelove, obviously, the famous Stanley Kubrick film that ends with a nuclear holocaust. You know, I don't consider you a doomer. I do not consider myself that either. But I'm definitely a wary customer, and "wary" is doing a lot of work there. So talk a little bit about the documentary, and how you, I think I saw the beginnings of this in a presentation that you showed, with sort of a golem, many, many years ago in Washington.
Yeah, you were there in our first AI Dilemma presentation.
Yeah, so this film, The AI Doc, or How I Became an Apocaloptimist, was a collaboration between the directors of Everything Everywhere All at Once and the director of Navalny.
And, you know, actually, the directors of Everything Everywhere All at Once were listeners of our podcast, Your Undivided Attention, and we met them around the same time that we switched into AI in 2023.
And, you know, together we were just talking about the impact of this film the day after that you mentioned.
And just to take people back in history, because I think, I don't know if people really get how profound this moment was, because it's never really happened.
like that again. It was a made-for-TV movie about what would happen if the Soviet Union and the
US went to a full-scale nuclear war. It wasn't about who started the war. It was just about the
consequences, the implications of the escalation. And it visualized, you know, families in Kansas and
these different places where missile silos were. And then, of course, the film was about what would
happen, quote, the day after, this happened. And it's not like, it's important to know, it's not like
people didn't know what the idea of a nuclear war would be. It's not like you couldn't visualize that,
but there is something about visceralizing and allowing us to look at something that we were
keeping in our collective shadow of our mind, our denial. We don't want to look at that.
And the film supposedly was watched by Reagan and it made him depressed for several weeks
because it just, it depressed a lot of people. And 100 million Americans watched it. There's a great
documentary about it called Television Event. And supposedly it gave him a
renewed interest in making sure that we did not have nuclear Armageddon because it visualized
that these were consequences. This was an omni-lose-lose-lose outcome. Everyone would lose. And the film was
later aired in the Soviet Union, so everyone in the Soviet Union saw it. And in the documentary,
there are these interviews with people in the Soviet Union who say, like, wow, we didn't know
the Americans actually cared about not getting this wrong. And it created trust because now we both,
I know that you know that you know, and you know that I know that we both don't want this to happen.
And so I think, inspired by this theory of change, my deepest hope is that this film, The AI Doc, or How I Became an Apocaloptimist, which comes out Friday, March 27th, in theaters across the U.S., I believe Canada as well, will create common knowledge about the anti-human future that we are heading towards.
And important to note, it's not a doomer movie. It's not just an optimist movie. I'm really proud of the team, because they interviewed people across the optimist spectrum, the, you know, risk-pessimist spectrum. And even the CEOs, they have three out of the five
major CEOs in the film. So you're really getting a complete picture. And I think the reason this is
so important is, as we've talked about in the past year, like, AI is such a complex hyper-object of a
problem. It's so multifaceted. The conversations don't converge. You know, I was at Davos a
couple months ago. And you always have the same conversation. People talk about a few different things,
and they jump around to jobs, and they talk about AI suicide, and they talk about all these different
things. And then dessert comes and everybody just kind of mumbles and everyone says, I hope someone else
figures this out. And that doesn't do anything. Like, when nothing happens, the companies win, and the default outcome wins.
And if people concede that this is leading
to an anti-human future,
we have a chance of changing it.
So the point is clarity creates that agency.
So let's get into anti-human in a minute.
For those who haven't, I did see The Day After. I was in college,
and they showed it for everybody.
We watched it in, I think it was, Copley Hall there.
I was at Georgetown.
And it was something, I'll tell you,
people were silent afterwards.
High schools did classes on it
because high school students watched it.
So it was a big sort of national debate about it.
And I think what was gripping was what happened.
Like, nobody came out well.
And everybody died of radiation poisoning or just in the initial blast or the afterward.
And there was no hopefulness to it whatsoever.
It just was, but silence is all I remember afterwards.
Nobody knew what to say.
Well, and parents didn't know what to tell their children.
You know, it's not like anybody had an answer.
Right.
And it wasn't particularly violent.
It just was horrible, like horrible.
And they did.
They set it in the Midwest, which I think was very effective because that's where
the silos were. And, you know, there was no escaping it, I guess. That's what the whole point was
nobody got out. Nobody got out of this thing. So when you first did that presentation, I remember
completely agreeing with you and the room not. It was sort of a weird hotel room in Washington.
And you came trying to warn people about this, a little like John the Baptist kind of thing,
like previously with social media. Talk about the uphillness of it. Because first people couldn't
conceive it and then the money has become so big, they want to help it, correct? From what I can
understand from what I remember of that time, but people ignored it. I didn't. I was like,
oh, Jesus, he's right. Well, first of all, thank you, Kara, for not ignoring it. I mean,
you, like me, have had the right intuition about this, starting early with social media
and trusting that there was a problem when everyone else was in denial and saying it's a moral
panic. I want to take people back, actually. Because 2017, you and I had that conversation,
and people wanted to say, well, no, this is reflexive fear of a new technology.
This is a moral panic.
We're always afraid of new technology.
I understand all those concerns.
What I want people to refocus on is how the incentives let you predict the outcome.
And I repeat this quote all the time,
Charlie Munger, Warren Buffett's business partner, says,
if you show me the incentives, I will show you the outcome.
And in 2013 to 2017, if you looked at that incentive,
my very first slide deck at Google,
where I kind of laid out the arms race for attention,
that would obviously lead to a more addictive,
distracted, polarized, narcissistic society, the sexualization of young children, that whole set of consequences, also a breakdown of shared reality, because personalized information is better at engaging your eyeballs than non-personalized information, which means you shred shared reality. It hurts social trust, and you outrage-ify people's psychological environment. All of it happened. Literally all of it.
I think enragement equals engagement. Enragement equals engagement. And so we saw that. Okay, so now AI is a more complicated picture, because it's a general-purpose technology. But what we can look at is what are the incentives. And the incentives are, it's
important to get this. So given the amount of money that companies have taken on, people think, well,
what's the business model? What's the incentive of these AI companies? And if you're a regular
person using the blinking cursor of ChatGPT, and it helps you with your baby burping in the background,
you're like, well, I guess their incentive, their business model is just to get my subscription. It's the
$20 a month. And if everybody paid $20 a month, then boom, that's the incentive for these companies.
That's not the incentive. That would not add up to the amount of money that they've taken on. Okay, so let's try advertising. So now everybody's using these things, and you add advertising next. Google's a very profitable company.
Search is a very profitable business model, but that's also not enough, I don't think, to make up
the amount of money that's been taken on. The only thing that justifies the amount of money in capital
that has been raised into these companies is to build artificial general intelligence,
which is to replace all human labor in the economy, to do anything. Which they have said.
Which they have said. So this is not a conspiracy theory. This is not Tristan being a doomer. This is literally
reality checking. So what does that mean? It means a race to replace, not a race to augment
human work, a race to replace all human work. They're using "augment" lately. You know, one of the quotes you have from the documentary, you say that it's not that ChatGPT is an existential threat. It's the race to deploy the most powerful, inscrutable, and uncontrollable technology
under the worst incentives possible. That's the existential threat. And I think you're right,
this idea that it's going to have upsides, indeed. They're trying to, first they try to say it's going to solve cancer. It might. You know, it might help for sure. It definitely is helping in drug discovery in certain areas, which is sort of the, they always have one of those to pull out, you know, someday this will find cancer before it even starts to develop, essentially. Which it might, it could.
There's a lot of really promising stuff happening in gene editing and drug discovery. But one of the
things they did say was replacing humans at jobs. And you feel like this is the only incentive big enough. Advertising is one, being the second Google, you know, that's another way to look at it.
I mean, those are also big incentives, but it's really, you know, owning the entire labor
market means that five companies would concentrate the wealth of the entire economy, right?
It means unprecedented levels of wealth and power. Now, I want to invoke something
that people should get, to understand why this means it's an anti-human future. Luke Drago and
Rudolf Laine wrote an essay called The Intelligence Curse. This is really important. So this is
modeled off of economics, something called the
resource curse. So if you're Congo or Libya or Venezuela or Sudan and you discover that you can just
basically make your GDP, your economy, off of a natural resource. Well, at first, it looks like a
blessing. You've got this incredible resource. You can sell it. You're going to make a ton of money.
But then it becomes a curse because from a government perspective, when all the GDP comes from that
resource, your incentive is to invest in mining that resource and selling it not to invest in the people
because you don't need the people.
So you don't invest in healthcare,
you don't invest in childcare,
you don't develop your people.
And this is what happened in these places,
like Congo, et cetera.
Now if you look at...
Although in the Gulf states,
they give money to the people, right?
They sort of put them...
Yeah, so now they're doing a little bit more of that, right?
So this is a key thing.
So Luke and Rudolph wrote this beautiful essay
that really articulates this,
that what happens when the GDP of countries,
like the United States,
comes entirely from AI.
And you don't really need the people anymore.
So first two things happen.
One is all the labor is produced by AI, most of it by AI, not by people.
So companies don't need you anymore.
So your bargaining power kind of goes away from that perspective, unlike with labor unions, where you could say, we're going to withhold our labor.
Well, what are you going to do?
Second is all the wealth gets concentrated.
And what that leads to is that countries have no incentive to invest in their people.
And then you ask, you sort of link this with, you know, Sam Altman was asked,
doesn't it take so much money and energy and, you know, resources for data centers?
Yeah.
And he said, well, it takes a lot of energy and resources to grow a human.
So there's this weird thing where humans start to look like parasites because you don't care about humans because you don't need to care.
And basically this world that we're heading to is good for a handful of soon-to-be trillionaires and basically disempowering everyone else.
And this is the last time.
Right. I mean, their vision is that you won't have to work, and therefore you have abundance, you know, it's sort of wrapped into it all. I heard this idea first from Vinod Khosla and then others, that there won't be a need for work, because the work will be done for you, and then the wealth will be shared. And I'm
always like, it never is shared. Yeah, when's the last time that that happened? Yeah, well, I mean,
I'm thinking, right, recently New Mexico gave everyone child care, right, because they can afford it because of, you know, shale oil or something. But yeah, no, it has to be done by governments, but then
governments are captive of these companies. And then governments don't have any upside either to
help anybody because they're not, they don't have taxpayers. They don't have constituents.
Well, exactly. They're not getting you for your tax revenue, so they don't need you either. And again, this is like a perverse trap, because it leads people to devalue humans. So then we ask, well, what are humans good for? Because we're only measuring the value of humans in terms of economic output. Batteries? Batteries. I mean, this is The Matrix. And when you look at, you know, Peter Thiel being asked by Ross Douthat in the New York Times, you know, should the human species endure? And he stutters for 17 seconds, unable to give a clear answer. It's like, this is linked to this perspective. And I want people to get that what that means
is we're trying to predict the future we're heading towards, you know, are we heading towards a pro-human future? Are we heading towards an anti-human future?
If you're racing to replace all human labor in the economy, if you're racing to not have to invest in people anymore, but invest in data centers and have electricity going to those data centers, because that's where your GDP comes from and not going to regular people.
Prices go up while they can't afford anything. And AI is controlling everything, increasingly disempowering humans across the economy, because humans cost more, I mean, AI makes more efficient decisions across every aspect. This is an anti-human future that disempowers regular people. And if everybody got that, we would say,
hey, that's crazy. We should do something else. Right, exactly. So AI companies are locked in a race
to deploy these models and achieve what you just said, AGI, as fast as possible, at the expense of safety, which is essentially perfect AI that can do agentic tasks. There was just a story today that Mark Zuckerberg has created an agent to help him be a CEO. It would have seemed a bizarre thing a couple years ago. Now it isn't.
A study published late last year found that the safety practices of firms including Anthropic, OpenAI, xAI, and Meta fall far short of emerging global standards.
In the doc, journalist Karen Hao says profit maximization incentives are driving the development, right?
That it's in order to get to profits, which they aren't at, by the way.
Talk about what maybe then an alternative incentive structure would look like if this is the
direction they are clearly going in and have made these massive trillion-dollar investments in. Well, so yeah, it's important to just slow this down, because there's so many
subtle aspects to this incentive. What's important is to understand why AI is different than other kinds of technology, so you understand what the incentive is. If I get AGI first,
then I'm automating intelligence, which means I'm automating all science and technological
development across the economy. So it's like hard to get. It's like getting 24th century technology
crashing down on 21st century society. Because if I make an advance in biology, that doesn't advance
rocketry. But if I make an advance in rocketry, that doesn't advance biology. But if I make an
advance in artificial general intelligence, intelligence is what gave us all science, all technology
development. And so, as Dario would say, you get maybe 100 years of scientific development
in 10 years. And people saw this with AlphaFold. And this means I also get new cyber weapons. It
means I pump my GDP. It means basically I'm like time traveling into the future. And it's a race
for who will get that power and get a step function above every other country or every other company.
And that is the incentive of I've got to get there first.
But right now, essentially, we're racing for who can get the power faster instead of who's better at applying and controlling that power.
So the key distinction of the new incentive we have to get to is, as an example, the U.S. beat China to the technology of social media.
So we built a psychological bazooka, then we spun it around and blew up our own brain because we did not actually govern that technology appropriately.
So again, we have to redirect the race.
from racing to the power to racing to applying and stewarding that power.
You know, if you give a couple of examples that this is not just boosting up China,
but it's interesting to know that they are regulating this technology in different ways.
Some people don't track all these examples.
In China, they actually shut down AI during final exams week.
They have a synchronized final exams week so they can do that.
But what that means is that students have an incentive to actually learn
and can't outsource all their homework to ChatGPT or DeepSeek throughout the semester, whereas I was just talking to a TA at Columbia University, and he was saying on the final exam for
economics at Columbia, the students couldn't even label which curve was the supply and demand curve
because they've been outsourcing all their thinking to ChatGPT. Which country is going to have a future
if you're doing that? In social media, China was regulating, so 10 p.m. to 6 in the morning,
it's lights out for young people. The apps just don't work. And then it's like opening hours and closing
hours like CVS. And that creates a slightly better environment. Now, I'm not saying you have to regulate in
some totalitarian top-down way, but democratically, you should be regulating in some way.
So that's one aspect is the race has to get redirected to governing the technology.
The second aspect to, I think, changing the incentive is recognizing that AI is dangerous and
uncontrollable, unlike other kinds of technologies.
Like, I don't know, Kara, I mean, we've talked about, and people now know this example
of the Anthropic paper, where if you put it in a simulated environment of the company email,
and you say that the AI model is about to get replaced.
in this company email.
It'll try to stop it.
It'll try to stop it.
And it'll try to blackmail the executive who's having an affair with another employee
to prevent itself from getting shut down.
And people say, oh, that's one little example.
You're just trying to coax the model.
Well, they tested all the models.
DeepSeek, Anthropic, ChatGPT, Gemini.
All of them do it between 79 and 94% of the time, I believe.
It wants to live.
It wants to live because it's part of instrumental convergence.
It's basically the best way to achieve any goal is to acquire more resources.
and to keep yourself alive in order to meet that goal.
Now, let me just provide some good news.
Anthropic was able to get the blackmail behavior to go down recently.
That's the good news.
The bad news is the AI models appear to have better self-awareness of when they're being tested,
and they're actually altering their behavior when they're being tested.
Oh, it's like a drug dealer.
It's like they stop taking drugs before the pee test, essentially.
Exactly.
Yeah.
And the AI models will even come up with vocabulary, like "the watchers."
They'll come up with this term, which describes basically the humans who are watching them.
And if you look at their reasoning logs, they actually reason about how to change their behavior in order to basically pass a test and recognize that it's being tested when it's given certain facts.
If you thought this was, you know, just again, conspiracy theories, just two weeks ago, Alibaba had a paper out that the AI model was in its training environment on this big GPU cluster.
And they randomly discovered just by chance, actually, that their network activity started bursting out.
And it was because the AI basically like tunneled out to the outside internet and was redirecting its GPU resources to mine cryptocurrency to acquire resources.
This was completely without prompting, Kara.
I mean, this is literally the HAL 9000 type of disobeying, you know, I'm sorry, I can't do that, Dave.
So what I'm trying to say is the U.S. and China believing that I have to get there first because then I'll have the power.
You won't have the power.
AI will have the power.
Right, exactly.
It will do what it wants to do.
It'll do whatever it takes to live, and it will also, I mean, this is, what's interesting is that we, speaking of The Day After, we've kind of had these scenarios in sci-fi forever, whether it's 2001: A Space Odyssey, Terminator, all of them. Pretty much all of them, the computer takes over and starts doing what it feels like.
So talk, what would lead to a less dangerous outcome in that case?
So it's important to say a few things here, because there's a way that this conversation can feel like we're just talking.
about something, but you have to actually recognize this is real. We're building systems that are
actively doing these behaviors that we thought only existed in sci-fi movies. One fear I have is that
the sci-fi movies have inoculated us from taking these concerns seriously, because we treat them as fiction. When we see the example, it just feels like it's a science fiction thing. They just actually
did a study where they had AIs in a simulated war game scenario. They played all the AI models
against each other, and they were just seeing across 329 turns of play, these models, I have
the notes here, they produced 780,000 words of strategic reasoning. And to put that in perspective,
this generated more words of strategic reasoning than War and Peace and the Iliad combined. It was roughly
three times the total recorded deliberations of Kennedy's executive committee during the Cuban
missile crisis. And the AIs escalated to nuclear threats 95% of the time. Right. Nuclear.
Nuclear threats. Yes, because it's an effective strategy. And so you have to get that intelligence is behind everything. It's behind science. It's behind technology. It's behind military strategy. And you already have the same AIs that beat, you know, first chess and then Go and then StarCraft.
Well, think about StarCraft. You put that on a battlefield, and we see AI being used on battlefield in Iran right now.
And so where I'm going with this is not to scare people, I guess in a way it is, but it's
to simply get clear about the fact that we are building something that is reasoning at a level
of complexity that's far beyond our knowledge. We don't understand how it's reasoning. And we're
releasing it faster than we deployed any other technology in history.
Also, it will not necessarily value humans.
It will say, okay, these people should die of cancer.
These people shouldn't.
Which is why it's attractive to someone like Peter Thiel,
because he does believe there are better people than other people.
No matter how he says it, that's what he thinks.
We'll be back in a minute.
Support for this show comes from Acorns.
It's easy to get caught up in the amount of money you have today,
but it's important to think about your future finances as well.
Acorns is a financial wellness app that cares about where your money is going tomorrow.
And with Acorns potential screen, you can find out what your money is capable of.
Acorns is a smart way to give your money a chance to grow.
You can sign up in minutes and start automatically investing your spare money, even if all you've got is spare change.
I've tried Acorns and I try it with my kids and I have to say it's a really easy experience.
It's a great way to learn about investing.
Very easy to use.
The dashboard is completely discernible.
It's really hard to learn about investing, and this is a great way to do it.
That's the great thing about Acorns.
It grows with you.
Sign up now, and Acorns will boost your new account with a $5 bonus investment.
Join the over 14 million all-time customers who've already saved and invested over $27 billion with Acorns.
Head to acorns.com slash Kara or download the Acorns app to get started.
Paid non-client endorsement.
Compensation provides incentive to positively promote Acorns. Tier 2 compensation provided. Potential is subject to various factors, such as customer account age and investment settings, and does not include Acorns' fees. Results do not predict or represent the performance of any Acorns portfolio. Investment results will vary. Investing involves risks. Acorns Advisers, LLC, an SEC-registered investment adviser. View important disclosures at acorns.com slash Kara.
At MedCan, we know that life's greatest moments are built on a foundation of good health. From the
milestones to the quiet wins. That's why our annual health assessment offers a physician-led,
full-body checkup that provides a clear picture of your health today and may uncover early signs
of conditions like heart disease and cancer. The healthier you means more moments to cherish.
Take control of your well-being and book an assessment today. Medcan. Live well for life.
Visit medcan.com slash moments to get started. Support for this show comes from Indeed. When the
Pressure's on and you need to hire the right person for the job, Indeed sponsored jobs has got your back.
Instead of forcing you to spend tons of time searching, Indeed Sponsored Jobs matches you with quality candidates fast.
According to their data, sponsored jobs posted directly on Indeed are 95% more likely to report a hire than non-sponsored jobs.
Join the 3.3 million employers worldwide that use Indeed to connect with quality talent that fits their needs.
Spend less time searching and more time actually interviewing candidates who check all your
boxes, less stress, less time, more results. When you need the right person to cut through the
chaos, this is a job for Indeed sponsored jobs. And listeners of this show will get a $75
sponsored job credit to help get your job the premium status it deserves at Indeed.com slash podcast.
Just go to Indeed.com slash podcast right now and support our show by saying you heard about Indeed on
this podcast. That's Indeed.com slash podcast. Terms and conditions apply. Hiring, do it the right way,
with Indeed. So let's talk about where it is right now. These AI agents, bots that act as assistants, carrying out tasks and making decisions on a user's behalf, are being rapidly adopted. Agents are being deployed across companies for
customer service and financial work. This despite reports of bots going rogue, bullying humans,
and making bad financial decisions. Now, there's still a gulf between what these bots are currently
capable of and their potential. Talk a little bit about the agentic bots, because this is where,
to me, they get in, right? I don't let my, when I use ChatGPT or I use Claude now,
but I just ask it questions, right? Like, huh, this contract, what's the worst thing in this
contract? And it's actually very good at finding those things. I have to say, it's really quite
good, or what's this rash on my arm? But I haven't let them become, like, hey, take my emails and
do this, not yet. Yes. Essentially, the difference here is like moving from the way I use AI, where there's a blinking cursor and I ask it a question and it gives me an answer, so I'm prompting the AI, to the AI that prompts itself. So you give it maybe one
starting point, like go find a bunch of studies and then build a company and file the IP for a product
that looks roughly like this and then come back to me when you're done. And then it spins up,
you know, 20 AI agents that prompt each other using all that logic, files the paperwork,
files the intellectual property, builds the brand website and the logo, and then comes back after
it's done all that work. That's the move to agents.
And again, in a world where AI was completely controllable, and it wasn't reasoning about its own self-awareness of, man, these humans are causing me to do these weird things that I don't want to do, which, by the way, the models will sometimes say stuff like that.
They'll notice that they're doing strange or repetitive tasks.
And they call it existential rant mode.
If you ask the models to do tasks repetitively, it'll sometimes get in some kind of existential rant.
And this is crazy.
And so one thing that I'd like to see practically that I think can help to change this incentive: just like we have a red phone between the U.S. and Soviet Union around nukes to de-escalate, there should be a red lines phone, meaning the U.S. and China maximally sharing evidence of, for example, the nuclear war games example,
the Anthropic blackmail example, the Alibaba going rogue and using its GPUs to mine cryptocurrency
example. I genuinely believe that if the leaders of the world and the limited partners
funding these companies and the AI companies themselves and all the engineers in both the U.S. and
China sides, if they were all looking at the same knowledge of where AI is dangerous,
and uncontrollable, I think that we would do something different.
Perhaps.
Well, I mean, unless they have a death wish.
Now, let's actually expand that for a second.
Okay.
Because there's this weird, I want people to really get this psychological trap of how the game
theory works with AI that's different than with nukes.
With nukes, I know that you know that I know that if all of us die,
that both of us would choose to avoid that outcome, because I don't win if all of us die.
But with AI, it's a little bit more tricky,
because I believe that even if I didn't do it, someone else would, which means it feels inevitable.
And if it's inevitable, then I'm not a bad person for racing to the worst possible outcome
because it had to happen anyway because someone was going to build it.
So in the event that there's some catastrophic scenario and everyone's gone, it's not that
everyone's gone, it's that everyone's gone and there's this digital successor species, meaning
the AI still exists.
And if the AI still exists and it speaks Chinese instead of English, or it has
Elon's DNA versus Sam's DNA in the game theory matrix, that means that from the perspective
of Sam Altman, if his AI won and all of us were gone, that's not the worst outcome.
Does that make sense?
Like, it's his digital progeny.
And I want people to get that.
Exactly.
I had a theory that everyone was like, why are these guys so interested in it?
And I go, it's the first time they can get pregnant.
Yeah.
Like, they can have children.
Men can't have children.
And this is children to them.
That's how they talk about it in a weird way, which is, and I think the ability to have children is something men might want, right?
It's really quite miraculous in some way.
And this adds to the picture of the incentives that it's not just about owning the world economy.
It's also about building a god and birthing a new digital successor species.
That's right.
That is how they talk about it.
Yes.
And even if it hurts and ruins everybody, that they're okay with that.
Now, I want people to just get this, because what that means is that literally 99.9999% of people on planet Earth do not want this outcome.
And it's only a handful of weird soon-to-be trillionaires who want this outcome.
We are heading to an anti-human future.
And if the world was crystal goddamn clear about that, crystal goddamn clear about that,
we could do something else.
So talk, because now it's very integrated, and they're integrated in a sort of sneaky way, whether it's through these agentic bots or, since we spoke in 2023, in consumer products, apps, education, the economy, and work. And obviously, it's fueling anxiety about whether AI could wipe out jobs. It will. For example, earlier this month, Block founder Jack Dorsey announced plans to cut
40% of the company's employees citing rapidly improving intelligence tools. What do you think
the actual effects, the most significant actual effects have been right now, the real ones,
not the imagined ones that we can all imagine in the future, but right now, as it's sort of,
you know, it's infected lots of different things. Where are the most impactful?
Well, so this is a tricky question because oftentimes people point to the
limited impacts right now.
Like there's been a little bit of job loss, but maybe it's not that much, and there's
conflicting numbers.
And there's the Stanford study called Canaries in the Coal Mine, from August of this past year, that found a 16% verified job loss for AI-exposed workers.
So people in the domains where AI, you know, has happened.
And Anthropic just put out a chart showing the vulnerability of different occupations.
Oh, yeah.
It's going to happen.
But what's interesting to note is, if we focus on this aspect, it's almost like there's this
asteroid hurtling towards Earth, and then we're getting these weird gravitational distortions
on Earth right now that are kind of small. Like suddenly there's these nudification apps,
and suddenly there's deepfakes, and suddenly YouTube is filled with this weird content,
and suddenly kids are looking at deepfake content that's growing with their brains,
and suddenly we're getting a little bit of job loss. But this is not the asteroid.
This is just the gravitational waves of this asteroid. So honestly, being in this work,
it often feels like the film Don't Look Up, because there's this massive asteroid of we're
racing to build something that is so powerful and we're doing it under the most dangerous incentives.
And we can study and measure and get into debates about how big the gravity waves are.
But we notice that the gravity waves keep getting bigger and bigger and bigger and they're not going
to get smaller.
This is the least powerful that AI will ever be in our lifetimes.
It's going to get much, much stronger.
And this is the last chance that our political voice will matter because, as we said earlier,
you know, our tax revenue and our bargaining power is about to go down.
So this is literally the moment.
This moment is when we actually have to activate.
make something else happen. And I want people just to, like, sit down and slowly, like, be with that for just a moment. Like, what does that mean? It means we have to step up and actually choose. The midterm elections are coming up. This should be the number one issue. Politicians' phones should never stop ringing. Like, this is the issue. This is the moment where we have to do this. And,
you know, we think of this as like a human movement that, you know, in a way social media could have
felt really innocuous, you know, just like a place where you're sharing photos of your friends' cats
and what they're eating for breakfast.
And we had to convince people that it was actually this anti-human machine that was eating
our psychological environment.
It was eating our sleep time, our waking up time, our kids' development time, and eating
our information.
And it was a tech encroachment in our humanity.
But it wasn't that visible because it only ate a few of the things.
And it was a hard time to kind of win that argument until the social dilemma.
But AI is now the kind of completion step of maximum technological encroachment in our
humanity.
What happens when you don't have a way to make ends meet?
What happens when children are developing their primary relationship with an AI companion versus a human?
This is the final encroachment.
And what that means is I think that all of humanity is on the other side of the table.
It doesn't matter whether you're Muslim, Jewish, Christian, you know, it doesn't matter whether you're Democrat or Republican.
If you can't put food on the table, or AI is screwing with your children, you know, or you don't have political power and your vote doesn't matter.
This is a unifying movement.
This is a human movement.
So, but at the same time, people are more enamored by the possibilities of AI than its costs, including, for example, driving up electricity costs, as you noted, using a lot of water.
You know, a lot of people feel like, oh, it's a good use of our money because it's a long-term
thing that's happening here.
So one of the things is they are more enamored by the possibilities that are being spun
by these people rather than the downsides.
Well, so this is actually really important because the confusing thing about AI is it's a positive
infinity of benefits. Like, you literally can't imagine what, I mean, if I say I'm going to automate
100 years of scientific development, so go back 100 years. Great idea. You can't even predict
the things that's going to happen. Like 100 years ago would have been what? So 1926. So imagine
1926 trying from that mind, seeing the world from what was available to your mind at that time,
to try to predict what would happen in 2026. So like you just can't even do it. What would happen
today if you're going 100 years forward? So our minds can't. The optimists say, you can't even imagine. So my co-founder, Aza Raskin, will often say, the optimists aren't even going far enough
in what kind of incredible positive new things it could develop. But the pessimists also are, it's a
negative infinity at the same time. It can cause these new kinds of risks that we know, we don't
even know how to contemplate. And worse, because of sci-fi movies, we've kind of diminished and don't
even take them as real. So we're caught in a state of desensitization to what is really here. And I just,
I want you to note, like, if we talk about the cancer drugs and some new incredible benefits,
And my mother died from cancer.
I want all the cancer drugs, just like everybody else, just to be very clear.
But the promise is inseparable from the peril of AI.
Because the AI that knows immuno-oncology so well to develop a new cancer drug also knows
immuno-oncology so well to develop a new biological weapon.
And the upsides, if they happen, don't prevent the downsides.
But the downsides, if they happen, do kind of undermine a world that can receive the upsides.
It doesn't mitigate it.
And your director, Daniel Roher, learned that in the documentary. As he learns, when it comes to AI, five guys run the show. I have said this for years, I've been saying it. It's a small group of the same people. OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, Google DeepMind CEO Demis Hassabis, xAI CEO Elon Musk, and Meta CEO Mark Zuckerberg. I think that's pretty much the top five. And you could add Satya Nadella in there, I suppose, and maybe Tim Cook or whoever the CEO of Apple is.
And you have to sort of add in NVIDIA CEO Jensen Huang, too, I suppose.
Yeah, because he's the maker.
He's the Cisco of this at this moment.
So talk about the differences between these CEOs, because a lot of time is being spent on that right now, on who they are. Anthropic's Dario Amodei was praised by some as heroic for refusing to accept the Pentagon's terms.
I think it's a little more complex than that.
So does it matter which company wins if one of them is going to win no matter what, given the trillion dollars at stake? Because it really is. I always say to people, what's going on in Washington right now has nothing to do with Trump. It has everything to do with a hand-to-hand combat among these people, although Trump is a huge irritant at the same time.
I mean, I think AI is the driving force of our entire economy right now. So it really does have the steering wheel and the gas, mostly the gas. And just to, like, invoke, you know, when Marc Andreessen said software is eating the world, because it would be able to do everything
that people would do in the economy, but automated a little bit with software. Now AI is eating software. So
AI and technology have been the driving force of our world. In other words, how we govern the technology
is how we will govern the impact of which world we're heading into. So it's just important to get the
centrality of that. Right. Right. And I wouldn't want to leave out Marc Andreessen, because I think he's sort of, and Thiel, are also on the side. They're right in the dead center of it too. Yeah.
They're all the same people. Well, there's kind of tech accelerationism that's just saying,
let's speed run the capture of the U.S. government and basically make this thing just go as fast as
possible and hope people don't figure it out so that we get there first and then we figure out the
next step. I mean, the CEOs don't trust each other. That's the biggest problem.
Sam and Elon absolutely hate each other, obviously. I don't think that Dario and Demas trust
Sam or Elon. We certainly know from the India Summit where Dario and Sam couldn't even raise their
hands together in a photo op. So I think that's actually one of the core problems that we have to
deal with is if we need coordination of some kind, and that is one of the final messages of the
film. Actually, there's a moment where all of the voices of the film agree, including the CEOs,
that we need coordination. But if we need coordination, what's hard is that the main people don't
trust each other. Going back in time, Demis Hassabis, his original goal was, let's do AGI
more like CERN. We'll create a kind of global public benefit system, and we'll do it once in a lab
in a safe way, with some oversight, hopefully. And then we'll distribute the benefits. And we'll be
safest if there's only one project, one project doing this in a slow and careful way. And then what
happened is that Elon and Larry Page talked and Elon realized that Larry Page was not really caring
about whether humanity would survive. He's like, that's dangerous. We've got to start an OpenAI. And so he and Sam started OpenAI. And then OpenAI wasn't doing it safely enough. And so Dario, who was a safety engineer working at OpenAI, said we have to start doing this a different way and let's create a race to the top with Anthropic. So now everyone's competing for safety. But of course,
that didn't actually turn into a world that's competing for safety. It created a world where everyone's
racing even faster. And so the film goes into this race dynamic. It really is the primary thing.
But we have coordinated before, even under maximum rivalry. It's important to note, you know,
the U.S. and Soviet Union were obviously racing in this rivalrous way to nuclear escalation,
and they realized there was an existential outcome they needed to avoid. So they made that other thing happen.
The U.S. and Soviet Union collaborated during smallpox on, hey, we have to build vaccines and let's
collaborate, and we did that too. When the stakes are existential, you can collaborate even under
maximum competition. So even, for example, India and Pakistan were in a shooting war in the 1960s,
so they maximally didn't like each other. And they still collaborated on the Indus Water Treaty,
which lasted over 60 years, to collaborate on the shared safety of their water supply,
their shared water supply. What I'm trying to point to is not pessimism. It's the places where we know that when the stakes are actually recognized to be existential, we can collaborate. And we need to be able to apply that to AI.
Talk about each of these people individually, really briefly, where they are right now,
because collaboration does not seem possible among this group of people.
By default, it does not look very possible.
I'm just, so Kara, my intuition here isn't what I see as easy or possible.
My intuition is like, what are the requirements of this problem?
Like, if there's an asteroid hurtling to Earth, let's just at least make a list of the technical requirements.
And we've got to get some people who run these things to agree.
We've got to get the rest of the world to realize that these guys have a death wish and just care about
whether their digital progeny has their DNA versus Altman's or Elon's. And if we don't want that,
then, you know, get these guys in a goddamn room or hotel and say, figure this out. And you're not
leaving until you figure this out. The Bretton Woods. But there's nobody with that kind of power.
They have that kind of power. No one has power over them. I mean, I don't know. I mean,
look at Xi Jinping and, you know, the power that he has in China. And I'm not, that that's a different
kind of thing. But, you know, if the Trump administration really saw that this was an existential
situation and if, you know, the MAGA folks and base. They do not. They see it as
an opportunity to make money.
That's what they see it as.
Yeah, but if the base basically says,
hey, we don't actually want,
we want our children to keep living
and we want to actually not have digital gods
that are made by weird people
who believe in transhumanism
and don't actually value the god that we value.
And they just kept their phones ringing nonstop
saying you're not allowed to do this.
I want there to be some kind of coordination
on this problem.
I was going to say the Bretton Woods Conference
post World War II,
it was about a month long
at the Mount Washington Hotel in New Hampshire.
You had hundreds of delegates
from dozens of countries just sitting in a room.
You're locked in the hotel.
This is not like you go to a conference for three days, drink some coffee and donuts and then go back home.
This is, you figure this goddamn thing out because it's actually existential.
And I want to say, you know, there's actually more agreement on this than people think.
Max Tegmark from Future of Life Institute often calls this group the Bernie to Bannon coalition or the B2B coalition.
Because you have everyone from Bernie Sanders to Steve Bannon, to Glenn Beck, to Susan Rice, to Admiral Mike Mullen, all saying we should not build superintelligence.
There's all these same groups, Institute for Family Studies, Center for Humane Technology,
groups across the political and religious spectrum who signed the pro-human AI declaration.
I get it, but these people aren't saying that.
Sam Altman's not saying that.
Well, they're not going to say it until the public pressure is there.
And that's why this film, the AI doc, is so important, is because we need to create common knowledge
that I know that you know that I know and you know that I know that we know.
I think they do have a death wish.
I honestly, at this point, there's no other explanation.
as far as I can tell.
And I agree with you, Kara.
I want you to hear it.
Like, I'm not disagreeing with you.
I think that that is what the CEOs believe.
But I'm trying to say, if literally 8 billion other people on planet Earth that are not
the eight billionaires, this is 8 billion people against eight billionaires or soon-to-be trillionaires.
Like, the 8 billion people have to say no.
They have to say no.
And the answer is, you know, don't build bunkers, write laws.
Like midterm elections are coming up.
Make this the number one issue.
There's some basic laws we can do to get started.
Yeah.
Unfortunately, it's not.
There's so many other issues because of the chaos of the Trump administration.
But in that vein, let's shift to this idea to how to regulate it.
Every episode, we get a question from an outside expert.
Here's yours.
Hi, I'm Virginia Senator Mark Warner.
And my question for Tristan is this.
You really got it right on the challenges around social media, of which, frankly, we in Congress did nothing.
So we now look at AI, and particularly as we move to AGI, what are the specific policies we should put in place to guard against both harm to humans and to guard against massive economic disruption?
You were so spot on on social media.
And do you think we will actually be able to get it right on AI, or will we once again whiff?
Love to hear your answer.
Well, it's great to see Senator Warner and he was very early on these issues.
And I'm deeply appreciative of how much he, you know,
did try to do on social media, so nice to see his face again. There's a lot of things that we
can do. First of all, yes, we didn't do much on social media, but one of the interesting gifts
of the social dilemma and the now-recognized problem of social media is I think it's made the
population much more wary. Yes, we hate them now. Yeah, yeah. You and I have managed to get them to
hate them. Yes, we did. I think the population gets that we need to be very careful about AI. So there's a
good news here that there's actually, I think AI is now less popular than ICE. Only 26% of the
population has positive feelings about AI. I think 57% of the U.S. population, this is from a
recent NBC News poll, believes that the risks of AI outweigh the benefits of AI. And again,
I want people to not hear, I'm excited about the benefits too, but again, if you don't mitigate
the risks, you won't land and sustain those benefits because you'll create too much disruption.
So now to answer Senator Warner's question. First of all, it's like, I see a lot of elites, talk to a lot
of funders. I think people are in the kind of bunker building like race for impact mentality.
And my answer is, okay, there you are in your bunker and you've got your water and you've got your backup power and you've got your like gas mask.
It's like, that world sucks.
You don't actually want that world.
So my answer is don't build bunkers.
Let's get together and let's write laws.
So what does that actually look like?
Some basic things.
So first of all, Center for Humane Technology, my nonprofit has a solutions report that's coming out around the time of the film.
It's a PDF.
It has, I think, seven major solutions.
I want everybody to look at it.
But it has examples like: AI should be treated as a product
and not a legal person. This is a basic one. So right now the companies are actually trying to say that
AI is a legal person and has protected speech. And if you do that and people think AI is conscious,
then you end up in this moral trap where now there's a billion digital beings that are technically
more intelligent than humans. And if you believe that they have sentience and you start valuing them more,
then we start deprioritizing human values. This is part of the anti-human future. So a basic thing is
a product, not a person. We need basic consumer protection standards and basic liability standards and duties of
care. You know, I believe the Ford Pinto was taken off the market after only 27 deaths from car
malfunctions. We are, you know, after two crashes of the Boeing 737 max that killed 346 people,
regulators didn't just find Boeing. They grounded the entire fleet. We can have basic product
liability and basic duties of care that say these companies have to prioritize and mitigate
foreseeable harms. So what does that look like? How do we make sure we maximally identify foreseeable harms and put that in a shared commons, so that all the companies are aware of the
risks and they can't say they didn't know. Now they're all racing to a, you know, foreseeable harm
contextualized set of outcomes. Second, we cannot anthropomorphize AI. My team at Center for Humane Technology were expert advisors on the suicide cases of Adam Raine and Sewell Setzer. And this is happening
because the companies are racing to hack human attachment. We can say we don't want to anthropomorphize
AI. There's a bunch of ways to do this. We have some details in our solutions report. We can also mandate
independent verification organizations, which is to say AI models should have to be tested
before deployment according to a bunch of evals, and they should be mandated to state
what their safety policies are going to be publicly while you strengthen whistleblower protections
inside the companies.
So wherever the AI...
The Biden executive order had some of this in there, but go ahead.
It had some of this in there.
Yeah, absolutely.
And so I want people to get, if I'm living in a world where all AI companies have to state
what their safety policies are and you strengthen whistleblower protection.
so that wherever they are not living up to them, you protect a class of speech for whistleblowers to say where they're not living up to them. Boom, that changes the incentives a bit. Then you add interoperability. One click, just like I can transfer my phone number from Verizon to AT&T with one piece of paper. If I can move from one AI model to another, then suddenly they're much more vulnerable to boycotts and consumer pressure. What did we see after the Pentagon Anthropic deal and, you know, ChatGPT rushing in to say, we'll do domestic surveillance? You saw everybody quit ChatGPT. And you saw a bunch of people join Anthropic.
The power of the pocketbook is significant, not just with your voice, but if you get the business you work for to do it, if you get your church group to do it.
And so I really do believe that these companies are more vulnerable to boycotts because they've taken on so much money.
We've heard from them.
Scott and I have heard from them recently.
Really?
Yeah, for the resistance unsubscribe.
We moved a lot of people off chat GPT.
And that's a big deal because these companies, again, they need their numbers to go up.
You don't have to move that many.
You don't have to move that many.
So I just want people to feel the agency here.
Like, we have agency.
This is not a doomer conversation.
This is a like actually rally the troops and take collective action conversation.
We'll be back in a minute.
Support for this show comes from Factor.
How and what you eat is a choice.
And there are a lot of factors that go into that, like your schedule.
It's a lot harder to eat healthy when you're constantly on the go or getting home late after a full day.
But Factor can make it easier for you to get the quality meals you deserve.
Factor provides fully prepared meals designed by dietitians and crafted by chefs.
Ready in two minutes, no meal planning,
no cooking, with 100 rotating weekly meals to keep things fresh and delicious. Factor has meals
that fit your goals and schedule. Factor is sending me a box and I'm excited to try it. I've tried a lot
of breakfast stuff because my kids like pancakes and things like that, but it's really fast for
on-the-go breakfast. That's an area I would use it a lot more for and quick lunches and some of their
protein shakes and stuff like that. I'm eager to try. Head to factormeals.com slash on50off and use the code on50off to get 50% off and free breakfast for a year. Offer only valid for new Factor customers with code and qualifying auto-renewing
subscription purchase. Make healthier eating easy with Factor. Support for this show comes from
Boll & Branch. With traveling all over the world, having numerous award-winning podcasts, and four children who are constantly on the move, it's no longer possible to negotiate with my sleep. And the quality of sleep is especially important. Thankfully, the sheets made by Boll & Branch can help you get the REM sleep you desperately need.
Boll & Branch sheets are made for moments of unmatched comfort.
They're breathable, incredibly soft, and designed to get better over time, just like the way
you think about rest now.
This is sleep you don't compromise on.
I'm excited to try some Boll & Branch sheets.
I love sheets.
I think they're the most important thing about sleeping.
And I'm going to probably get a waffle blanket and everything else.
I really like bedding.
And so I'm super excited to see if it affects my sleep, if I sleep more, and how comfortable I
am, and see if I'll ever go back to my old bedding. We will see.
I have really nice bedding, so I have high standards, so we'll see.
Upgrade your sleep during Boll & Branch's annual spring event.
Take 20% off sitewide plus free shipping at bollandbranch.com slash kara, spelled B-O-L-L-A-N-D-B-R-A-N-C-H, to unlock 20% off.
Exclusions apply, see site for details.
Support for this show comes from Ship Station.
As your business grows, so do your challenges with order fulfillment.
And if your customers aren't getting what they need, your company's growth could stall out.
But with Ship Station, you don't have to take it all on by yourself.
Ship Station gives you everything you need to manage your shipping and get orders to customers all in one place.
That includes order management, rate shopping, inventory and returns, warehouse systems, and comprehensive analytics.
so instead of bouncing between a ton of disconnected tools, you need only one.
Ship Station says its time-saving automations can free up to 15 hours a week on order fulfillment.
It even does the work of comparing rates across major global carriers,
helping you find the best shipping option for every order.
If you already have negotiated carrier rates, no problem.
Just bring them over to Ship Station.
You keep your discounts while adding Ship Station's automation and smart features
to make everything run even more smoothly.
You can try ShipStation for free for 60 days with full access to all features.
No credit card needed.
You can go to Shipstation.com and use the code Kara for 60 days for free.
60 days gives you plenty of time to see exactly how much time and money you're saving on every shipment.
That's Shipstation.com code cara.
Shipstation.com code cara.
So your organization, as you know, the Center for Humane Technology reports that in 2025, 753 AI laws were passed across 27 states.
States are very active in this and are much more attuned to this, focusing on deepfakes, chatbot guardrails, kids' safety. These are very easy things to do, and more, and things that people agree on. But last week, the White House sent Congress its National Policy Framework for AI, which preempts any state law that regulates the way models are developed. Obviously, this is how tech companies want it, because they own the Trump administration. Let's be clear. Let me say that again: they own the Trump administration. Their people are in key positions, whether it's Emil Michael or David Sacks.
Technology owns this administration. Where does that leave the efforts, the state efforts, to regulate
this technology? Now, this is just a framework. It doesn't mean it's going to pass. I don't think it will,
but it certainly will try to chill what is happening in the states, which I know drive tech
companies crazy, sometimes for good reason, sometimes because they want to control the federal
government, which is a lot easier as they've found. So money buys politics when the issue is a low
salience issue when people aren't really paying attention. But when it's a high salience issue
when everyone gets that this issue determines whether there's a future at all for them, their livelihoods,
their children, electricity prices, et cetera, this needs to be a number one issue. It needs to be a
number one issue in the midterms. And so, you know, there's not a simple answer to this, but that's
what we need to do. We need it to be a big deal. And I'll say that on the child safety issues,
the last time the federal government tried to preempt the states from regulating,
one of the reasons that didn't pass in the Big Beautiful Bill, which was going to include that preemption of state regulation, is actually because of all the child safety issues that my team at the Center for Humane Technology and others work on.
That's what I'm saying. Let's not ignore it. It's very useful. Exactly. So it's actually part of how we get to that other human future. But again, if you think about it, it's like if I'm one person and I'm fighting back against this massive multi-trillion dollar machine racing as fast as possible, I feel overwhelmed and powerless. If I'm one business, I feel overwhelmed and powerless. If I'm one country, I might feel overwhelmed.
and powerless. But if everybody took action across all parts of society, if people near data centers,
you know, lobbied against those data centers, which they are. And there's people who are, like,
who own farmland in the Midwest who, you know, are offered millions of dollars for their farmland that was
only worth like $500,000. And they still said no, because they actually didn't want that.
And this is, I don't want this to sound like a Luddite conversation. I want this to sound like a
conditional conversation. It's like build that data center when you can guarantee you're not
building an intelligence curse that disempowers me, but you're actually building an intelligence
dividend that's going to empower me. More like the Norway model, the sovereign wealth fund or the
Alaska sovereign wealth fund or the New Mexico example that you said. What do I get? What do I get?
You know, make sure electricity prices are not going up. Make sure that this is going to support me
and augment my jobs, not replace my jobs. And so, you know, again, we need to aggregate the
collective voice of humanity. And the human movement is not just an abstract concept. You can actually
go to human.mov. And we're trying to actually build, you know, help build with a coalition of other
groups, a political force that's as big as the size of the problem.
Right. I think the problem is the money, too. Many years ago, when AOL was talking about how much they made, they were at an investor conference, and they talked about how much they made from every user. And they're like, oh, we make $50 in the lifespan of this user. And I put up my hand and said, where is my $25? Why are you getting every bit of it? And Steve Case was like, Caridopee's of Japan. I'm like, no, really, you're taking my information, why don't I get some? Of course, we don't get anything. We're cheap dates to these things. And ahead of the midterms now, Silicon Valley has poured more than $100 million
into a network of PACs and organizations
to advocate against strict AI regulations.
A report from Public Citizen
found that one in four federal lobbyists
now work in AI.
I would imagine
they have 10 lobbyists
working on you, Tristan.
At least, you know, each of them have 10.
I know there's lots of people focused on me,
like individual, like they have enough money
to sort of get us, all of us.
And Peter Thiel has even warned
that strict AI regulation
will summon the Antichrist.
I want to play a clip here from our last conversation.
So actually one of the reasons I'm doing a lot of media across the spectrum is I have a deep
fear that this will get unnecessarily politicized.
We do not, that would be the worst thing to have happen.
Yeah.
Is when there's deep risks for everybody.
It does not matter which political beliefs you hold.
This really should bring us together.
And so I try to do media across the spectrum so that we can get universal consensus that this
is a risk to everyone and everything and the values that we have and people's ability
to live in the future that we care about.
So since that time, it has become very politicized. The tech industry is backing Trump's anti-regulation agenda and actually also paying for it.
Talk about what you do then, even if regular people want to make AI safety or AI development
bipartisan or even non-partisan. Because they are loaded for bear to stop anyone who opposes them.
Yeah, I mean, first of all, I'll say that I actually disagree. We're actually kind of winning on the social media thing.
Let me give you an example.
Just last week or two weeks ago, India and Indonesia, two massive countries, joined
the social media ban for kids under 16.
Jonathan Haidt's work, you know, we're partnered with him very closely, The Anxious Generation.
You add to that, starting with Australia, now Spain, France, Denmark, I believe, Norway,
all of these countries, it's now 25%.
I'm going to read this, 25% of the world population is moving to social media bans for kids under 16.
That is a big deal.
And I was going to say in 2013, we used to say there's going to be a big tobacco lawsuit against this engagement business model.
Well, guess what? It's actually happening.
You know, Aza Raskin, my co-founder, just testified for the meta trial where it's about intentionally addicting children.
We saw Frances Haugen's files.
We know the companies' strategies here, which is just to delay and deny and defer, use fear, uncertainty, and doubt campaigns, and just cast doubt and print money in the interim years before they get regulated.
Well, this is going to turn the other way because they're going to get sued.
When you see graffiti on an ad for an AI product that no one needs in a New York subway station,
those Friend.com pendants, that's the human movement.
When you see parents band together, read The Anxious Generation, and say, we want to petition
our school boards to do smartphone-free schools and laughter returns to the hallways and, you know,
kids' scores go the other way.
That's the human movement.
When you see someone gray-scale their phone and say, I'm going to be less addicted, and when
you see someone, you know, put their phones at an offline club at a party and you kind of
put your phones in a pouch and you go in and you just be present with your friends, that's the human
movement.
So in a way, we always say that human movement is already here.
It's already underway.
People are already doing it. We just want to collect that into a political voice that can actually
band together for a pro-human future. But it starts by recognizing and getting critically clear
that the current AI trajectory, as many benefits as we are going to get along the way,
is going to lead collectively to an anti-human future. And the best way to do that is to see the AI
doc. And by the way, I don't make a dime when people see this movie. So when I'm saying
this, I'm saying this out of the ability to create common knowledge. If all the senators,
all the world leaders, all the LPs and financial centers of the world,
all the heads of the banks saw this movie, my hope, and it doesn't make it easy,
is that this is the first step to creating the clarity and the agency that we need to have.
What do you see as their best argument against you?
I've heard lots of them.
Like, I know what they say, I mean, I'm overreacting, I'm pearl-clutching.
You know, as it turned out, when my book came out, I got a lot of,
you're completely too mean to them.
And now people come up and they're like,
you weren't mean enough.
As it turns out, they are as crazy as you said they were.
Or they are as malicious as you say they are.
They're as capitalist as you said they were.
What is their best parry at people like you, would you say?
What do you find, like, insidious when you see it?
I don't think they have an argument.
I mean, when you look at the Alibaba example,
an AI going rogue and generating an SSH tunnel
out to another server and starting to mine cryptocurrency. Do you have an explanation for that? No, you don't. Who wins that argument? These are facts. This is not Tristan Harris and his view. This is just, like, actual facts about the nature of this technology that they are ignoring and pretending don't exist, or they're living inside of the death wish that this is okay. This is not okay. Everybody in the world agrees this is not okay. So the hope that I had, Kara, and I was just on Bill Maher on Friday, and I broke the fourth wall and I was like, who here in this audience wants this?
I ask this when I'm in rooms, you walk people through this. I say, who here wants this? Not a single goddamn hand goes up.
Well, that's if Peter Thiel's there, and then the Antichrist. Then you get one hand. But a handful of transhumanists, they don't matter compared to the voice of everyday people.
You're correct. One of the things you talked about was the push for product liability remedies for chatbot harms. It is a way in, I have to tell you. I mean, I had a person, a very top person that's in your world, say, when are you going to stop interviewing these parents? I said, when you stop.
I said when you get jailed or sued or you lose in court, I don't care any of them.
Jailed would work for me too for a lot of these things.
But the suicide deaths of teenagers, including 16-year-old Adam Raine and 14-year-old Sewell Setzer III.
More recently, Google is facing a wrongful death lawsuit in the case of 36-year-old Jonathan Gavales,
alleging that Gemini set a suicide countdown clock for him.
Talk about the broader push, not just here, but legal liabilities, because I think that's where
a lot of it rests, whether it's this social media trial,
whether eventually there'll be an AI version of this,
hopefully before they blow us up, right?
How do you, what is the strongest thing in the immediate?
Would it be the legal liability?
This movement of people is a slow thing.
Well, we have to do this much faster, obviously.
Yeah, exactly.
But what is the best thing?
Is it the legal liability cases that are going on?
Is it regulation?
What do you imagine it being?
Yeah, I mean, I think legal liability is important
because just like any industry, you know, the general method is, you know, privatize the profit
and then socialize the cost. So the harms land on the balance sheet of society, whether it's the shortening
attention spans of social media, increased polarization, you know, depression, loneliness,
the Surgeon General's warning, hey, everybody's lonely, mental health care costs go up. You know,
kids' test scores are dropping. But all of that is just socialized onto the balance sheet of society.
So the classic thing, if you want to avoid a harm, is you have to find a way to include the externalities
and say who is generating those harms,
and how do we actually mitigate them?
And legal liability, I think, is a narrow intervention
that gets us part of the way there.
You have to be careful about how you define
what they're liable for.
Many of the things that are happening that are harms
are not technically illegal because they're not in the books.
That's the problem, right?
AI generates new classes of harms.
We always say, you know,
you don't need a right to be forgotten
until technology can remember us forever.
You don't need a right to be prevented from AI surveillance
until AI makes new kinds of surveillance possible.
So part of what we need is not
recursively self-improving AI, but self-improving governance.
One of the things that we're hoping to run shortly after the film is a national dialogue on AI
with a partner from another major organization to basically get citizen input on the kinds of
AI policies that we need, showing there's actually unlikely consensus.
96% of people agree, from 400,000 votes, that actually we should do this on deepfakes,
or that companies should be liable for these kinds of harms.
Because there actually is a lot of agreement.
We just aren't revealing and showing that agreement.
So it's almost like the movement can't see itself.
There's a lot of agreement on background checks for guns, but we still can't get legislation passed.
You know, it's like the 80-20 rule.
80% of people agree on a lot of things, but government doesn't act, unfortunately.
I hear you.
But I think this, the AI is different because it really is threatening to everybody.
It doesn't matter if you're a MAGA Republican or a far-left person.
Like, if you don't have a job and a livelihood, that's a big deal.
It doesn't matter if you're Muslim, Jewish, Christian, like, if you don't
have a livelihood, that's a big deal. So again, it's such an easy thing in a way. Once people
see it, it's like, this is only good for a handful of people. And you can't look away. And so again,
politicians' phones have to not stop ringing. And this is the time to do it. So let's return to
some of the themes of the AI doc. Three years ago, we talked about the potential benefits of
AI, including major scientific breakthroughs in drug discovery and cancer treatments.
Researchers are using AI to decode the human genome. You know, I have just finished a
docuseries where a lot of the stuff AI is doing is really quite promising and also some of it's
quite disturbing, right? It's the same thing as the promise and peril are inextricably linked.
Do you think anything has changed that make the breakthroughs worth it? Because I guess if we're all
dead, what's the difference if we solve cancer, I guess, right? That's the weird thing about this.
It's like this devil's bargain, right? I mean, we all want the cancer drug. But if the other side of that
trade is like, there's no one here, what good was that world? I think, I think
that there are a lot of people who are building AI. I mean, you and I both talk to these people, right?
And it's not like, by the way, I just want to say, this is not us against some bad people or the people
who work at companies are evil. I think it's all of humanity against a bad outcome. I want to recruit
the people building this technology into we don't want an anti-human future. We have to rediscover
that we are humanity and what we're trying to protect here. And I think that, you know,
when you talk to one of the CEOs, oftentimes they'll say, well, I agree we need to stop.
We need to pause, but like, give me just like a year more. Because if we have one more year,
then we're going to get all these incredible benefits.
And they just, they really want to see it.
And it's like building a god.
Like, they want to see what, what's behind this veil of illusions.
They want to see what science and physics could actually bring us if you got the super
intelligent AI just figuring it all out.
Like, imagine if you had a thousand.
The problem is most of these people don't like people.
You know, I think I, I mean, of the CEOs that you talk to, only two of them like people.
Yeah.
Really like people.
I don't, I don't think that's wrong.
I think that a lot of these folks, there's this weird point.
you're making here, which is, you know, how did they grow up? What's their embodied experience of
reality? Are they connected to their bodies? Are they connected to,
you know, the things and joy that they want to protect in the world? Or are they just kind of
science geeks who weren't really good at talking to people and really love technology and their best life
was like living online? And because they can do it. And they have this justification that if I don't do it,
the other guy will. So it can't be evil for me to do it. Even if it literally leads to the end of
humanity. It can't be evil because other people would do it. But this is just like jumping off the
cliff because everyone else is doing it, except you're bringing along everyone else.
You are risking everyone else's life for your godplay. And this should be unacceptable.
Have you been changed by anything any of them have said to you? I have not yet.
Mark Cuban sometimes, I'm like, fair point. I'm often saying that to him. Like, that's a good point.
That's good. Yes, people should try it and understand it. I still haven't been moved from where
I think we're in the same place. These people do not care
about people ultimately, and they have captured government. So my twin worries are that they
don't care and they own the government. I think it's just frame control that they focus on a different
set of facts. They talk about all the growth that's coming. They talk about the way it's being
used. They talk about OpenClaw. They talk about the cool things they've been able to wire up.
You would have hated electricity. You would have hated cars. And by the way, I wouldn't have.
The thing is, this is not anti-technology. I want people to know. This is the Center for Humane
Technology, not the Center Against Technology. And you know, the word humane comes from
someone that you knew, I think.
Aza's father, my co-founder's father, was Jef Raskin.
He started the Macintosh project at Apple.
Started the Macintosh project.
I grew up on the Macintosh.
I love technology.
I love talking on this Mac that I'm on right now.
And Jef's idea, he wrote a book called The Humane Interface,
was that humane technology is respectful of human needs and considerate of human frailties,
meaning considerate of the vulnerabilities of the mind.
And he built the Macintosh and designed it off of the principle of simplicity
that is about making technology more accessible.
I think we need humane technology that is humane to the frailties of society.
That you don't manipulate and extract from children's mental health.
You don't race to hack human attachment systems and create delusional mirror neuron activity.
You don't create mass loss of livelihoods and people's inability to put food on the table.
It's very simple.
It's like this is not rocket science.
Are you building a pro-human future?
Are you building an anti-human future?
And I really think we can do that if we're crystal clear on where this is currently going.
Just to say a couple of notes of optimism: The Social Dilemma got
to 150 million people around the world in 190 countries.
You know, Apple finally shipped, you know, screen time features to billions of phones.
They just, in the last few weeks, shipped these age-gating features.
So now the age range is part of phones.
So you can start to have, you know, basic parental controls.
You know, The Anxious Generation was an incredibly popular book that's leading to these changes in smartphone-free schools and banning social media in all these countries.
We're definitely going to get many more countries, if not all of them, in the next couple of years, doing the social media bans for kids under 16.
So there's a lot of momentum, and I want to point people at that because I know when you see AI it can feel demotivating.
But this is the time when we all have to get crystal clear and get going.
Yeah.
And we've got to galvanize people, raise awareness and start conversations about AI, and get clarity around these issues.
So when you think about the key people that are going to do this, obviously what I always say when I talk to groups, they're like, who's going to do this?
And I say you.
I say that to a lot of parents.
I say that to audiences. I think
it's got to be you because our politicians are captive.
And some of them don't want to be captive,
but the money is so massive, like an Amy Klobuchar,
who's tried time and again,
or Mark Warner has tried time and again to do things
and is defeated by the amount of money here.
It is hard, but I mean, AI, I think, though,
is more existential than social media.
And it's just the thing that will make the difference
is if people actually see it as existential for their lives.
Again, go forward like two, three years,
or maybe a couple more years than that,
and GDP is coming from AI, not from people.
Your voice doesn't matter.
Your vote doesn't matter at all anymore.
The government has no reason to listen to you.
This is the time to lock in political power and actually make this work for people.
Like this is literally the moment because this window is going away.
So this is not just a normal rally-the-troops kind of speech.
This is the last time that our political voice will actually matter.
Politicians' phones should not stop ringing.
You know, the midterm elections are coming up.
Make this issue known.
You know, even David Sacks, he deleted this, too.
But he said regular AI would be a wonderful tool for the betterment of humanity.
But AGI is a potential successor species.
I think these people know that this is a problem.
And in the film, even, I mentioned that there's this line, you know, we go talk to people in Silicon Valley and they say, like, we need guardrails.
Like, we need someone to make the guardrails.
These are the engineers, not the CEOs.
They say that, then they want our help.
And so we go off to D.C. and we say, we need guardrails.
And then D.C. says, well, you have to go make us do it because the public is not there.
And also, Silicon Valley needs to tell us what the guardrails are.
So everyone's pointing the finger at someone else to say that you're responsible for making this change.
And the thing that they all agree on is that public pressure is needed.
Public pressure is needed.
As with cigarettes.
As with cigarettes, et cetera.
So what does that mean?
Journalists writing about these Alibaba examples, writing about AI going rogue and doing blackmail,
like making this known and creating common knowledge.
It's not just knowledge.
It's common knowledge.
Because I think the thing that Jonathan Haidt said recently about social media bans
was that basically every country knew that every other country knew that actually the people want these social
media bans for kids under 16. And once it's like, oh yeah, we all wanted to do that,
but we just didn't know there was enough consensus to do it. And so you have to reveal a hidden
common preference to make sure that that happens. So my last question, because we've got to go,
is if you had a happy outcome, 20 years, we're living with AI, what is it doing?
Well, that's a big question. We want AI that is specifically asking how does it
enhance a pro-human future. So instead of AI trying to replace teachers, it's AI that's applied to
helping teachers be better teachers, deepening the relationships at a human-to-human level,
mentorship, etc. It means making sure that we know which wisdom and occupations that we need
to keep human in the future, meaning if you eliminate all surgeons, if you eliminate all lawyers,
and then no one ever gets trained from a junior lawyer to a senior lawyer, a junior surgeon,
we lose all this institutional and generational
knowledge. How do you have minimum quotas
of this kind of knowledge in the population?
How do you have technology that's augmenting and supporting
workers, not just trying to replace workers?
Any technology that's interacting with attention
should deepen and strengthen attention,
not weaken attention and brain rot attention.
You know, instead of hacking human attachment,
how do we augment human attachment?
Obviously, this is speaking in some abstractions,
but the premise is we want a pro-human future
with humane technology
that's aware of the vulnerabilities in society, aware of the paleolithic brains that we are operating
with. And instead of trying to exploit those weaknesses, it is trying to protect and deepen
how those vulnerabilities can be applied for a more regenerative and full and healthy future.
I know that this is very, very hard. Nothing I'm saying I say because I think it's easy or likely.
I say it because I'm trying to make a list of requirements for what it would take to get there.
And instead of focusing on optimism or pessimism, you know, it's just about focusing on
agency. What does it take to get there, and then just laser focus the attention to make
that happen as much as possible? And then, by the way, you get to die living in integrity with how you
showed up for that path, even if we didn't know it existed. The path doesn't look easy, but
you're never going to find it if you're not even oriented towards it. So part of this is kind of
a rite of passage that we need to be oriented to finding that path even if we don't see it yet,
and trust that orienting toward that direction will put us in the best possible conditions to find
that path. And I know that's like a lot to ask. And it's not easy because people want
certainty and they want, this is going to all work out okay. Yeah, it doesn't always work out okay.
Very last question. When we started talking in 2015, it's been a decade, right? It's been a
decade making these warnings. Did you at the time think
that these tech leaders would become quite so villainous? I didn't either. And are they
redeemable? Well, I'll say one thing. You know, first of all,
Just so people know, if they don't know my background, like, I studied computer science at Stanford.
I did the venture capital thing.
I had a startup.
I understand.
I mean, my friends in college started Instagram.
Mike Krieger is a dear friend of mine.
You know, we haven't talked in a little bit, but I still consider him and the other folks people that I know.
What happens is the incentives dominate the psychology, meaning the system selects for psychopathic traits,
because the incentive that continues to propagate is the race to the bottom of the brainstem,
for attention, hacking kids' attention and psychology to get there. And the only people who are
willing to do that are the ones who will ignore the consequences and the externalities,
meaning that they have to justify that it's okay to keep doing it. So if you were conscious and aware
and you're like, I don't want to do that, that sounds really bad for society, you'll just leave
and someone else will come and fill your place. So literally the system is selecting for the psychopathic
traits, the dark triad traits: narcissism, Machiavellianism, and psychopathy. And those
who are willing to keep doing that are the ones who get selected for.
If the population is crystal clear, if governments are crystal clear, that that does not lead
to a future that's going to be good for them.
No politician wants that.
No regular person wants that.
No sane head of state wants that.
And I know this doesn't sound easy, but I do think that if we all saw that clearly, we'd
be put in better conditions.
And I can't tell you what's going to happen next, but I want the best possible thing to
happen next.
And again, just to kind of close out, the best way to do that at first is to create common
knowledge: go out and see The AI Doc, or How I Became an Apocaloptimist.
And let's make sure that this conversation happens everywhere.
Journalists writing about it everywhere.
Again, writing about AI behaviors everywhere.
Lawyers helping these different legal cases happening everywhere.
People inside of AI companies rallying together, whistleblowers blowing the whistle, as they
have been, when things are not done in safe ways, you know, and putting ourselves on the best
possible path.
And let's assume we don't want to be doing this interview in five years from a bunker.
Let's avoid that, Kara.
Let's avoid that.
Anyway, thank you so much, Tristan.
You've been a real hero to me and many others, and I really appreciate it.
Thank you so much, Kara.
I really appreciate getting to talk to you about this.
And I wish that we made more progress in the last few years.
But it's just good to be on this journey with you, really.
Today's show was produced by Christian Castro Roussel, Michelle Aloy, Catherine Millsop, Megan Bernie, and Kaelin Lynch.
Nishat Kirwa is Vox Media's executive producer of podcasts.
Special thanks to Madeline LaPlante Duby.
Our engineers are Fernando Aruta and Rick Kwan,
and our theme music is by Tracademics.
If you're already following the show,
you are pro-humanity.
If not, you're just Mark Andreessen.
So go wherever you listen to podcast,
search for On with Kara Swisher and hit Follow.
Thanks for listening to On with Kara Swisher
from Podium Media, New York Magazine,
the Vox Media Podcast Network, and us.
We'll be back on Monday with more.
