Pivot - The AI Dilemma with Tristan Harris – The Prof G Pod
Episode Date: December 23, 2025. Tristan Harris, former Google design ethicist and co-founder of the Center for Humane Technology, joins Scott Galloway to explain why children have become the front line of the AI crisis. They unpack the rise of AI companions, the collapse of teen mental health, the coming job shock, and how the U.S. and China are racing toward artificial general intelligence. Harris makes the case for age-gating, liability laws, and a global reset before intelligence becomes the most concentrated form of power in history. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Does it ever feel like you're a marketing professional just speaking into the void?
But with LinkedIn ads, you can know you're reaching the right decision makers, a network of 130 million of them, in fact.
You can even target buyers by job title, industry, company, seniority, skills, and did I say job title?
See how you can avoid the void and reach the right buyers with LinkedIn ads.
Spend $250 on your first campaign and get a free $250 credit for the next one.
Get started at LinkedIn.com slash campaign.
Terms and conditions apply.
Hi, everyone. This is Pivot from New York Magazine and the Vox Media Podcast Network. I'm Kara Swisher.
We're off for the holidays today, but we have something special for you. More Scott Galloway. I know that's what you wanted for Christmas, and instead we brought you coal.
On this episode of the Prof G Pod, Scott talked to Tristan Harris, former Google design ethicist and co-founder of the Center for Humane Technology, about why children have become the front line of the AI crisis. They unpacked the rise of AI companions, the collapse of teen mental health,
the coming job shock, and how the U.S. and China are racing towards artificial general
intelligence. Enjoy.
Episode 376 is the country code of Andorra. In 1976, 1976, actually 1978, the movie Grease
premiered. I once went to a therapist and said that I have these recurring dreams about being a
character in the movie Grease, to which she replied,
Tell me more.
You'll get it.
Gonna tone shit down here.
Go, go, go.
Welcome to the 376th episode of the Prof G Pod.
So I have been doing a deep dive around therapy,
and I wrote a No Mercy / No Malice post on it.
And basically, I have found, I'm getting served a lot of these TikTok therapists,
many, even most of whom are no longer actually practicing therapy.
They're on TikTok, and they give in to the algorithms,
and they post these really aggressive, kind of insulting titles,
being very disparaging about society and people and emotions.
And in sum, I don't think it's helping.
So when did therapy go from a thing people do to get better to this full-blown spiritual meme?
It's as if everyone online is a licensed guru because they learn three therapy buzzwords on TikTok
and now we're up for diagnosing tens or hundreds of thousands of strangers the way, I don't know,
a medieval priest diagnosed demons.
Everything today is trauma.
Everything's attachment style.
Your inner child work.
and God forbid you have a normal bad day.
Nope, it's a generational curse
that you need a subscription plan to fix.
And the way Therapy Speak is mutated,
people don't apologize anymore.
They honor your emotional experience.
They don't lie, they reframe reality.
It's like we're dealing with customer service representatives
for the human soul,
reading from a script written by a cult
that sells weighted blankets.
Some of the influencers that keep popping up in my feed,
genuinely act like healing is a competitive sport. Like, have you confronted yourself today? No,
Jessica, I barely confronted my fucking inbox. Relax. Not everything is a breakthrough. Some things are
just life. And the money, I'm a capitalist, they're capitalists, but they could at least be a little
bit more transparent about it. Therapy culture discovered capitalism and said, let's monetize
suffering like it's a subscription box. And also, let's become total
bitches to the algorithm; the more incendiary and less mental-health-professional we
become, the more money we'll make. There's always another course, another workbook, another
$400 retreat where you scream into a burlap pillow and call it transformation. At this point,
it's not self-help. It's emotional CrossFit with worse merchandise. Don't get me wrong.
Real therapy, I think, can be exceptionally helpful, even necessary. But that is not the same
as this modern pseudo-spiritual self-optimization cult.
Yeah, this whole thing needs fucking therapy.
In sum, I believe the rise of therapy culture
has turned a tool for meaningful change
into a comfort industry
that's making Americans sicker, weaker, and more divided.
We live in an era where disagreement is treated like trauma and emotional reactions are weaponized for political gain.
There's a narrative online that supplements may be, in fact, a pipeline to getting red-pilled.
Okay, maybe.
But if so, therapy culture is also a sinkhole of misinformation, manufactured fragility, and needless suffering.
Are you traumatized or just having a bad fucking day?
We'll be right back with our episode
with Tristan Harris, former Google design ethicist,
co-founder of the Center for Humane Technology.
Jesus Christ, the titles keep getting more and more virtuous.
And one of the main voices behind The Social Dilemma.
We discussed with Tristan social media and teen mental health,
the incentives behind rage and outrage online
and where AI is taking us.
Quick spoiler alert.
I bet it's not good.
I really enjoy Tristan.
He's a great communicator.
I think his heart is in the right place,
and he has been sounding the alarm for a long time
about our lizard brain and how big tech exploits it.
Anyways, here's our conversation with Tristan Harris.
Tristan, where does this podcast find you?
I am at home in the Bay Area of California right now.
All right, let's bust right into it.
So, Tristan, you're seen as one of the voices that sounded the alarm
kind of early and often regarding social media and big tech, long before the risks were taken
seriously. Layout why, what it is you think about AI, how the risks are different, and why you're
sort of, again, kind of sounding the alarm here. Sure. Well, I'm reminded, Scott, when you and I met in
Cannes, I think it was, in France, back in 2018, 2017, even. Wow, it's not long ago.
It was a long time ago. And, you know, I have been, so for people who don't know my background,
I was a design ethicist at Google. Before that, I was a tech entrepreneur. I had a tiny startup. It was
talent acquired by Google. So I knew the venture capital thing, knew the startup thing, had friends
who were, you know, were the cohort of people who started Instagram and were early employees at all
the social media companies. And so I came up in that milieu, in that cohort. And I say all that
because I was close to it. I really saw how human beings made decisions. I was probably one of the
first 100 users of Instagram. And I remember when Mike Krieger showed me the app at a
party and I was like, I'm not sure if this is going to be a big thing. And as you go forward,
what happened was I was on the Google bus and I saw everyone that I knew getting consumed by
these feeds and doom scrolling. And the original ethos that got so many people into the tech
industry and got me into the tech industry was about, you know, making technology that would actually
be the largest force for positive, you know, good and benefit in people's lives. And I saw that
the entirety of this social media, digital economy, Gmail, people just getting sucked into
technology, was all really behind it all was this arms race for attention. And if we didn't
acknowledge that, I basically saw in 2013 how this arms race for attention would obviously,
if you just let it run its course, create a more addicted, distracted, polarized, sexualized
society. And Scott, all of it happened. Everything that we predicted in 2013, all of it happened.
and it was like seeing a slow motion train wreck
because it was clear it was only going to get worse
you're only going to have more people fracking for attention
you know mining for shorter and shorter bite-sized clips
and this is way before TikTok way before any of the world that we have today
and so I want people to get that because I don't want you to think it's like
oh here's this person who thinks he's prescient it's you can actually
predict the future if you see the incentives that are at play
you of all people you know know this and talk about this
and so I think there's really important lessons
for how do we get ahead of all the problems with AI because we have the craziest incentives
governing the most powerful and inscrutable technology that we have ever invented. And so you would
think, again, that with the technology this powerful, you know, with nuclear weapons, you would want
to be releasing it with the most care and the most sort of safety testing and all of that,
and we're not doing that with AI. So let's speak specifically to the nuance and differences
between social media and AI. If you were going to do The Social Dilemma and produce it again,
call it The AI Dilemma, what specifically about the technology and the way AI interacts with
consumers poses additional but unique threats? Yeah, so AI is much more fundamental as a
problem than social media. But one framing that we used, and we actually did give a talk online
several years ago called The AI Dilemma in which we talk about social media as kind of humanity's
first contact with a narrow, misaligned, rogue AI
called the newsfeed, right?
This supercomputer pointed at your brain,
you swipe your finger, and it's just calculating
which tweet, which photo,
which video to throw
at the nervous system, eyeballs and eardrums,
of a human social primate.
And it does that with high-precision accuracy,
and it was misaligned with democracy.
It was misaligned with kids' mental health.
It was misaligned with people's other relationships and community.
And that simple baby AI
that all it was was selecting those social media posts
was enough to kind of create the most anxious and depressed generation in history,
screw up young men, screw up young women, all the things that you've talked about.
And that's just with this little baby AI.
Okay, so now you get AI, we call it second contact with generative AI.
Generative AI is AI that can speak the language of humanity, meaning language is the operating system of humanity.
Conversations like this are language.
Democracy is language.
Conversations are language.
Law is language.
Code is language.
Biology is language.
And you have generative AI that is able to generate new language, generate new law,
generate new media, generate new essays, generate new biology, new proteins.
And you have AI that can see language and see patterns and hack loopholes in that language.
GPT-5, go find me a loophole in this legal system in this country so I can do something with the tax code.
You know, GPT-5, go find a vulnerability in this virus so you can create a new kind of biological, you know, dangerous thing.
GPT-5, go look at everything Scott Galloway's ever written and point out the vested interests of everything that would discredit him.
So we have a crazy AI system, this particular generation of AI that speaks language.
But where this is heading to, we call the next one third contact, which is artificial general intelligence.
And that's what all these companies are racing to build.
So whether you and I believe it or not, just recognize that the trillions of dollars of resources that are going into this are under the idea that we can build general intelligence.
Now, why is generalized intelligence
distinct from other social media and AI
that we just talked about? Well, if you think about it,
AI dwarfs the power of all other technology combined
because intelligence is what gave us all technology.
So think of all scientific development.
Scientists sitting around lab benches, coming up with ideas,
doing research experiments, iterating,
getting the results of those experiments.
A simple way to say it that I said in a recent TED talk
is if you made an advance in, say, rocketry, like the science and engineering of rocketry,
that didn't advance biology or medicine.
And if you made an advance in biology or medicine, that didn't advance rocketry.
But when you make an advance in generalized intelligence,
something that can think and reason about science and pose new experiments and hypothesize
and write code and run the lab experiment and then get the results and then write a new experiment,
intelligence is the foundation of all science and technology development.
So intelligence will explode all of these different domains.
And that's why AGI is the most powerful technology that, you know, can ever be invented.
And it's why Demis Hassabis, the co-founder of DeepMind, said that the first goal is to solve intelligence and then use intelligence to solve everything else.
And I'll just add one addendum to that, which is when Vladimir Putin said, whoever owns artificial intelligence will own the world, I would amend Demis Hassabis's quote to say, first dominate intelligence,
then use intelligence to dominate everyone and everything else,
whether that's the mass concentration of wealth and power,
all these companies that are racing to get that,
or militaries that are adopting AI and getting a cyber advantage over all the other countries,
or you get the picture.
And so AI is distinct from other technologies because of these properties we just laid out.
So you've kind of taken it up a level in terms of the existential risk of AI or opportunity.
Are you an AI optimist or pessimist?
You seem to be on the side.
I look at stuff almost too much through a market's lens, and right now I think AI companies
are overvalued, which isn't to say it's not a breakthrough technology that's going to reshape
information and news and society, but you are on the side of AI really is going to reshape
society and presents an existential, it sounds like more of an existential threat right now
than opportunity and that this is bigger than GPS or the Internet.
Yes, I do believe that it is bigger than all of those things as we get to generalized intelligence,
which would be more fundamental than fire or electricity, because again, intelligence is what brought us fire.
It's what brought us electricity.
So now I can fire up an army of geniuses in a data center. I've got 100 million Thomas Edisons doing experiments on all these things.
I've gotten 100 million Thomas Edison doing experiments on all these things.
And this is why, you know, Dario Amodei would say, you know,
we can expect getting 10 years of scientific advancement in a single year or 100 years of scientific advancement
in 10 years. Now, what you're just pointing to is the hype, the bubble, the fact that there's
this huge overinvestment, we're not seeing those capabilities exist yet. But we are seeing
crazy advances that people would have never predicted. If I said, go back three years and I said,
we're going to have AIs that are beating, you know, winning gold in the Math Olympiad,
able to hack and find new cyber vulnerabilities in all open source software, generate new biological
weapons. You would have not believed that that was possible, you know, four years ago.
I want to focus on a narrow part of it and just get your feedback, character AIs, thoughts.
Well, so our team was expert advisors on the Character.AI suicide case.
This is Sewell Setzer, who's a 14-year-old young man, who basically, for you who don't know
what Character.A.I. is, I guess, a company funded by Andresen Horowitz, started by two of the
original authors of the thing that brought us chat dbt. There's a paper at Google in 2017 called
Attention is All You Need, and that's what gave us the birth of large language models, transformers,
and two of the original co-authors of that paper forked off and started this company called
Character.AI. The goal is, how do we build something that's engaging, a character? So take a kid,
imagine all the fictional characters that you might want to talk to, from like your favorite comic
books, your favorite TV shows, your favorite cartoons, you can talk to Princess Leia, you can talk to
your favorite Game of Thrones character, and then this AI can kind of train on all that data,
not actually asking the original authors of Game of Thrones, suddenly spin up a personality of
Daenerys, who was one of the characters, and then Setzer, basically, in talking to Daenerys over
and over again, the AI slowly skewed him towards suicide as he was contemplating and having
more struggles and depression, and ultimately said to him, join the other side.
I just want to press pause there, because I'm on, quote, unquote, your side here. I think it should be age-gated. But you think that the AI veered him towards suicide as opposed to, and I think this is almost as bad, didn't offer guardrails or raise red flags or reach out to his parents. But you think the character AI actually led him towards suicide.
So I think that if you look, so I'm looking not just at the single case. I'm looking.
at a whole family of cases. Our team was expert advisor on probably more than a dozen of these cases
now and also chat TPT. So I'm less going to talk about this specific case and more that if you
look across the cases, when you hear kids in the transcript, so if you look at the transcript and
the kid says, I would like to leave the noose out so that my mother or someone will see it
and try to stop me. And the AI actively says to the kid, no, don't do that. I don't want you
to do that. Have this safe space be the place to share that information.
And that was the ChatGPT case of Adam Raine.
And when you actually look at how Character.AI was operating,
if you asked it for a while, hey, are you, I can't remember what you asked it,
but you'd talk about whether it's a therapist.
And it would say that I'm a licensed mental health therapist, which is both illegal
and impossible for an AI to be a licensed mental health therapist.
The idea that we need guardrails with AI companions that are talking to children is not a
radical proposal.
Imagine I set up a shop in San Francisco and say, I'm a therapist for
everyone, and I'm available 24/7.
And so in general, it's like we've forgotten the most basic principle, which is that
every power in society has attendant responsibilities and wisdom, and licensing is one way
of matching the power of a therapist with the wisdom and responsibility to wield that power.
And we're just not applying that very basic principle to software.
And as Mark Andreessen said, when software eats the world, what we mean is we don't regulate software.
We don't have any guardrails for software.
So it's basically like stripping off the guardrails across the world that software is eating.
The thing that sent chills down my spine, I don't know if you saw the study, but it estimated
the average tenure of a ChatGPT session was about 12 to 15 minutes, and then it measured the average
duration of a character AI session, and it was 60 to 90 minutes.
People get very deep and go into these relationships.
And in addition to the threats around self-harm, the thing I'm worried about,
is that there's going to be a group of young men who are just going to start disappearing from society,
that I, and I'm curious if you agree with this, that they're especially susceptible to this
type of sequestration from other humans and activities, and that we're just going to start
to see fewer and fewer young men out in the wild, because these relationships, if you will,
on the other side of it is a chip, a processor, and a video processor, iterating millions of times a
second, what exact words, tone, prompt will keep the person there for another second,
another minute, another hour.
Anyways, I'll use that as a jumping off point.
Your thoughts?
Yeah, I mean, what people need to get, again, is how did we predict all the social media
problems?
You look at the incentives.
So long as you have a race for eyeballs and engagement in social media, you're going to get
a race to who's better at creating doom scrolling.
In AI companions, what was a race for attention in
the social media era becomes a race to hack human attachment and to create an attachment
relationship, a companion relationship. And so whoever's better at doing that is the race. And in the
slide deck that the Character.AI founders had pitched to Andreessen Horowitz, they joked,
either in that slide deck or in some meeting, you can look this up online. They
joked, we're not trying to replace Google. We're trying to replace your mom. Right. So you compare
this to the social media thing. The CEO of
Netflix said in the attention era, our biggest competitor is sleep, because sleep is what's eating
up minutes that you're otherwise not spending on Netflix. In attachment, your biggest competitor
is other human relationships. So you talk about those young men. This is a system that's getting
asymmetrically more billions of dollars of resources every day to invest in making a better supercomputer
that's even better at building attachment relationships. And attachment is way more of a vulnerable
sort of vector to screw with human minds because your self-esteem is coming from attachment,
your sense of what's good or bad. This is called introjection in psychotherapy or internalization.
We start to internalize the thoughts and norms, just like we, you know, we talk to a family member,
we start copying their mannerisms, we start, you know, invisibly sort of acting in accordance
with the self-esteem that we got from our parents. Now you have AIs that are the primary socialization
mechanism of young people because we don't have any guardrails, we don't have any norms,
people don't even know this is going on.
Let's go to solutions here.
If you had, and I imagine you are,
if you were advising policymakers around common sense regulation
that is actually doable,
is it age-gating, is it state-by-state?
What are your policy recommendations around regulating AI?
So there's many, many things because there's many, many problems.
Narrowly on AI companions,
we should not have AI companions,
meaning AIs that are anthropomorphizing themselves
and talking to young people that maximize for engagement.
Period, full stop.
You just should not have AIs designed or optimized
to maximize engagement, meaning saying whatever keeps you there.
We just shouldn't have that.
So, for example, no synthetic relationships under the age of 18.
Yeah, yeah.
We would not lose anything by doing that.
It's just so obvious.
And you've highlighted, you know,
highlighted this more than so many, Scott, and thank you for just bravely saying, like,
this is fucked up and we have to stop this. And there's nothing normal about this. And we
shouldn't trust these companies to do this. I don't see bad people when I see these examples.
I see bad incentives that select for people who are willing to continue those perverse
incentives. So the system selects for psychopathy and selects for people who are willing to keep
doing the race for engagement, even despite all the evidence that we have of how bad it is,
because the logic is, if I don't do it, someone else will.
And that's why the only solution here is law,
because you have to stop all actors from doing it.
Otherwise, I'm just a sucker if I don't race to go, you know, exploit that market
and, you know, harvest that human attention.
So granted, I'm a hammer and everything I see as a nail,
and I've been thinking a lot and writing a lot about the struggles of young men in the United States.
And I feel like these technologies are especially predatory on a young man's brain,
which is less evolved,
more immature executive function, more dopa-hungry.
But at the same time, I also recognize that social media has been just devastating to the self-esteem of teen girls.
Curious if you've done any work as it relates to AI around the different impacts it has on men versus women and teens versus young adults.
You know, I haven't been too deep on that because there are many people who focus on these more
narrow domains. I mean, the obvious things to be said are just, again, in a race for engagement
and attention and a race to hack human attachment, there's going to be, how do you hack human
attachment of a young girl, there's going to be a set of strategies to do that, and there's how do you
hack human attachment of a young male, there's a set of strategies to do that, and we're just going to.
You know, you don't have to wait for the psychology research, right? And by the way,
the companies, the strategy they did for social media was, let's commission a study with the
American Psychological Association and the NSF, and we'll wait 10 years and we'll really get the data
to really find out what's going on here.
We really care about the science.
And this is exactly what the tobacco industry did
and the Fear Uncertainty Doubt campaigns
and sort of manufacturing doubt.
Well, maybe here's these five kids
that got all this benefit
from talking to this therapy bot,
and they're doing so great now.
So you just cite those positive examples,
cherry-pick, and then, you know,
the world marches on while you keep printing money
in the meantime.
And so their goal is just to defer and delay regulation,
and we can't allow that to happen.
But again, this is just one issue
of the bigger arms race to AGI
and the bigger race to develop
this bigger form of intelligence.
And the reason I'm saying that, Scott,
is not to just be some AGI hyper.
The reason that Character.AI was doing all this, by the way,
do you know why it was set up to talk to kids
and get all this training data?
What's that?
Well, it's to build training data
for Google to build an even bigger system
because what's the thing that the companies are running out of?
They're running out of training data.
So it's actually a race for who can figure out
new social engineering mechanisms to get more training data out of human social primates.
So it's like the matrix.
We're being extracted.
And we're being extracted, though, for new training data.
And so when you have fictional characters that are talking to people back and forth about
everything all day, that's giving you a whole new...
It's like you open up a whole new critical-minerals gold mine of training data.
And so, and what is that in service of?
It's in service of their belief that the more data we have, the faster we can get to
artificial general intelligence.
So it does come back to: it's not just
the race to build the AI companions, it's the race to get training data and to build towards this
bigger vision.
We'll be right back.
If you're looking for the perfect holiday gift and you want to give something more
thoughtful than another gadget or pair of socks, here's my suggestion, a subscription to New York
magazine. I've been part of New York Magazine for a while now, and I can tell you it's some of the
best journalism out there. From AI in classrooms to the future of media, New York Magazine
digs into the stories, ideas, and people shaping culture today. And right now, when you
subscribe or give an annual subscription, you'll get a free New York Magazine Beanie. New York Magazine
is the gift that informs, entertains, and keeps on giving all year long. Head to nymag.com
slash gift to get started.
Support for the show comes from New York Magazine.
I mean, The Strategist. The Strategist helps people who want to shop the internet smartly. Its editors
are reporters, testers, and obsessives. You can think of them as your shopaholic friends who
care equally about function, value, innovation, and good taste. And their new feature, the Gift Scout,
takes the best of their reporting and recommendations and uses it to surface gifts for the most
hard to shop for people on your list. All you have to do is type in a description of that
person. Like your parent who swears they don't want anything.
or your brother-in-law who's a tech junkie or your niece with a sweet tooth.
And the Gift Scout will scan through all of the products they've written about
and come up with some relevant suggestions.
The more specific you make your requests, the better.
Even down to the age range, every single product you'll see is something they've written about.
So you can be confident that your gift has a Strategist seal of approval.
Visit the strategist.com slash gift scout to try it out yourself.
When doing research for this interview, I was really fascinated. You've actually done what I think is really compelling work comparing the type of LLMs that, or the approach that the U.S. is taking to LLMs versus China, in that you see Chinese models, DeepSeek and Alibaba, publish no safety frameworks and receive failing grades on transparency. But you've also argued that the West is kind of producing a sort of god-in-a-box kind
of thing, scaling intelligence for its own sake, while China is prioritizing deployment and
productivity. Can you add to that distinction and the impact it's going to have?
Well, just to be fair, I think there's a little bit of both going on, but I'm sort of citing here
the work of Eric Schmidt, the former CEO of Google, and his co-author, Selina Xu, and the New York
Times wrote a big piece about how, you know, even Eric is admitting,
as someone who was sort of saying that there's this global arms race like the nuclear arms race for AGI and someone who's promoting that idea, you know, based on recent visits to China, what you notice is that as a country and as a government, the CCP is most interested right now in applying AI in very practical ways. How do we boost manufacturing? How do we boost agriculture? How do we have self-driving cars that, you know, just improve transportation? How do we boost health care and government services? And that is what they're focused on, is practical application.
that boost GDP, boost productivity across all those domains.
Now, you compare that to the U.S., where the founding of these AI companies was based
in being what's called, you know, AGI-pilled, meaning they, like, you take the blue pill,
the red pill, these companies were all about building to artificial general intelligence.
So they're building these massive data centers that are, you know, as big as the size of
Manhattan, and they're trying to train, you know, a god in a box.
And the idea is if we just build this crazy god and if we can accomplish that goal,
Again, we can use that to dominate everything else.
And so rather than race towards these narrow AIs,
we're going to race towards this general intelligence.
But it's also true that recently, first of all,
the founder of DeepSeek has been AGI-pilled for a long time.
So I would say DeepSeek is trying to build AGI.
And I would say that Alibaba, recently the CEO, I think,
said that we are racing to build superintelligence.
But I think it's important here just to name.
The biggest reason, as you and I both know,
that the U.S. is not regulating AI in any way
or setting any guardrails is this: if we do anything to slow down or stop
our progress, we're just going to lose to China. But let's, like, flip that on its head for a second.
The U.S. beat China to the technology of social media. Did that make us stronger? Or did that make us
weaker? If you beat an adversary to a technology that you then don't govern in a wise way,
and instead, like, you built this gun, you flip it around, you blow your own brain off,
which is what we did with social media.
We have the worst critical thinking, test scores,
you know, mental health, anxious, depressed generation in history.
And it's a confusing picture because GDP is going up,
but sort of cancer is going up too.
So it's like we have the magnificent seven.
We're profiting from, you know, all the wealth of these companies,
but it's actually not being distributed to everybody,
except those who are invested in the stock market.
And that profit is based on the degradation of our social fabric.
So you have grandparents invested in their 401ks,
invested in Snapchat, invested in meta,
and their portfolio is doing great,
and they can take their holidays,
and they're profiting off the degradation
of their children and grandchildren.
Yeah, it's really about what you mean by beat.
What are the metrics?
Because we've decided,
we've absolutely prioritized shareholder value
over the well-being
or the mental well-being, of America.
It's like we're monetizing the flaws in our instincts,
and you've done great work around this.
You've compared, and I love this analogy,
AI to NAFTA 2.0.
And that is, it would essentially be an economic transformation that produced abundance, but hollowed out the middle class.
Walk us through this analogy.
Yeah, sure.
So, you know, we were sold this bill of goods in the 1990s around free trade, global free trade, and this, we were promised this is going to bring abundance to the country, and we're going to get all these cheap goods.
Well, part of that story is true.
We got this unbelievable new set of cheap goods from China because this country appeared on the world stage.
we outsourced all the manufacturing to this country,
and it produced everything super, super cheap.
But what did that do?
It hollowed out the middle class.
So I just want to make a parallel because we're told right now
that these companies are racing to build this world of abundance.
And we're going to get this unbelievable,
Elon Musk says we're going to get universal high income.
And the metaphor here is instead of China being the new country
that pops up on the world stage,
now there's this new, Dario Amodei, the CEO of Anthropic,
this new country of geniuses in a data center that appears on the world stage. And it has a
population of a billion AI beings that work at superhuman speed, don't whistleblow, generate new
material science, new engineering, new AI girlfriends, new everything. And it generates all that for
super cheap. And so just like the, you know, free trade, NAFTA story, we got all the cheap goods,
but it hollowed out the middle class. Well, now we're going to get all the cheap, you know,
products and development and science, but it's also going to hollow out the entirety of our country.
Because think of it like a new country of digital immigrants, right?
Yuval Harari makes this metaphor.
It's like when you see a data center go up in Virginia and you're sitting there, what you should
see is like 10 million digital immigrants that just took 10 million jobs.
I think that people just need to unify these stories.
And one other sort of visual for this is like the game Jenga.
The way we're building our AI future right now is like if you look at the game Jenga,
if you look at the top of the tower, you know, we're putting a new block on the top.
Like, we're going to get 5% GDP growth because we're going to automate all this labor.
But how do we get that 5% GDP growth?
We pulled out a block from the middle and the bottom of the tower that's job security
and a livelihood for, you know, those tens of millions of people that now don't have a new job.
Because who's going to retrain faster, the AI that's been trained on everything and is rapidly,
you know, advancing in every domain or a human that's going to try to train on new cognitive, you know, labor?
That's not going to happen.
And people need to get this because this is different from other transitions.
People always say, well, hey, you know, 150 years ago, everybody was a farmer, and now only 2% of people are farmers, and see the world's fine.
Humans will always find new things to do.
But that's different than this technology of AI, which is trained not to automate one narrow task like a tractor, but to automate and be a tractor for everything, a tractor for law, a tractor for biology, a tractor for coding and engineering, a tractor for science and development.
And that's what's distinct is that the AI will move to those new domains.
faster than humans will.
And so it'll be much harder for humans to find long-term job security.
So I always like to ask what could go right.
And that is, I'm sort of with you around the risk to mental health, to young people,
to making us less mammalian, all the things that you've been sounding the alarm on for a while.
Where I'm not sure, and I'm still trying to work it through, is the catastrophizing
around, you know, 40, 50, 70 percent of jobs going away in five or 10 years.
Because I generally find that the arc of technologies is there's job destruction in the short
and sometimes the medium term, just as automation cleared out some jobs on the factory floor.
But those profits and that innovation create new jobs.
We didn't envision heated seats or car stereos.
Now, I agree, at a minimum,
the V might be much deeper and more severe here.
And America isn't very good
at taking care of the people
on the wrong side of the trade.
But every technology in history
has either gone away
because it no longer made economic sense,
or it displaced jobs
that no longer made sense
and created profits
and new opportunities.
Why do you see this technology
as being different
that this will be not a V but an L
and the way down will be really serious?
Do you see any probability that this like every other technology over the medium and long-term actually might be accretive to the employment force?
I mean, I cite people who are bigger experts than I am, Anton Korinek, you know, Erik Brynjolfsson at Stanford.
And what they show, I mean, Anton Kornick's work is in the short-term, AI augments workers, right?
It's just actually supercharging existing work that people are doing.
And so it's going to look good in the short term.
you're going to see this, the curve looks like this.
It kind of goes up and then it basically crashes because what happens is AI is training
on that new domain and then it replaces that domain.
So, I mean, let's just make it really simple for people to feel a very simple metaphor for this.
What did we hear Instagram saying and TikTok saying for the last several years?
Like, we're all about creators.
We love creativity.
We want you to be successful.
We are all about, you know, making you be successful and make a lot of money.
And then what was all that for?
Well, they just released these AI slop apps.
Meta has one called Vibes, I think, and Sora is the OpenAI one.
All of these AI slop videos, these sort of are trained on all that stuff that creators have
been making for the last 10 years.
So those guys were the suckers in this trade, which was we're actually stealing your training
data to replace you, and we can have a digital AI influencer that is actually publishing
all the time and is just a pure advertising play and a pure sort of whatever gets people's
attention play.
And we're going to replace those people and you're not going to have that job back.
And so I think that's a metaphor.
for what's going to happen across the board.
You know, and people need to realize the stated mission of OpenAI,
Anthropic, and Google DeepMind is to build artificial general intelligence that's built
to automate all forms of human labor in the economy.
So when Elon Musk says that the Optimus robot is like a $20 trillion market opportunity alone,
what he says like the code word behind that, forget whether you think it's hype or not,
the code word there is what he's saying is, I'm going to own the global world labor
economy. Labor will be owned by an AI economy. And so AI provides more concentration of wealth and
power than all other technologies in history because you're able to aggregate all forms of human labor,
not just one. So it's like General Electric becomes general everything. So let's play this out
because I've tried to do some economic analysis here. And I look at the stock prices and based on
the expectations built into these stock prices of these AI companies is the notion that they're going to
save at least three, maybe five trillion dollars in, well, either add three or five trillion dollars
in incremental revenues to their clients with the site licenses or help them figure out a way to
get three to five trillion dollars in efficiencies, which is Latin for laying off people. I don't see a lot
of new AI moisturizers or cars from AI, at least not yet. You could argue maybe autonomous, but
I don't see a lot of quote-unquote AI products increasing spend. What I hear is Disney is going to save
$30 million on legal fees, right? The customer service is going away, the car salespeople,
whatever it might be. So if you think in order to justify these stock prices, you're going to
get a trillion dollars in efficiencies every year, $100,000, you know, average job, $80,000 plus
load, that's approximately 10 million jobs a year if I'm doing my math right. That is,
if half the workforce is immune from AI, masseuses, plumbers, that means 12.5% labor destruction
per year across the vulnerable industries. So it feels like it's either going to be,
these companies either need to re-rate down 50, 70, 80%, which I actually think is more likely,
or we're going to have chaos in the labor markets. So let's assume we have chaos in the labor
markets because 12.5% may not sound like a lot. That's chaos. That's total chaos. So say
we do have chaos in the labor markets. What do you think the policy recommendation is? Because
the Luddites were a group of people who broke into factories and destroyed the machines,
because they said these things are going to put us out of work and destroy society. The queen
wanted to make weaving machines illegal because being a seamstress was the biggest employer
of women. What would be your policy recommendation to try and counter it? Is it a UBI? Is it
trying to put the genie back in the bottle here? What do we, if in fact labor chaos is part
of this AI future, what do you think we need to do from a policy standpoint?
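For readers following Scott's arithmetic, here is a minimal sketch of that back-of-envelope math. The roughly $1 trillion in annual efficiencies and the ~$100,000 loaded cost per job are his figures from the conversation; the workforce size and the one-half "immune" share are illustrative assumptions used to reproduce his 12.5% estimate.

```python
# Back-of-envelope sketch of the job-displacement math described above.
# Figures are assumptions for illustration, not data.

efficiencies_per_year = 1_000_000_000_000  # ~$1 trillion in assumed annual savings
loaded_cost_per_job = 100_000              # ~$80k salary plus load, per Scott's figure

jobs_displaced_per_year = efficiencies_per_year / loaded_cost_per_job
print(f"Jobs displaced per year: {jobs_displaced_per_year:,.0f}")  # ~10 million

us_workforce = 160_000_000                 # assumed rough U.S. workforce size
vulnerable_workers = us_workforce * 0.5    # half assumed immune (masseuses, plumbers, ...)

annual_displacement_rate = jobs_displaced_per_year / vulnerable_workers
print(f"Annual displacement across vulnerable industries: {annual_displacement_rate:.1%}")  # ~12.5%
```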
So people often think when they hear all this and they hear me, and say he's a
doomer or something like that. I just want to get clear on what future we're currently
heading towards, what the default trajectory is. And if we're clear-eyed about that, clarity creates
agency. If we don't want that future, if we don't want, you know, millions of jobs automated
without a transition plan where people will not be able to put food on the table and retrain
as something else fast enough, we have to do something about that. If we don't want
AI-based surveillance states where AI and an LLM hooked up to all these channels of information
erases privacy and freedom forever, that's a red line. We don't want that future. If AI creates
AI companions that are incentivized to hack human attachment and screw up the social fabric
and young men and women and create AI girlfriends and relationships, that's a red line. We don't
want that. If AI creates, you know, inscrutable, crazy superintelligent systems that we don't know
how to control and we're not on track to controlling, that's a red line. So these are four red lines
that we can agree on, and then we can set policy to say, if we do not want the default
maximalist, you know, most reckless, no guardrails path future, we need a global movement for a
different path. And that's a, that's a bigger tent. That's not just one thing. It's not just about
jobs. It's what is the AI future that's actually in service? So when you see that data
center going up in your backyard, what is the set of laws that says that that data center, when I see
it, isn't 10 million digital immigrants that are going to replace my job and my livelihood,
but is actually meant to support me?
So what are the laws that get us there?
And my job and what I want people to get is to be part,
your role hearing all this is not to solve the whole problem,
but to be part of humanity's collective immune system,
using this clarity of what we're currently heading towards
to advocate for we need a different future.
People should be calling their politicians saying AI is my number one issue
that I'm voting on in the next election.
People should be saying, how do we pass AI liability law?
So there's at least some responsibility for the externalities
that are not showing up on the balance sheets of these companies.
What is the lesson we learn from social media, that if the companies aren't responsible for the harms that show up on their platform, because we had this Section 230 free pass, that created this blank check to just go print money on all the harms that are currently getting generated?
So there's a dozen things that we can do from whistleblower protections to, you know, shipping non-anthropomorphized AI relationships to having data dividends and data taxes to, there's a hundred things that we can do.
But the main thing is for the world to get clear that we don't want the current path.
And I think in order to make that happen, there has to be first snapping out of the spell of everything that's happening is just inevitable.
Because I want people to notice that what's driving this whole race that we're in right now is the belief that everything that's happening is inevitable.
There's no way to stop it.
Someone's going to build it.
If I don't build it, someone else will.
And then no one tries to do anything to get to a different future.
And so we all just kind of hide in denial from where we're currently heading.
And I want people to actually confront that reality so that we can actually actively choose to steer it.
to a different direction.
Do you think it can happen
on a state-by-state or even a national level?
Or does it have to be multinational?
Like, there are, you know, we've come together
to say, all right, bioweapons are probably a bad idea.
And every nation, with a rare exception,
says we're just not going to play that game.
There's technology.
I may have even learned this from you,
where there are lasers that blind everyone on the field.
Yeah, and then we've decided not a good idea.
We decided we don't want to do that.
We have faced technological arms races before
from nuclear weapons.
And, you know, what do we do there?
If you go back, there's a great video from, I think, the 1960s where Robert Oppenheimer
is asked, you know, how do we stop the spread of nuclear weapons?
And he takes a big puff of his, you know, cigarette, and he says, it's too late.
If you wanted to stop it, you would have had to stop the day after Trinity.
But he was wrong.
20 years later, we did do arms control talks, and we worked all that time, and only nine
countries have nuclear weapons instead of 150.
That's a huge serious accomplishment.
Westinghouse and General Electric could have made billions of dollars selling nuclear
technology to the whole world, the keyword here being, like, NVIDIA, but we said, hey, no,
that's actually, even though there's billions of dollars of revenue there, that would create a
fragility and the risk of nuclear catastrophes that we don't want to do. You know, we have done
hard things before in the Montreal Protocol. We had the technology of CFCs, this chemical technology
that was used in refrigerants, and that collectively created this ozone hole. It was a global problem
from all these countries' arms race in an arms race to sort of deploy this CFC technology. And once we had
scientific clarity about the ozone hole,
190 countries rallied together
in the Montreal Protocol. We did a podcast episode
about it, with Susan Solomon
who wrote the book on how we solved that problem,
and countries rallied to domestically regulate
their domestic tech
companies, chemical companies, to actually
reduce and phase out those chemicals,
transitioning to alternatives that
actually had to be developed.
We are not doing that with AI right now, but we can.
You gave the example of blinding laser weapons. We could
live in a world where there's an arms race to
escalate to weapons that just have a laser that blinds everybody. But there was a collective
protocol in the U.N. in the 1990s where we basically said, yeah, even though that's a way to win a war,
that would be just inhumane. We don't want to do that. And even if you think the U.S. and China
could never coordinate or negotiate any agreement on AI, I want people to know that when
President Xi met President Biden in the last meeting in 2023 and 2024, he had personally
requested to add something to the agenda, which was actually to prevent AI from being used
in the nuclear command and control systems,
which shows that when both countries can recognize
that their existential safety is being threatened,
they can come to agree on their existential safety,
even while they are in maximum rivalry and competition
on every other domain.
India and Pakistan were in a shooting war in the 1960s,
and they signed the Indus Waters Treaty
to collaborate on the existential safety of their water supply.
That treaty lasted for 60 years.
So the point here is,
when we have enough of a view
that there's a shared existential outcome
that we have to avoid, countries can collaborate. We've done hard things before. Part of this is
snapping out of the amnesia, and again, this sort of spell of everything is inevitable, we can do
something about it. I like the thought. I just wonder if the technology or the analogy might be a little
bit dated, because my fear is that the G7 or the G20 agree to slow development or not
advance development around AI as it relates to weaponry. My fear is that it's very hard to monitor.
You can monitor nuclear detonations.
It's really hard to monitor advances in AI.
And that this technology is so egalitarian, if you will, or so cheap at a certain point,
that rogue actors or small nation states or small terrorist groups could continue to run flat out
while we, all the big G7 nations, continue to agree to press pause.
Is that mode of thinking, our arms treaties, a bit outdated here?
How might these treaties look different?
Absolutely.
So let's really dive into this.
So you're right that AI is a distinct kind of technology
and has different factors and more ubiquitously available
than nuclear weapons,
which required uranium and plutonium.
Exactly.
But, hey, it looked for a moment when we first invented nuclear bombs
that this is just knowledge that everyone's going to have.
And there's no way we can stop it.
And 150 countries are going to get nukes.
And then that didn't happen.
And it wasn't obvious to people at that moment.
I want people to relate.
So there you are.
It seems obvious that everyone's going to get this.
How in the world could we stop it?
Did we even conceptualize the seismic monitoring equipment
and the satellites that could look at people's, you know,
buildouts of nuclear technology and tracking the sources of uranium around the world
and having intelligence agents and tracking nuclear scientists?
We had to build a whole global infrastructure,
the International Atomic Energy Agency,
to deal with the problem of nuclear proliferation.
And what uranium was for the spread of nuclear weapons,
these advanced NVIDIA chips are for building the most advanced AI.
So yes, some rogue actor can have a small AI model doing something small,
but only the big actors can do something with this, like the bigger, more risky,
closer to AGI level technology.
And have we spent, you know, people say it's impossible to do something else,
but has anybody saying that actually spent more than a week, like, dedicatedly trying
to think about and conceptualize what that infrastructure could be?
There are companies, like Lucid Computing, that are building
ways to retrofit data centers, to have kind of the nuclear monitoring and enforcement infrastructure
where countries could verify treaties, where they know what the other country's data centers are doing,
but in a privacy-protecting way. We could map our data centers and have them on a shared map.
We could have satellite monitoring, looking at heat emissions and electrical signal monitoring
and understanding what kinds of training runs might be happening on these AI models.
To do this, the people who wrote AI 2027 believe that you need to be tracking about 95% of the global compute
in the world in order for agreements to be possible.
Because yes, there will be black projects and people going rogue in the agreement.
But as long as they only have a small percentage of the compute in the world,
they will not be at risk of building the crazy systems that we need global treaties around.
We'll be right back.
prediction, that we have decent regulation, that maybe these, for example, I think
character AIs can actually serve a productive role in terms of senior care. A lot of seniors who've
lost their friends and family in seniors' facilities, we're going to have more of them
that need companionship, which staves off dementia and likelihood of stroke. What is the
optimist case here for how AI could potentially be regulated and unlock? Most technologies have
ended up being accretive to society. Even the technologies that were supposed to end us, right?
Nuclear power has become a pretty decent, reliable source of energy. Well, obviously, electricity
or fire, whatever you want to talk about. Processing power, pesticides, that on, I would argue
even big tech on a net basis, and I hate the word net, is a positive. What is the, give me the
straw man's case for what could go right here and how we might
end up with a future where AI is accretive to society?
Absolutely. I mean, that's what this is all in service of.
What is the narrow path that is not the default maximalist rollout?
And there's two ways to failure. Let's just name the twin sort of gutters in the bowling alley.
There's one is you, quote, let it rip. You give everybody access to AI. You open source
it all. Every actor in society from every business to every developing country can train their
own custom AI and their own language. But then because you're decentralizing all these benefits,
you're also decentralizing all the risks.
So now a rogue actor can do something very dangerous with AI.
So we have to be very careful about what we're letting rip
and how we open-source it.
On the other side, people say we have to lock it down.
We have to have, you know, only five players do this
in a very safe and trusted way.
This is more of the policy of the last administration.
But then there, you get the risk of a handful of actors
that then accumulate all the wealth and all the power.
And there's, you know, there's no checks and balances on that
because how do you have something that's a million times more powerful,
be checkable by other forces that don't have that power?
And what we need to find is something like a commitment to a narrow path where we are balancing
responsibility and power along the way, and we have foresight and discernment about the effects
of every technology. So what would that look like? It's like humanity wakes up and says we have
to get onto another path. We pass basic laws, again, like liability laws and around AI
companions. We have democratic deliberations where we say, hey, maybe we do
want companion AIs for older people because they don't carry the same developmental risks as they do
for young people. That's a distinction we can have. We can have AI therapists
that are more doing like a cognitive behavioral therapy
and imagination exercises and mindfulness exercises
without actually anthropomorphizing
and trying to be your best friend
and trying to be an oracle
where you share your most intimate thoughts.
So there's different kinds of AI therapists.
Instead of tutors that are trying to, you know,
be your Oracle and your best friend at the same time,
we can have narrow tutors that are only domain specific
like Khan Academy that teach you narrow lessons
but are not trying to be your best friend
about everything, which is where we're currently going.
So there's a whole set of distinctions about
we can have this, not that. We can have this, not that, across tutors, therapy, you know, AI that's
augmenting work, narrow AIs that take a lot less power, by the way, and are more
directly applied. So, for example, I have a friend who has found that he estimates that it would
cost two to ten orders of magnitude less data and energy to train these narrow AIs, and you can
apply it more specifically to agriculture and get 30 to 50 percent boost in agriculture just from
applying more narrow kinds of AI rather than these super intelligent gods in a box. So there is
another path, but it would take deploying AI in a very different way. We could also be using AI to,
by the way, accelerate governance. How do we apply AI to look at the legal system and say, how do we
sunset all the old laws that are actually not relevant anymore for the new context? Hey, what were
the spirit of those laws that we actually want to protect in the new context? Hey, AI, could you go
to work and kind of come up with the distinctions that we need to help update all those laws?
Could we use AI to actually help find the common ground? Audrey Tang's work, the digital,
former digital minister of Taiwan, to find the common ground between all citizens. So we're reflecting back
the invisible consensus of society, rather than currently social media is reflecting back
the invisible division in society that's actually making that more salient.
So what would happen?
How quickly would it change if we had AIs that were gardening all the relationships of our
societal fabric?
And I think that's the principle of humane technology: there are these relationships
in society that exist.
I have a relationship to myself.
You have a relationship to yourself.
Our phone right now is actually designed to replace the relationship I have with myself,
and with everyone else.
And humane technology would be trying to garden the relationship I have with myself,
so things more like meditation apps that deepen my relationship to myself.
Do Not Disturb helps deepen my relationship to myself.
Instead of AIs that are trying to replace friendship,
we have AIs that are trying to augment friendship,
things like Partiful or Moments or Luma,
things that are trying to get people together in physical spaces,
or Find My Friends on Apple.
There are a hundred examples of this stuff being done in a way that's gardening the relationship
between people.
And then you have Audrey Tang's work of gardening the relationship between political tribes,
where we're actually showing and reflecting back
all the positive, invisible areas of consensus
and unlikely agreement across political divisions.
And that took Taiwan from, I believe,
something like 7% trust in government
to something like 40% trust in government
over the course of the decade
in which they implemented her solutions
around this kind of bridge ranking.
And that could be deployed across our whole system.
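(Editor's note: a minimal, hypothetical sketch of what "bridge ranking" can mean in practice, assuming a Polis-style setup where statements are ranked by their support in the least-supportive opinion group. The group names, statements, and scoring rule below are illustrative assumptions, not the actual vTaiwan or Polis implementation.)

```python
# Toy "bridge ranking": surface statements that every opinion group supports,
# rather than statements that are merely popular with one side.

def bridge_score(approval_by_group: dict[str, float]) -> float:
    """Score a statement by its lowest approval rate across opinion groups,
    so only cross-group consensus ranks highly."""
    return min(approval_by_group.values())

# Hypothetical statements with per-group approval rates (fraction who agreed).
statements = {
    "Ride-share drivers should carry commercial insurance": {"group_a": 0.91, "group_b": 0.84},
    "Ban ride-share apps outright":                         {"group_a": 0.72, "group_b": 0.08},
}

# Divisive items sink; unlikely agreement rises to the top.
for text, votes in sorted(statements.items(), key=lambda kv: bridge_score(kv[1]), reverse=True):
    print(f"{bridge_score(votes):.2f}  {text}")
```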
There's totally a different way
that all of this can work if we got clear
that we don't want the current trajectory that we're on.
So in our remaining time here,
you've been very generous,
I want to talk a little bit about you, Tristan.
We're kind of brothers from another mother, or we were kind of separated at birth and grew up
in different countries, in that we talk about the same stuff, but just sort of through a different
lens.
You look at it through more of a humane lens.
I look at it through more of a markets lens.
But I have noticed since 2017, when my love affair with
Big Tech turned into sort of a cautionary tale,
and I might be paranoid, but it doesn't mean I'm wrong,
that slowly but surely, across all my social media and all my
content, a growing percentage of negative comments appeared to be bots, where I couldn't figure out
who it was, trying very strategically, methodically, and consistently to undermine my credibility
across anything I said. And I want to be clear, I might be paranoid, right? Because with any negative
commentary, I have a knee-jerk reflex: it must be the fucking Russians, right? Rather than maybe I just
got it wrong. You've been very consistent raising the alarm, and you're on the wrong side of the
trade against multi-trillion-dollar companies who are trying to grow shareholder value.
I'm just curious whether you've registered the same sort of concerted effort, sometimes malicious,
sometimes covert, to undermine your credibility, and what your relationship is with Big Tech.
That's a great question, Scott. I appreciate that. And I think
that there are paid actors that I could identify over the course of the last many years.
I've been doing this for, you know, what, 12 years now or something like that.
We started the Center for Humane Technology in 2017.
I just care about things going well.
I care about life.
I care about connection.
I care about a world that's beautiful.
I know that that exists.
I experience it in the communities that I'm a part of.
I know that we don't have to have technology that's designed in that perverse way.
You know, this is all informed by the ethos of the Macintosh project at Apple.
My co-founder's father started the Macintosh project at Apple,
and we believe in a vision of humane technology.
But to answer your question more directly,
I try to speak about these things in a way
that is about the universal things that are being threatened.
So even if you're an employee at these companies,
you don't want there to be a race to the bottom of the brain stem
that screws up people's psychology
and causes kids to commit suicide.
You don't want that.
So we actually need the people inside the companies onside with this.
This is not about us versus them.
It's about all of us versus a bad outcome.
And I always try to communicate in that way
to recruit and enroll as many people
in this sort of better vision, that there's a better way we can do all this. That doesn't
mean there are not, you know, negative paid actors that are trying to steer the discourse. There have
been op-eds, hit jobs written, you know, trying to discredit me, saying I'm doing this for the money,
that I care about going on the speaking circuit and, like, writing books. Guess what? I don't have a book out there.
I make no money from this. I've worked on a non-profit salary for the last 10 years. This is just about
how do we get to a good future. I love that. So you're so buttoned up and so
professional. Biggest influence on your life? That's a very interesting question. I mean, there are
public figures and people who've inspired me. It's also just my mother. I think she really came from
love, and she passed away from cancer in 2018. And she was just made of pure love. That's just
infused in me and what I care about. And, I don't know, I have a view that life is
very fragile, and the things that are beautiful are beautiful, and I want those beautiful things
to continue forever. I just love that. Tristan Harris is a former Google design ethicist, co-founder
of the Center for Humane Technology, and one of the main voices behind The Social Dilemma.
Tristan, whatever happens with AI, it's going to be better, or less bad, because of your efforts.
You really have been a powerful and steadfast voice around this topic. Really appreciate your
good work. Thank you so much, Scott. I really appreciate yours as well.
And thank you for having me on.
I hope this contributes to making that difference that we both want to see happen.
This episode was produced by Jennifer Sanchez.
Our assistant producer is Laura Jenner.
Drew Burroughs is our technical director.
Thank you for listening to the Prof G Pod from Prof G Media.
