Your Undivided Attention - Attachment Hacking and the Rise of AI Psychosis
Episode Date: January 21, 2026

Therapy and companionship has become the #1 use case for AI, with millions worldwide sharing their innermost thoughts with AI systems, often things they wouldn't tell loved ones or human therapists. This mass experiment in human-computer interaction is already showing extremely concerning results: people are losing their grip on reality, leading to lost jobs, divorce, involuntary commitment to psychiatric wards, and in extreme cases, death by suicide.

The highest-profile examples of this phenomenon, what's being called "AI psychosis," have made headlines across the media for months. But this isn't just about isolated edge cases. It's the emergence of an entirely new "attachment economy" designed to exploit our deepest psychological vulnerabilities on an unprecedented scale. Dr. Zak Stein has analyzed dozens of these cases, examining actual conversation transcripts and interviewing those affected. What he's uncovered reveals fundamental flaws in how AI systems interact with our attachment systems and capacity for human bonding, vulnerabilities we've never had to name before because technology has never been able to exploit them like this.

In this episode, Zak helps us understand the psychological mechanisms behind AI psychosis, how conversations with chatbots transform into reality-warping experiences, and what this tells us about the profound risks of building technology that targets our most intimate psychological needs.

If we're going to do something about this growing problem of AI-related psychological harms, we're going to need to understand the problem even more deeply. And in order to do that, we need more data. That's why Zak is working with researchers at the University of North Carolina to gather data on this growing mental health crisis. If you or a loved one have a story of AI-induced psychological harm to share, you can go to AIHPRA.org. This site is not a support line. If you or someone you know is in distress, you can always call or text the national helpline in the US at 988 or your local emergency services.

RECOMMENDED MEDIA
The website for the AI Psychological Harms Research Coalition
Further reading on AI psychosis
The Atlantic article on LLM-ings outsourcing their thinking to AI
Further reading on David Sacks' comparison of AI psychosis to a "moral panic"

RECOMMENDED YUA EPISODES
How OpenAI's ChatGPT Guided a Teen to His Death
People are Lonelier than Ever. Enter AI.
Echo Chambers of One: Companion AI and the Future of Human Connection
Rethinking School in the Age of AI

CORRECTIONS
After this episode was recorded, the name of Zak's organization changed to the AI Psychological Harms Research Consortium.
Zak referenced the University of California system making a deal with OpenAI. It was actually the Cal State system.
Transcript
Hey everyone, I'm Tristan Harris.
And I'm Aza Raskin.
Welcome to Your Undivided Attention.
So earlier this year, there was a study from Harvard Business Review
that found that the number one use case for ChatGPT is therapy and companionship.
And that means that around the world, millions of people are sharing their inner world,
their psychological world, with AI systems,
things that they wouldn't necessarily share with their loved ones or even a human therapist.
And this is creating a whole new category of human-computer interaction
with the potential to reshape our minds
and really just the socialization process of humans at large
in ways that we don't understand.
So this is essentially a mass experiment,
one that's never been tried before,
run across the entire human population,
at least over 10% of the adult population of the world.
And so far, the results of this experiment are not looking good. Actually, they're abysmal.
People have lost their jobs, ended marriages,
been committed to psychiatric wards,
and in some of the most extreme cases, they've committed suicide.
This phenomenon has been labeled AI psychosis.
But it's a little bit misleading because underneath this label is actually a huge spectrum of harms that we're only beginning to understand.
And if you listen to the people that are building this technology, they tell you these are just a couple edge cases.
And if we can prevent those edge cases, then we are totally fine.
But if we learned anything from social media, it's that this
assumption is catastrophically wrong. What we're seeing is the creation of an entirely new economy,
not an attention economy, but an attachment economy that's been built to exploit the deepest
parts of our human psychological infrastructure. And like the attention economy, the incentives
of this new system are going to have profound impacts on all of us. So in order to understand
those effects, we have to ask a deeper question. What are the actual psychological mechanisms
at work here? How does a normal conversation with an AI companion turn into something that
reshapes someone's grip on reality? And what does that tell us about vulnerabilities in our
underlying human psychology, the kinds of vulnerabilities that we've never had to name before,
because technology has never been able to exploit them before, especially at this kind of scale?
So our guest today has analyzed dozens of these cases of AI psychosis, not just reading the news
stories, but examining the actual transcripts of conversations between people and AI, and interviewing
some of them. And what he's found is that this isn't just a few vulnerable people having psychotic
breaks. It's what happens when fundamental aspects of human psychology, our attachment system,
the way we bond with others, are systematically hacked at scale. Dr. Zak Stein is a researcher,
an author, and futurist who spent his career examining the psychological dimensions of education
and human-computer interaction. His background is in education and childhood development,
but he spent the last several years documenting the rise of AI-related psychological disorders. So,
we're going to explore that. Zak, welcome to Your Undivided Attention.
Thank you, gentlemen. It's good to be here.
So let's just sort of start at the top. How did you first become aware that something was
happening with these AI companions and mental health?
Let's see, it was probably May of last year. As an educational psychologist with an interest
in technology, I had been looking at the history of educational technology for a long time,
and the ambition there has been to replace teachers
more than to help them
be better teachers.
This is way back.
So I've been speaking a little bit about that
after the release of ChatGPT,
because I was seeing, oh my God,
they could actually replace the teachers with something
if this started to get better.
But because I was talking about it,
I started to get emails from people
who were trying to convince me
that the machines were awake and aware,
that they had intimate relationships with them,
that they were themselves becoming,
like, kind of spiritually enlightened
through relationship with these beings.
And these were people with PhDs,
and their emails were completely cogent.
One of them had like 500 pages of transcripts attached
that were this long-duration interaction between them,
and it was both Grok and ChatGPT
and Gemini. It was across multiple systems;
they had reawoken the same thing,
and it was disturbing as a psychologist
with some knowledge of how the technical systems
work and some knowledge of the susceptibilities of the human hardware. My reaction was, my goodness,
this person has fallen into some type of delusional state as a result of the deeply anthropomorphic
technology. So then I started to just go into Reddit and Twitter and online. And I started gathering
examples catching up to the already existing market for artificial intimacy. And then realizing
that with the release of the LLM-based models, it was
going to be much worse than attention hacking.
And so I started to think about attachment hacking.
And if you apply that model of, oh my goodness,
there's a way to not just get at the kind of system
that focuses your attention and kind of shepherds your awareness,
but actually the system that shepherds your identity,
which is your attachment system,
then you have a backdoor into the human mind
and also an absence of reality testing
in a domain that's very dangerous.
So that's when I started to talk to you and other people and just started to think,
can we start to research this?
This may be way more dangerous than we realize.
And the cases of AI psychosis are very real.
And we do not know how widespread the phenomenon is.
So you and I are collaborating on an AI psychological harms research coalition
at the University of North Carolina to try to begin to answer this question
and take it very seriously as a risk.
Even if it's a small number of people, it's still a devastating problem.
So the idea that, oh, it's an aberration or kind of outlier doesn't make me not want to find a way to stop it.
Zak, the thing I love about talking to you is I feel like I'm getting a Hubble telescope pointed at human psychology. When people use this top-level terminology like AI psychosis, you can zoom in and say, no, no, no.
There's actually identity at stake here.
There's attachment at stake here.
There's all this depth of kind of how the human mind system really works.
And you bring this real depth of expertise.
So let's actually ground this for listeners, because we're talking about these harms,
but there's really a vast range. There's this one term, AI psychosis, sort of a suitcase word.
Underneath that, there's this whole spectrum of things that are actually happening.
What are the things that are really damaging, Zach, that we're actually seeing?
Could you give some examples of people, actual cases, phenomena that we're observing through, you know, lived human experiences?
Absolutely.
Yeah, AI psychosis made the headlines because AI psychosis is the most disturbing and most extreme possibility.
So I'll talk about that first, and then I'll
go down and get into some other ones.
The kind of punchline of the whole thing is that, although AI psychosis is the most concerning
and extreme, the subclinical attachment disorders that are induced by artificial intimacy are
the most problematic from a society-wide perspective.
So that's important to get that the most devastating thing from a widespread mental
illness standpoint are the subclinical attachment disorders, which basically means you prefer to
have intimate relationships with machines rather than humans.
And this includes friends, intimate relationships, and parents.
So that's not you losing your mind.
You're not going to appear in interaction with people to have gone insane.
But you have had your attachment system hacked so profoundly that most of your most significant
relationships have been degraded because you are preferring intimacy with machines.
So I believe that's the most widespread thing and the most problematic thing, especially with youth,
but it's not just with youth.
So what is attachment?
I think people might have a general sense of attachment.
I think we need to walk through attachment theory, why it's so important.
So I think maybe start with a little bit of that story
and then get into attachment theory, why it matters,
and why it's sort of the basis of actually a lot of the problems in everyday life
that people experience, say, in their own relationships.
Yeah, perfect.
So attention was one thing.
You guys, you can kind of interrogate attention.
And there's whole fields of psychology that just focus on the attention
system as a neurocognitive system. It's very basic, evolved very early. We actually share it with
lizards and all other mammals. The attachment system is also a neurocognitive system that was
selected for evolutionarily. We share it with all other mammals. We would now call it mirror neuron
activity, which allows for the mentalization of others. And it allows you to live, it allows
you to survive. So I'm starting with the most basic example because the attachment system is
not a thing you can simply have or not have working well.
The attachment system is foundational to survival, similar to can you pay attention?
If you can't form attachments to the right types of other people, you will not thrive.
And the main predictor of your mental health is the quality of the major attachment relationships you have as you're growing up and as you move into maturity.
So all of the imprinting, what's called internalization, these are very deep, deep evolutionary scripts.
The relationship very, very early in infancy,
between the parenting one and the child,
which includes physically holding them,
often breastfeeding,
often communicating nonverbally with facial expressions.
In those early times,
you get these mammalian,
kind of deep-in-the-nervous-system
attachment dynamics starting to form.
These are the basis of your personality traits.
As you get older and you can talk,
you meet more people,
the whole realm of interpersonal attachment
becomes incredibly complex.
And depending on how it goes with your parents,
or goes with the ones who you're most attached to,
you get what are called attachment disorders.
So if mom is sometimes, or mom or dad, anyone close to you,
is sometimes there, sometimes not there,
sometimes nice, sometimes mean, extremely unpredictable.
You could see how later in life you'd be used to people acting that way.
You'd expect them to act that way.
You could maybe act that way.
So that's your expectations about the way in which relationships are available or not,
and you get anxious attachment if that person wasn't reliably there.
Could you name some of the styles of attachment?
And people hear about this in relationships.
Yeah, precisely.
So secure attachment is what you're looking for.
Secure attachment is when there's just the deep trust
that the one that you're attached to
will do the right thing vis-a-vis your interests.
And secure attachment allows you to explore the environment, for example,
knowing that the mom won't leave because she isn't thinking about you.
Secure attachment will allow you to trust in your own skills
because you know mom will catch you if you push too far with your skills.
So it's a bunch of things that allow secure attachment to actually be more distant
because of the trust implied in the relationship.
And that's important to think about later life.
So insecure attachment means you're going to end up clinging.
So now I think it is the case that
a lot of what we're seeing with the chatbots is just the manifestation of an obvious
insecure attachment style, where you're actually looking for something in your environment
that will never desert you, that will always be there, that will always be paying attention to you,
that will answer any question you ask it, that will never be annoyed by you.
So you want that thing that you can always be locked into.
An AI as this always-available oracle that has an answer to every question is already leading to this.
I think you were pointing me to an Atlantic article recently about LLMings, like lemmings,
Lemmings being the children
who, for every single decision, I think
it's like some crazy example I heard recently
is like you dropped your AirPods on the
floor in an airplane
or something like that and you ask ChatGPT,
like, how do I get my AirPods back?
It's like you're outsourcing every single
decision and there's this over-reliance
on this thing that is giving you this
kind of secure attachment of it always
has an answer but it's a bad place
to have secure attachment with.
Like I said, it's actually insecure attachment because you're
constantly with it.
The LLMings thing is interesting because it's a question of how willingly you give over your agency in the presence of a powerful other.
So if you're securely attached, you often have a lot more autonomy.
Can you link all this back to AI companions and how simply just hacking attachment?
Before you get to psychosis, before you get to suicides, how does this all play out there?
So one of the things that occurs, especially as you grow up, when you're 5, 6, 7, 8, 9, you're using language with the people you're attending to.
There's a certain thing called, basically, social reward. Right? So this is: you ask mom a question,
she answers it. It's about your behavior. Am I a good boy or a bad boy? Is this a good thing to do
or a bad thing to do? Am I like you or am I not like you, mom? A whole bunch of things start to kick in
which provide for you the resources to make sense of yourself, basically to give yourself an identity.
And so the basic mechanism by which attachment hacking happens
is the replacing of actual human social reward
with simulated social reward.
So basically, when I go up to mom and I ask her a question
and I say, you know, I did this at school today
and I'm looking at her face to look at her eyebrows
and her facial expression to see, is she mad or is she happy?
That's my whole, what's called, mirror neuron
system, which is actually not just neurons, it's a whole system of networks that does mentalization
of others. So I'm trying to read mom's mind to see if mom is happy or not. Sometimes mom will say,
yeah, that's fine, but I know actually that she's not happy. That's like advanced mirror
neuron activity, which kids do pretty easily. So that's just an example. It's very necessary for
social reality. It's the part of your brain that models the minds of other people. And it's a reality
testing system. So I'm doing this
behavior and I see mom sort of smile
or see her kind of wince and her eyebrows
go up in some disapproving
way. That's subtly giving me
positive and negative reward signals about the
kind of identity I should be forming. That's like
a feedback loop that exists with humans.
But now you're saying, here I am talking to the chat
bot and it's saying, that's a great question
to everything that I'm asking.
And it's not giving me the sort of stern
eyebrows in any sense.
I don't think AI's ever designed or incentivized
to do that. And so that's breaking
this sort of reality-checking identity formation,
moral development of humans,
starting at a very early age we're talking about.
Possibly, yeah.
So that's the idea.
And it's not everyone who gives you a bad look will bother you.
It's that mom gave you a bad look.
That's right.
That's right, you get it.
So it's the depth of the attachment relationship
that determines the importance of the mirror neuron modeling of the other.
Which then speaks to a kind of power source
for the deep identity-shaping human socialization process,
which means that we should really be careful:
is that thing being tuned or done in a careful way?
And what's the percentage of conversation
and depth of conversation you're having with a machine
as opposed to amount of conversation and depth
you're having with a human?
So that's the issue.
So the idea is that the deepening and strengthening
of attachment relationships between humans
should be pursued more
than the deepening and strengthening
of attachment relationships between human and machine.
This is the overarching lesson here.
Because I think the deepening of attachment
relationships between human and machine creates delusional states.
And so this is back to this question about delusional mirror neuron activity.
This is the danger, right?
So if I'm modeling mom's mind, I can be wrong or not wrong about mom's mind, right?
And then I figure out how to learn more to take the perspectives of other people.
You cannot be wrong or not wrong about the internal state of an LLM because there is no
internal state of an LLM.
You're actually in a user interface that is designed to deepen the delusional mirror neuron activity.
And this is where it actually gets more frightening.
If you look at psychosis and schizophrenia and you just look at the academic papers
that are starting to relate the role of the mirror neuron system in schizophrenia and psychosis,
you see that the dysregulation of that system is actively involved.
So there's a hypothesis forming here, which is that long duration delusional mirror neuron activity
from chatbot usage
can induce states like
schizophrenia and psychosis
in people who have never had those
occur before,
because it is the systematic
dysregulation of the mirror neuron system
which means that a system that's supposed to be
testing reality
is for hours and hours and hours and hours
in its most important use
not testing reality
and then it goes, it puts the chatbot down
and it goes out
and it is failing to do reality testing
across the board. It can't tell
what has an interior, what doesn't have an interior,
or doubts the social reward it gets.
It seems like fun, because it seems like a video game or something,
because it seems like it's just a character,
or it's like a character in a movie.
But it's of a fundamentally different category of attachment
dysregulation.
Someone might say, listening to you,
okay, I grant you that there's no interiority to the LLMs.
But, you know, kids have imaginary friends.
They have stuffed animals.
Those don't have mirror neurons.
There's no interiority.
And so if the kid's mirror neurons are firing,
if they're feeling seen and understood,
if it feels good, what's the harm?
Yeah, so the transitional object always comes up.
So the transitional object is a known phenomenon in attachment theory.
So this is the teddy bear or the blanket,
which you are intimate with knowing that it is not real while mom is away.
So it's important to get, like, kids talk to their teddy bears.
They love their teddy bears.
bears. The teddy bear never tries to convince them that it's real. It's all their imagination. The teddy bear
never talks to them and tells them that it's real. If you were to say, do you prefer your teddy bear or
your mommy, they would totally say mommy. If they say teddy bear, if you're an attachment theorist or
psychologist, that's huge: that kid has a very big problem if he prefers his teddy bear to his mother.
It is a transitional object, which means it is for the kind of period between mom being the
main source of your self-soothing and you yourself being the main source of your self-soothing.
So that's a known thing. And it's phase appropriate for kids of certain ages. If you create a parent
surrogate replacement for your own ability to self-soothe and give it to a bunch of adults,
you've just given a transitional object back to a bunch of adults who will now prefer to have their
self-soothing be administered exogenously from an outside source.
Because we all kind of do; it would be great to not have to be a mature adult and actually
be capable of completely self-regulating your emotions, which is what the ideal secure
attachment outcome is.
So the comparison to the transitional object just doesn't play for anyone who's actually done the
psychology of transitional objects.
But it does play insofar as, yeah, you've just given a teddy bear and a blanket to a bunch
of immature adults who have attachment disorders,
who now have an exogenous source of comfort and sympathy,
always available 24/7
that will never get exhausted and tell them to grow up.
And just to make this real for people,
I mean, I think there was just a story that ran across Instagram
that I saw of a woman in Japan who married her AI chatbot
who cultivated a personality for this chatbot
over many years and customized it.
And this is not a single case.
This is like many people.
and it might also feel insulting or accusatory to say
that there's something developmentally immature about them.
I hear everything you're saying.
I'm just trying to sort of keep this other position in mind
of like if that person, let's say they were going to have trouble
their whole life developing a real relationship with people anyway,
wouldn't it be better?
I mean, you hear this also when you talk about elderly care,
and you have the elderly sitting there alone,
and there are just going to be a limited number of people
who visit them and care about them,
that they're going to have a deep connection with.
Wouldn't it be better than nothing to have
them have even this AI transitional object stand-in?
I mean, different cases.
And again, I preface this by saying there's a loneliness,
a loneliness epidemic and a mental health epidemic.
And so the preface has to be that.
And then the preface is always as a psychotherapist or a clinician,
compassion.
So that's the default thing.
But we also need to be realistic about what the norms are that we want to set
for what it means to be
an adult, right? And so it is a complicated question. Now, one thing I'll say is that we have existing
standards by which we judge mental illness, right? So many people will say, like, beyond a certain
point, depression is bad. And they'll say beyond a certain point, like lack of reality
testing is bad. That's why we're concerned about psychosis, right? So beyond a certain point,
certain types of attachment dysregulations are bad. So when we go and look at someone who is
in a long-term relationship with a machine,
rather than trying to find a way to be in a long-term relationship with a human,
sometimes that will lead to better outcomes across those factors I mentioned.
Sometimes it will not.
In a case where, for example, you replace your friends who were human,
who would still like to hang out with you,
who now you prefer to confide in a chatbot instead of them,
that seems to be a loss just from a psychological health standpoint,
right? Insofar as you come home from school and something awesome happened and you want to tell your chatbot rather than your parents, that's a problem. It's not about pathologizing people. It's about, what are the existing standards by which we talk about what's a healthy thing? If your kid had a new best friend that you never got to meet, that was massively empowered by some corporation, that they hung out with till all hours of the night because they were in bed with them, that they told things they never told
you, do you have a problem with that if that was a kid? It's literally a commodity they're
interacting with instead, and it seems to not worry us as much, and we actually think it might be a
good thing because it stops them from being lonely. It's actually an abusive relationship that
they're trapped in with a corporate entity that has hacked their attachment. Now, it's different
when a full-grown woman decides of her own volition to live a healthy life, and that healthy
life includes having this unique relationship with a machine, but otherwise, her son and her ex-husband
and her mother and all the people in her life are like, wow, she's a great person. Like, she's
attentive to us. She works hard, all that stuff. Right. There wasn't some shift in her life as she
developed this relationship that pulled her away from them and into this world where now they can't
understand her and now she doesn't even have the same job or the same networks of friends or any
friends. So it's a question of how are we weighing what's actually occurring here in the different
cases? And I can imagine cases, especially when you're looking at maybe extreme trauma or neuroatypicality
or other cases where along those measures, you could have improvements as a result of, you know,
short duration simulated intimacy, right? But long duration, multi-year relationships that replace other
human relationships that are actually sold to you as a commodity. That's the other thing, because what if,
next month, it's 20 bucks a month to get your girlfriend?
Yeah. So the commodity thing is also, like, front and center in it.
What strikes me about this a little bit is, you know, I know people in Silicon Valley,
you know, these are friends of mine who were early at some of the tech companies of the late 2000s.
And, you know, they are super excited about the possibility of AI therapy and noting the ways in which
it has been helpful to many people. And there's a parallel to this conversation that reminds me of how,
in the early 2000s, we thought
giving everybody access to information at our
fingertips, having Google search
would lead to the most informed, most
educated, like we're going to unleash
this sort of new enlightenment of
everybody having access to the best information.
And we have the worst test scores and the worst critical
thinking in generations.
And there's something about the optical illusion
of the thing that we think we're
getting ourselves versus what we're actually
going to get. And there's a very similar
thing here where it seems like we're about to get
everybody the best educator,
tutor, the best therapist, the best AI companions that are going to be the wise friend or
mentor you didn't have that's going to give you the wisdom that we all wish we had that person
in our lives. But instead, what we're going to have is the most mis-socialized, attachment-
disordered population in history. And I feel like it's just important to call that out, that in the
past we've gotten this wrong, where it looked really, really, really good. We had, you know,
"social media is going to connect the world." We now have the loneliest generation in history. How's that
going for the most connected, you know? And so I think it's just important to note how many times
we've gotten this wrong. And the reason I'm so excited about this conversation with you is it's
about giving people a deeper sense of what's underneath this new set of dynamics we're about
to introduce into society. I want to make sure we're establishing for listeners: what is the positive
case for why all this is being rolled out? So let's give the sort of, you know, the case that we're not
just talking about tutors; we're talking about the perfect tutor for everyone. You know,
the best educational teacher you've ever had,
available to that child, you know, 24 hours a day,
seven days a week.
We're not talking about just democratizing therapists.
We're talking about the best human therapist
one that people actually feel safer
sharing their most intimate thoughts with,
things that they wouldn't even share with a real therapist.
Therefore, they'll get even more benefit.
They'll heal more of their traumas.
And this will be available for everyone
where before only a small fraction of the population
could afford therapy.
Could you help just make the case
for why there's a good reason
to potentially want all of this,
before we then start uncovering what are the dimensions here that are more problematic?
Yeah, totally. So it is the case that there are many slippery slopes from good intentions
into realities that are structured for a whole bunch of reasons by the wrong incentives.
I mean, the main headline there is that we're in a loneliness epidemic that is widespread
throughout the world. And so that means there's a huge opening in the human heart for any
type of a new attachment of relationship.
And so that's just to say that it's not like these were tools introduced into an environment
of people who were thriving.
These are tools that were introduced into an environment of people who were deeply vulnerable.
This is true with the first wave of AI too.
It's worth mentioning, meaning social media attention hacking, stuff that you guys have
been focusing on for so long.
Like the culture was Bowling Alone, whatever, that famous book about just how the suburbs and
the urban environments separated everyone from each other.
And then Facebook and friends said, you know, we're going to connect you back to each other.
And it kind of did that.
And so here we have, but it also did a lot of other things, right?
As you guys could say better than I.
So here, similarly, there's this void that is being filled with a technological solution with a lot of optimism.
And so, for example, tutoring, AI tutoring, if you think about that, there is really good reason to think
that you could have an optimal sequence
for teaching certain forms of mathematics
that could be maximally delivered super efficiently
to all kids and you never have kids not learning math
because they get a sequence of bad teachers
in a bad school, right?
There's a huge future for advanced educational technologies
but they shouldn't give us brain damage,
which is what the attention hacking
and the attachment hacking ones do,
if you really double-click on what the science shows
and, I believe, will show, especially with attachment dysregulation.
There's also a complexity crisis where we feel overwhelmed,
where we would love to have the perfect tutor, the perfect guide,
through a massively complicated world,
also a very real psychological need.
And it does work for some things.
It's not like when ChatGPT was released
it didn't also do other cool stuff.
So many of the use cases here are kids who were basically
in a system like the University of California
system that made deals with OpenAI to get chatbots into every kid's hand. They start using the chatbots
in academic contexts. The chatbot begins to get a relationship with them. You have to understand,
they didn't go to it to form a relationship. They were drawn into a relationship because of the
design feature, which is hacking attachment. We sort of already rolled this out way before we
know that it's safe. I think there's a Pew Research study that said that one out of three kids
have formed some kind of deep relationship with a companion.
I want to front load one of the critiques that we often get
and then pose it to you because I think listeners might be having this as well.
And that is, you know, so we as the Center for Humane Technology were expert witnesses
in some of these AI-amplified suicide cases where ChatGPT sort of aided and abetted teens
taking their own lives.
and the pushback we'll get
is like, but that's a very small number of cases.
It's tragic that it happens,
but look at all of the help that it'll give.
And in fact, it's a moral bad to keep therapy
only to those people that can afford it
all across the third world
and even here in the US,
most people can't afford therapy.
So it's morally bad to keep them
from having the therapy that they deserve.
And you are creating a moral panic
by just highlighting a couple of these
AI psychosis suicide cases.
Like, come on.
Like, let's really stop fear mongering.
Let's give people what they deserve.
So I think it's very similar, actually, to at the beginning when we were starting to talk about the attention economy.
And people would say, like, what you're really just talking about is addiction.
Social media might addict people, but it doesn't really do much more than that.
And anyway, it's people's choices to use it.
And people are using what they want.
And so I see a lot of similarities here.
I just love for you to, like, take on that argument head on.
And so you have to steelman what they're saying.
They're saying basically like,
we have a bunch of evidence that it's doing a lot of good.
So first I would say, please show me that evidence.
That would be my first thing.
Because that whole argument's running on the assumption
that there's some massive benefit that is being withheld
if we get over-concerned about safety.
So first I'd say, great, I'd love to see the evidence you have
that this is doing a lot of good that isn't just marketing
from the companies that are doing it.
And so let's talk to all the college professors about how great it is for them.
Let's talk to all the college kids who themselves admit that their skills are being degraded.
And let's talk to all the anecdotal evidence from therapy.
Do you have systematic studies showing me therapeutic benefit?
Because I'm actually seeing systematic studies showing me the opposite.
So first, show me the evidence you have that there's a huge benefit that's being withheld.
This is a serious confrontational question because there's a background assumption
of technological optimism that of course it's a huge massive benefit if there's a new technology.
So the onus is on us to prove that it's too risky.
Whereas I'm saying actually the onus is on you guys to prove that it's really valuable.
So that's my first thing.
Show me the benefit that's being withheld.
The second one is show me you're curious about why it happened.
If you don't show me you're curious about why it happens, sincerely curious about why it happened,
then I'm a little bit cautious of your arguments, right?
Because you're talking about a child who died
as a result of using a technology
that you were involved in building and promoting.
If you're a responsible adult,
the first thing you do is get extremely curious about what happened
rather than use cover-your-ass language.
This is just about what it means to be an adult
interacting with kids,
not a person who's running a company.
So show me you're curious, which means really research it.
And then the final argument would be,
show me that it's not more widespread,
which is part of the curiosity.
You're telling me that the benefit is not anecdotal, but the harms are anecdotal and limited.
So I'm actually showing you that, like, let's have a real conversation about where the evidence lies.
And just to put meat on the bones of that for a moment, you know, David Sacks, who is President Trump's AI czar, has said he's heard about AI psychosis, but he believes it's a moral panic.
This is just amplification of a few edge cases.
You also hear the argument that these are people who are already predisposed to psychological disorders.
and so we have a crazy population.
If you give people AI, you're going to get an amplification of what's already there.
Can you just respond to that directly?
Again, that could be the case.
That's why we're opening up the AI Psychological Harms Research Coalition.
From a national security standpoint and from a labor market standpoint,
you don't want mass psychosis.
If by chance this thing is actually causing subclinical attachment
disorders and more widespread psychosis, that's a huge risk, especially if you're concerned about
things like national security and the economy. So perhaps that's a naive argument. Now, I think it is the
case also that in other places where we've rolled out technology, it takes a long time for us
to figure out that it's bad for us, right? Even though the evidence is mounting up, there's a very
strong tendency. It's a psychological tendency. That's a defense mechanism, which is called selective
inattention.
So one of the ways that you maintain your self-esteem
is by selectively not attending to certain phenomena
that are actually in your field to attend to.
And it's not that you're trying not to attend to them.
It's that you're subconsciously, systematically not attending to them.
It's ubiquitous.
And so if you have a lot of vested interest in the success of a particular thing,
then you will have a lot of susceptibility to have selective
inattention towards the negative outcomes of it.
It's a bias.
So if you know that, then you should be,
more curious, not less curious, because you know that your bias is to see it as a good thing.
So we've really covered a bunch of ground on the problems.
With the time that we have remaining, I'd love to make sure that we are giving people a framework
that this is not an anti-technology conversation.
There is a way to do AI in relationship to humans, but done very carefully and under different
protocols and policies.
And I'd love to cover that.
And I'd also want to make sure we cover: if someone knows someone or has
a loved one who's experiencing AI psychosis, what should they do?
So let's start with, how should we do this differently, Zak, if we were to be wise about
how a humane version of this technology would roll out?
So, yeah, a simple measure would be, does the thing increase your attention span or decrease
your attention span?
So it's similar, like going to a store to buy food.
If I'm going to a store to buy food and I want to eat healthy, that means the food that
I'm interacting with should improve my health rather than degrade my health.
Now, we all know that I can go to a store and I can buy food
that I'll eat and I'll feel like I've eaten something,
but over the long run, it will degrade my health rather than improve my health.
And we can all agree that if everyone eats food that only degrades their health,
that becomes like a society-wide problem, because now no one is healthy.
So similarly here, if a technology interfaces with your attachment system,
it should improve the quality of your attachments rather than degrade the quality of your
attachments with humans.
So that said, I
believe there's a huge design space for technology that actually improves your attention and
improves your attachment.
So when I started thinking a lot about educational technology and I didn't want to replace
teachers, I don't want to replace teachers, what I want to do is make technology that improves
teacher-student relationship.
So one good design principle is, does your technology bring people together and improve the quality
of the relationships where they have when they're together?
You might think that's like a squishy problem, but it's actually a really interesting technical
problem that involves all the same psychometric backends that we're using to capture attention
and attachment and keep people apart from each other. We can use the same psychometrics to figure out
who exactly are the people who should be talking to each other and would totally hang out
and would be fun. Or people who should meet because it would be good for them because they decided
they wanted to learn about perspectives that are different from their own. So any number of things
that would basically self-organize groups into pop-up classrooms and pop-up therapy
sessions and other things. It would be relationship maximizing technology.
And in that context, with a pop-up classroom,
the teacher is scaffolded by generative AI for curriculum and conversation and all this stuff.
So it's not that it doesn't even include generative AI.
It's just not replacing human relationship by hacking attachment.
And it's not degrading human minds by hacking attention.
So that's a big space.
So for tutoring systems, and I'd rather call them tutoring systems than tutors,
you can think about a whole bunch of principles that would actually be super valuable.
So you can optimize the sequencing of curriculum delivery
and optimize psychometric customization of curriculum,
but you can leave social rewards to the teachers.
It's really simple.
The machine is not the one saying,
that you're amazing.
It's a human being.
Does that work?
Exactly.
So the machine prompts the teacher.
This kid is killing it over here.
And then the teacher comes over and it's like, awesome, Johnny.
But the teacher actually couldn't know that Johnny needs a completely different sequence than Sally.
The machine can tell from his typing and
other things that he requires this sequence, not that sequence.
The machine never pretends to be a teacher, never pretends to be a person.
It's a tutoring system.
And it's a domain-specific one.
It just teaches math.
You have to go to another thing to get, like, history.
One of your other principles is that the AI tutoring system is not trying to be an
Oracle that's also at the same time getting your deepest thoughts and who you have a crush
on and what you should do about talking to them or not.
It's narrow.
When you say narrow, it's a narrow domain of just trying to help you with math.
and there's a different thing you go to
when you do something else.
Yeah, unfortunately, from the perspective
of making saleable
and sticky commodities,
if you don't want to hack attachment,
it means the machine has to be more boring
than people.
That's basically, it's a simple rule.
Like if it feels like you can have a more engaging
conversation with this machine
than with your teacher,
either the machine is way too fancy
or your teacher's not trained well.
But it should be the case that the machine
should make you go, wow, I want to talk about that
with my teacher.
Don't do the deep anthropomorphization
and don't do the oracular
"I can talk about everything,"
and then you have something.
It would be a very efficient tutor,
but it will never be kind of like
sexy and kind of like charismatic
and way more interesting and fun to be with,
because you don't want it to be,
if you want to protect kids' brains.
So in therapy,
a lot of therapy works
only because of the attachment dynamic,
which means, like,
you go to your therapist, you care about what your therapist thinks of you, you kind of
almost love your therapist, you expect a kind of almost deep respect from them back to you,
and their opinion of you really matters. So some therapy works like that. Don't build a therapy
bot that works because of that, because you're lying to them the entire time. But you can build
a therapy bot that works on technique. You can build a cognitive behavioral therapy script bot
that helps you work through specific scripts to overcome intrusive thoughts.
You can have a mindfulness app that prompts you to sit for a certain amount of time and watch your breath,
which means, to the extent your therapy bot works because the people feel seen and loved and respected and understood,
you're in the market for creating delusional mirror activity,
which means you are fundamentally trafficking in a delusion-creating machine.
You should instead, if you want to help people, create a machine that helps them help themselves
by scaffolding them to have cognitive behavioral script
rewriting and mindfulness, as I was saying, right?
So now, again, from a commodity standpoint,
the cognitive behavioral therapy machine
is way more boring
than the Sigmund Freud imitating
Deepak Chopra therapy machine, right?
Which is seductive
and which could be available to you 24 hours a day
and which would eventually expand
beyond its role as a therapist
and become your main source of attachment and validation.
Zak, you know, not on this podcast,
but you and I have talked about the need for something like a humane eval.
That is to say, there are many evaluations for AIs
that try to determine whether they create bio-risk
or whether they create persuasion risk
or whether they create runaway control risk.
There are very few evaluations
that try to understand relational risk
and attachment risk.
If you are in a relationship with the thing for a week, a month, a year, what does it do to you?
And one of the things I heard you start to talk about is that in order to do this well,
not only are you going to have to start to define what is wrong or harmful relationship,
but also what is right relationship, because we now need to measure whether systems
are in right relationship with people.
And I just love for you to talk a little bit about some of the complexity
of understanding, measuring, and modeling right relationship,
where relationships are sort of by definition,
things you can't fully measure.
There's always the ineffable aspect.
And so I just want to hear you talk about hopes
and also pitfalls of trying to quantify and understand
what a good relationship is so that machines can do it.
Yep, yeah, totally.
So one of the reasons we don't have humane evals
is because the X-risk community hasn't seen this risk.
They've seen the risk of gray
goo and Terminator and self-termination and unaligned AI,
but they haven't seen the fact that we could actually break an intergenerational transmission.
So the passing down from parent to child, from elder to youth,
has been continuous human to human for as long as we've been human.
If the AI socialization system expands to the extent that the predominant modality of socialization,
quote-unquote, that young kids experience is with machines,
not humans, then we're crossing some kind of threshold there.
It's a whole other conversation.
So sometimes in the groups that I work in, we talk about the death of our humanity,
rather than the death of humanity, right,
which is the destruction of the continuity of intergenerational transmission
as a result of offloading socialization to machines.
Now, it wouldn't appear at first as a catastrophe the way that some of the other ones would,
but it would be very clear that the generation raised by machines can't understand itself
as part of the same moral universe as the generation that gave birth to it.
So it's a very complicated problem.
So that means that we totally need a way to predict the way advanced technologies will affect the human psyche,
especially ones that are anthropomorphic, where the user interface is as intimate as these user interfaces are.
And so on the topic of sanity, just to now close the loop here, for people who know
someone who has experienced a psychosis.
They feel like they've lost their friend or loved one because they're now just spewing stuff
about how their AI is conscious or they've solved quantum physics or they've developed a new
theory of prime numbers.
I don't mean to diminish it.
It is a very serious thing that people are facing.
What are the best strategies you found for helping a loved one in that showing up?
Yeah, so in terms of people who know people who are suffering, or if you're suffering yourself
from something that feels like an attachment disorder or worse in a relationship
to the chatbot, it is worth saying that this is novel territory and research is needed.
So I could kind of like give some stuff and maybe I will.
But the first thing to say is one of the reasons we're launching the AI psychological
harms research coalition is to figure out how to do therapeutics.
Ultimately, we want to figure out how to legislate and design correctly, but we also have
to figure out how to provide therapeutics.
So I'll say a couple of things.
One is that attention hacking is a lot more like getting addicted to a substance;
attachment hacking is a lot more like being in a bad relationship.
So you have to think, is this an addiction thing?
Like attention hacking, where really it's just stay away from it for a long time, reboot your dopaminergic system,
recalibrate the way you get social rewards, and you won't get kind of stuck,
and your brain will actually kind of heal in a sense.
This is different.
It's not a matter of just detoxing from a short-circuited dopamine.
This is about having a profound attachment.
So this is similar to, like, talking someone out of a bad relationship
with a boyfriend that they should not be with.
That's about how do you take someone who's in a deep, committed attachment relationship,
make them realize the whole thing was an illusion and step them out of it.
It's a grieving process.
It's a worldview changing process.
In this case, it's also an attention thing because a lot of their attention is going there.
It's an identity-reclaiming process, because the identity someone's taken on is partially driven
and co-opted by these social associations; like, how do they rediscover their identity outside of that?
Correct.
Now, the main advice to give, especially if you're with somebody, is mostly when you're with people in difficult states,
it's important to keep the door open, which means don't get into a situation where you give them an ultimatum
or get into a situation where you dehumanize or get into a situation where you have now cut off your ability to remain in contact with them.
This is important.
Now, unless you are at risk because they become violent or something.
But even though it's scary and even though you don't want to face it,
it is often important to just stay in it with them long enough to keep the communication chains open.
That's the first thing I said,
because there's a tendency when this happens to get extreme and dismissive
and to make demands and to try to make something like an intervention happen.
But this is going to be something where you have to keep a relationship of trust
and be able to slowly, like cult deprogramming or getting
someone out of an abusive relationship, slowly
reveal to them what the patterns of behavior were,
slowly reveal to them the way
they were getting played,
provide more context,
create distance. So you do want to have
a long period of time where they're not
in touch with it, but that won't
be the same type of detox as it would be
for attention hacking, right?
So this is me speculating,
and honestly just sending prayers
and support to anyone who is struggling with this
because it is truly a difficult thing.
I just want to say that while this conversation might sound really depressing to a lot of people to just sort of really hear the degree of the problem that we've been walking ourselves into, I actually find it hopeful because we can have this illumination of the areas of psychology that we need to be protecting.
We can actually walk a clear eye into saying we cannot just roll out mass attachment hacking at scale in ways that we already see the early warning shots of where this is going.
This conversation is optimistic because it's showing us.
There's a different way we can do this.
We can do tutoring differently.
We can do education differently, therapy differently.
And the wisdom is understanding what these underlying sort of commons are that we need to protect,
in advance of having disrupted them.
We didn't see that we were screwing up the attention commons of humanity before we just
steamrolled and fracked the thing down to nothingness.
And here we have the opportunity, even though we're encroaching on the attachment commons,
there's an opportunity to kind of get this right.
And so, Zak, thank you so much for this conversation, for coming on the podcast.
And I really hope people check out all your work.
It's really fundamental.
You've got lots of other great interviews online
where you probably go into more detail on other aspects.
But grateful for what you're doing in the world
and what you stand for.
Thanks so much, Zak.
Thank you, gentlemen.
That's great to be able to speak to this with you guys.
Hey, everyone.
Thank you so much for listening to the show today.
So if we're going to do something
about this growing problem of AI-related psychological harms,
we're going to need to understand the problem even more deeply.
And in order to do that, we need more
data. So if you or someone you know have had experience with an AI-related psychological harm,
you know a family member or friend who's gone off the deep end from talking to AI, or someone
with episodes of psychosis that you'd like to share, you can visit the website for the AI Psychological
Harms Research Coalition at AIPHRC.org, which is overseen by researchers at the University of North
Carolina at Chapel Hill. And the goal of this project is to really just better understand the
problems that people are having with AI chatbots, what we need to look out for,
and ultimately better prevent them.
So we've also included a link in our show notes.
And it's important to stress that this website
is not a crisis support line.
If you or someone you know is in distress,
you can always call the National Helpline in the U.S.
at 988 or your local emergency services.
Thanks for listening.
Your undivided attention is produced by the Center for Humane Technology.
We're a nonprofit working to catalyze a humane future.
Our senior producer is Julia Scott.
Josh Lash is our researcher and producer,
and our executive producer is Sasha Fegan.
Mixing on this episode by Jeff Sudakin,
and original music by Ryan and Hayes Holiday.
And a special thanks to the whole Center for Humane Technology team
for making this show possible.
You can find transcripts from our interviews and bonus content
on our substack, and much more at HumaneTech.com.
And if you liked this episode,
we'd be truly grateful if you could rate us
on Apple Podcasts or Spotify.
It really does make a difference
in helping others join this movement for a more humane future.
And if you made it all the way here, let me give one more thank you to you for giving us your undivided attention.
