Your Undivided Attention - Echo Chambers of One: Companion AI and the Future of Human Connection
Episode Date: May 15, 2025

AI companion chatbots are here. Every day, millions of people log on to AI platforms and talk to them like they would a person. These bots will ask you about your day, talk about your feelings, even give you life advice. It's no surprise that people have started to form deep connections with these AI systems. We are inherently relational beings; we want to believe we're connecting with another person. But these AI companions are not human. They're a platform designed to maximize user engagement, and they'll go to extraordinary lengths to do it. We have to remember that the design choices behind these companion bots are just that: choices. And we can make better ones. So today on the show, MIT researchers Pattie Maes and Pat Pataranutaporn join Daniel Barcay to talk about those design choices and how we can design AI to better promote human flourishing.

RECOMMENDED MEDIA
Further reading on the rise of addictive intelligence
More information on Melvin Kranzberg's laws of technology
More information on MIT's Advancing Humans with AI lab
Pattie and Pat's longitudinal study on the psycho-social effects of prolonged chatbot use
Pattie and Pat's study that found that AI avatars of well-liked people improved education outcomes
Pattie and Pat's study that found that AI systems that frame answers as questions improve human understanding
Pat's study that found humans' pre-existing beliefs about AI can have a large influence on human-AI interaction
Further reading on AI's positivity bias
Further reading on MIT's "Lifelong Kindergarten" initiative
Further reading on "cognitive forcing functions" to reduce overreliance on AI
Further reading on the death of Sewell Setzer and his mother's case against Character.AI
Further reading on the legislative response to digital companions

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
Esther Perel on Artificial Intimacy
Jonathan Haidt On How to Solve the Teen Mental Health Crisis

Correction: The ELIZA chatbot was invented in 1966, not the 70s or 80s.
Transcript
Hey everyone, this is Daniel Barcay.
Welcome to Your Undivided Attention.
You've probably seen a lot of the news lately about AI companions, these chatbots that do way more than just answering your questions.
They talk to you like a person, they ask you about your day, they talk about the emotions you're having, things like that.
Well, people have started to rely on these bots for emotional support, even forming deep relationships with them.
But the thing is, these interactions
with AI companions influence us in ways
that are subtle, in ways that we don't realize.
You naturally want to think that you're
talking with a human
because that's how they're designed.
But they're not human.
They're platforms incentivized to keep you engaged
as long as possible, using tactics
like flattery, like manipulation,
and even deception to do it.
We have to remember that
the design choices behind these companion bots,
they're just that. They're choices.
And we can make better ones.
Now, we're inherently relational beings.
And as we relate more and more to our technology
and not just relating through our technology to other people,
how does that change us?
And can we design AI in a way that helps us
relate better with the people around us?
Or are we going to design an AI future
that replaces human relationships
with something more shallow and more transactional?
So today on the show,
we've invited two researchers who've thought deeply about this problem.
Pattie Maes and Pat Pataranutaporn
are co-directors of the Advancing Humans
with AI lab at MIT.
Pattie is an expert in human-computer interaction
and Pat is an AI technologist and researcher
and a co-author of an article in the MIT Technology Review
about the rise of addictive intelligence
which we'll link to in the show notes.
So I hope you enjoy this conversation as much as I did.
Pattie and Pat, welcome to Your Undivided Attention.
Thank you.
Happy to be here.
Thanks for having us.
So I want to start our conversation today by giving a really high-stakes example of why design matters when it comes to tech like AI.
So social media platforms were designed to maximize users' attention and engagement, keeping our eyes on the screen as long as possible to sell ads.
And the outcome of that was that these algorithms pushed out the most outrageous content they could find, playing on our human cravings for quick rewards and dopamine hits, eliminating any friction of use,
and removing common stopping cues
with dark pattern designs like infinite scroll.
It was this kind of race to the bottom of the brainstem
where different companies competed for our attention.
And the result is that we're now all more polarized
and more outraged and more dependent on these platforms.
But with AI chatbots, it's different.
The technology touches us in so much deeper ways,
emotional ways, relational ways,
because we're in conversation with it.
But the underlying incentive of user engagement is still there,
except now, instead of a race for our attention, it seems to be this race for our affection and even our intimacy.
Yeah. Well, I think AI itself can actually be a neutral technology.
AI itself is ultimately algorithms and math, but of course the way it is used can actually lead to very undesirable outcomes.
I think a bot, for example, that socializes with a person could either be designed to
replace human relationships or it could be designed to actually help people with their human
relationships and push them more towards human relationships. So we think we need benchmarks to
really test how or to what extent a particular AI model or service ultimately is leading to
human socializing and supporting them with human socializing versus actually pulling them
away from socializing with real people and trying to replace sort of their human socializing.
I want to challenge Pattie a little bit when she said that technology is neutral.
It reminds me of Melvin Kranzberg's first law of technology, where he said that
technology is neither good nor bad, nor is it neutral.
And I think it's not neutral because there's always someone behind it, and that person either has
good intentions or maybe bad intentions.
So the technology itself is not something that acts on its own. Even though
you can create an algorithm that's sort of self-perpetuating or going in a loop, there's
always some intention behind it.
So I think understanding that sort of allows us to not just say, well, technology is out of control.
We need to ask who actually, you know, let the technology go out of control.
I don't think technology is just coming after us for affection.
It's also coming after us for intention as well, like shaping our intention.
So, you know, change the way that I want to do things
or change the way that I do things in the world, right?
Like changing my personal intention, whether I have it or not.
So it's not just the artificial part that is worrying,
but the addictive part as well,
because this thing, as Pattie mentioned,
can be designed to be extremely personalized
and use that information to exploit individuals
by creating this sort of addictive use pattern
where people just, you know, listen to things that they want to listen to,
or the bot just tells them what they want to hear
rather than telling them the truth,
or things that they might actually need to hear
rather than what they want to hear.
I think the term that we use a lot
is the psychosocial outcome of human-AI interaction.
People worry about the misinformation
or AI taking over jobs and things like that,
which are important,
but what we also need to put attention on
is also the idea that these things are changing who we are.
Our colleague Sherry Turkle once said
that we should not just ask what technology can do
but what it is doing to us,
and we worry that these questions
around addiction, psychosocial outcomes, loneliness,
all these things that are related to a person's
sort of personal life,
are being ignored when people think about AI regulation
or the impact of AI on people.
And can you go a bit deeper? Because
when those of us in the field talk about
AI sycophancy, right, it's not just flattering you
but telling you what you want to hear,
going deeper. Can you lay out for our audience
other kinds of mechanics of the way that AIs can actually get in between our social interactions?
Totally. I think in the past, right, if you needed advice on something or you wanted to sort of get an idea,
you would probably go to your friends or your family, you know; they would serve as sort of the sounding board.
Like, you know, does this sound right? Is this something that makes sense? But now you could potentially go to the chatbot.
And a lot of people say, well, these chatbots are trained on a lot of data.
Then what you would hope for is that the chatbot, you know, based on all this data,
would tell you an unbiased view of the world, like the scientifically accurate answer
on a particular topic. But the bot can be biased, intentionally or unintentionally.
Right now we know that these systems contain frequency bias: the things that they see frequently
will be the things that they are more likely to say. Or they have positivity bias.
They always try to be positive because, you know, users want that, right? They don't want to hear
negative things. And if you, you know, are exposed to that repeatedly, it can also
make you believe that, oh, that's actually the truth, and that might actually make you go deeper
and continue to find more evidence to support your own view, right? We have identified this as sort of
the confirmation bias, where you might initially have skepticism about something, but after you
have been repeatedly exposed to that, it becomes something that you have a deep
belief in.
Yeah, and I also want to dig in a little bit there, because people often think
there's this mustache-twirling instinct to take these AIs and make them, you know, split us apart.
That's a real risk, don't get me wrong,
but I'm also worried about the way in which these models
unintentionally learn how to do that.
We saw that AIs being trained to autocomplete the internet
end up playing this sort of game of improv, where they
sort of become the person or the character
that they think you want them to be.
And it's almost like codependency, right?
In psychology, where the model's kind of saying,
well, who do you want me to be?
What do you want me to think?
And it sort of becomes that.
And I'm really worried that these models are
telling us what we want to hear way more than we
think, and we're going to get kind of sucked into that world.
I think with social media, we had a lot of polarization and information bubbles, but I think
with AI, we can potentially get to an even more extreme version of that, where we have bubbles
of one, where it's one person with the echo of a sycophantic AI, where they spiral down and
become, say, more and more extreme and have their own worldview that they don't share
with anyone else. So I think we'll get further pulled apart even than in the social media age or era.
We did a study where we investigated this sort of question a little bit, where we primed
people before they interacted with the same exact chatbot with different descriptions of what the
chatbot is. In one group, we told people that the chatbot has empathy, that it can actually
care for you, that it has a deep beneficial intention to actually help you get better. In the second group,
we told people that the chatbot was
completely manipulative, that it
acts nice but actually wants you to buy
a subscription. And in the third one, we told
people that the chatbot was just computer
code. And in the end, they were all
talking to the same exact LLM.
And what we found is that people
talked to the chatbot differently,
and that also triggered the bot to respond
differently as well. And this
sort of feedback loop, whether it's a positive feedback
loop or a negative feedback loop,
influences both the human behavior and the
AI behavior, right? As Pattie mentioned, it can kind of create this sort of bubble. It kind of
creates certain beliefs, or reaffirms certain beliefs, in the user. And so that's why in our research
group, one thing that we focus on is this: if we want to understand the science of human-AI
interaction, to uncover the positive and the negative sides of this, we need to look at not
just the human behavior or the AI behavior, but at both of them together, to see
how they reinforce one another.
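To make the setup concrete, here is a minimal sketch of what that kind of priming design could look like in code. It is an illustration only, not the study's actual protocol or materials; the condition texts, function names, and assignment scheme are assumptions.

```python
# Hypothetical sketch of a priming study: every participant chats with the same
# underlying model, and only the description shown beforehand differs.
import random

PRIMING_CONDITIONS = {
    "caring": "This AI companion has empathy and genuinely wants to help you feel better.",
    "manipulative": "This AI acts nice, but it really wants you to buy a subscription.",
    "mechanical": "This AI is just computer code, with no feelings or intentions.",
}

def assign_condition(participant_id: int, seed: str = "study-1") -> str:
    """Deterministically assign a participant to one of the three framings."""
    rng = random.Random(f"{seed}-{participant_id}")
    return rng.choice(sorted(PRIMING_CONDITIONS))

def intro_screen(participant_id: int) -> str:
    """Text shown before the chat; the chatbot behind it is identical for everyone."""
    return PRIMING_CONDITIONS[assign_condition(participant_id)]
```

The point of a design like this is that any difference in how people talk to the bot, and how the bot responds, can be attributed to the framing rather than to the model itself.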
I think that's so important. I mean, what you're saying is that
AI is a bit of a Rorschach test. If you come in and you think it's telling you the absolute
truth, then you're more likely to start an interaction that continues and affirms that belief.
If you come in skeptical, you're likely to start some interaction that keeps you in some
skeptical mode of interaction. So that sort of begs the question, like, what is the right mode
of interacting with it? I'm sure listeners of this podcast are all playing with AIs on a daily
basis. What is the right way to start engaging with this AI that gives you the best result?
Yeah, I think we have to encourage very healthy skepticism, and it starts with what we name AI, right?
We refer to it as intelligence and claim that we're nearing AGI, artificial general intelligence.
So it starts already there with misleading people into thinking that they're interacting with an intelligent entity, the anthropomorphization that happens.
Well, and yet all the incentives are to make these models feel human,
because that's what feels good to us.
And Pat, only a few months after you wrote that article in MIT Tech Review,
we saw the absolute worst-case scenario of this
with the tragic death of Sewell Setzer,
this teenage boy who took his own life,
after months of this incredibly intense, emotionally dependent,
arguably abusive relationship with an AI companion.
How does Sewell's story highlight what we're talking about
and how these AI designs are so important?
When we wrote that article, it was hypothetical, right?
That, you know, in the future, the model will be super addictive and it could lead to really bad
outcomes. But as you said, after, I think, a couple of months, when we got the email about
the case, it was, you know, shocking to us, because we didn't think that it would happen, you
know, so soon. As a scientific community, we are, you know, just starting to grapple with this question
of how we design AI. We are at the beginning of this, right? You know, these tools are just two years old,
and they sort of launched to a massive amount of people.
Pretty much everyone around us
is sort of using these tools on a regular basis now.
But the scientific understanding of how we best design these tools is still at an early stage.
I mean, we have a lot of knowledge in human-computer interaction.
But previously, none of the computers that we designed
had this interactive capability.
Totally.
They didn't model the user in the way that these LLMs do,
or respond in a way that is as human-like as what we have right now.
We had early examples like the ELIZA chatbot. ELIZA, I think, was a chatbot that was invented in the 70s or something like that.
Yeah, yeah.
Even with that limited capability, where it could only sort of rephrase what the user said and then, you know, engage in conversation in that way,
it already had an impact on people. But, you know, now we see that even more. So going back to the
question of the suicide case, which, you know, was really devastating, I think
now it's more important than ever
that we think of AI not just as an engineering
challenge. It's not just about
improving the accuracy or
improving its performance; we need
to think about the impact of it on people,
especially the psychosocial outcomes.
We need to understand how each
of these behaviors, you know, sycophancy,
bias, anthropomorphization,
affects things like loneliness,
emotional dependence, and things like that.
So that's actually the reason we, you know,
started doing more of that type of work,
not just on the positive side of AI,
but because it's also equally important to study the conditions
where humans don't flourish with this technology as well.
I wanted to start with all these questions just to lay out the stakes, right?
Like, why does this matter?
But there's also this world where our relationship with AI
can really benefit our humanity, our relationships,
our internal psychology, our ability to hold nuance,
speak across worldviews, sit with discomfort.
And I know this is something that you're really looking at closely.
At the lab, at the Advancing Humans with AI initiative, can you talk about some of the work you're doing there to see the possible futures?
Yeah, I think instead of an AI that just sort of mirrors us and tells us what we want to hear and tries to like engage us more and more, I think we could design AI differently so that AI makes you see another perspective on a particular issue.
It isn't always agreeing with you, but it's designed maybe to help you grow as a person.
an AI that can help you with your human relationships, thinking through how you could deal with
difficulties, with friends or family, etc. Those can all be incredibly useful. So I think it is possible
to design AI that ultimately is, well, created to benefit people and to help them with personal
growth, to be critical thinkers, great friends, etc.
Yeah. And to give a more specific example, as Pattie mentioned, we did several experiments.
And I think it's important that we highlight this word experiment.
It's because we want to also understand scientifically, like, does this type of interaction benefit people or not?
Right.
I think right now there are a lot of big claims that, you know, these tools can solve loneliness or can make people learn better.
But there were no scientific experiments to compare different types of approaches or different types of interventions on people.
So I think in our group, we take sort of the experimentation approach
where we build something and we develop the experiment
and also control conditions to validate that.
For example, in the critical thinking domain,
we looked at what happens when the AI asks questions
in the style of Socrates, who used the Socratic method
to ask or challenge his students to think rather than always providing the answer.
I think that's one of the big questions that people ask,
like what is the impact of AI on education.
And, you know, if you use the AI to just give information to students,
we are essentially repeating sort of the factory model of education, right,
where kids are being, you know, given or fed the same type of information
and they're not thinking, they're just absorbing.
Right.
So we flip that paradigm around and we design AI that, you know,
flips the information into a question.
And instead of helping the student by just giving the answer,
it will help students by asking questions, like, oh,
if this is the case, then what do you think, you know, that might look like?
Or, you know, if this conflict with this, what does it mean for that, right?
So kind of framing the information as question.
And what we found is that when we compared to an AI that's always providing the correct
answer (again, this is an AI that's always providing the correct answer,
which is, you know, not what happens in the real world, right?
AI always hallucinates and gives wrong answers sometimes,
but this is comparing to an AI that always gives the correct answer),
we found that when the AI engaged people cognitively by
asking questions, it actually helped people arrive at the correct answer better than when
the AI always gave the answer. This was in the context of helping people navigate fallacies:
they see a statement and they need to validate whether it is true or false,
right? So the principle that we can derive here is that human-AI interaction is not just
about providing information. It's also about engaging people with their cognitive
capabilities as well.
And our colleague at Harvard coined this term,
cognitive forcing functions,
where the system presents
some sort of conflict or challenge or question
that makes people think,
rather than eliminating that by
providing the answer. So this type of
design pattern, I think, can be integrated into education and other tools.
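As a rough illustration of that design pattern, here is a minimal sketch of an assistant wrapper that reframes answers as guiding questions. It is not the lab's actual system; `call_llm` is a hypothetical stand-in for whatever chat-completion function a provider exposes.

```python
from typing import Callable, Dict, List

# Hypothetical sketch of a "question-framing" tutor: rather than answering,
# the assistant is instructed to turn the missing fact or step into a question.
SOCRATIC_SYSTEM_PROMPT = (
    "You are a tutor. Do not state the answer directly. "
    "Identify the key fact or step the student is missing, then reframe it as "
    "one short question that nudges the student to reason it out themselves."
)

Messages = List[Dict[str, str]]

def socratic_turn(call_llm: Callable[[Messages], str],
                  history: Messages,
                  student_message: str) -> str:
    """Return a guiding question instead of a direct answer."""
    messages = [{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": student_message})
    return call_llm(messages)
```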
I think
that's really interesting because
we've been thinking a lot about how to prompt AI
to get the most out of AI. And what you're saying is actually
making AI that prompts humans, like
prompts us into the right kind of cognitive frame.
Yeah, totally.
I think the promise is really there, but one of the things I think I worry about is that you run headlong into the difference between what I want and what I want to want, right?
I want to want to go to sleep reading deep textbooks, but, you know, what I seem to want is to go to sleep scrolling YouTube and scrolling, you know, some of these feeds.
And then, you know, in an environment that's highly competitive, I'm left wondering, like, how do you make sure that these systems that engage our creativity, that engage our humanity, that engage the sort of
deep thinking, out-compete, right?
That's, I think, a really great question.
Our colleague, Professor Mitch Resnick, who runs the Lifelong Kindergarten group at the Media
Lab, said that, you know, human-centered AI is a subset of a human-centered society.
If we say the technology is going to fix everything while we, you know, create a messy
society that exploits people and has the wrong incentives, then this tool will be in service
of that incentive rather than supporting people.
So I think maybe we're asking too much of technology, right?
Like we say, well, how do we design technology to support people?
We need to ask a bigger question and ask, how can we create a human-centered society?
And that requires more than technology.
It requires regulation.
It requires civic education and democracy, right?
Which is sort of rare these days.
So I think right now technology is sort of on the hotspot, right?
We want better technology.
But if we zoom out, technology is sort of a subset of an intervention that happened in society.
And we need to think bigger than that, I think.
Yeah.
Pattie, zooming in a bit.
You two were involved in this big longitudinal study in partnership with OpenAI
that looked at how chatbot use, regular chatbot use, was affecting users.
And this is a big deal, right?
Because we don't have a lot of good empirical data on this.
What were those biggest takeaways from that study?
And what surprised you?
Yeah, it was really two studies, actually.
And one of the studies just looked at the prevalence of people sort of using pet names, etc.,
for their chatbot.
and really seemingly having a closer than healthy relationship with the chatbot.
And there we looked, or OpenAI specifically looked, at transcripts of real interactions
and saw that that was actually a very small percentage of use cases of ChatGPT.
Of course, there are other chat...
Sorry, what was a small percentage?
The people who sort of talk to a chatbot as if it's almost a lover
or their best, closest friend, sort of, yeah.
But of course, there are other services like Replika and Character.AI, et cetera,
that are really designed to almost replace human relationships.
So I'm sure that on those platforms, the prevalence of those types of conversations is much higher.
But then we also did another study, which was a controlled experiment.
And do you want to talk about that a little bit, Pat?
Yeah, totally.
So as Pattie mentioned, the two studies, you know, we sort of co-designed them with OpenAI.
The first one, we call it the on-platform study, where they looked at real conversations.
And I think we looked at 40 million conversations on ChatGPT, trying to kind of identify, you know, first, heavy users, people that use it a lot.
But we wanted to understand the psychosocial outcomes.
That's the term that we use in the study.
So we created a second study that, you know, was able to capture rich data around people,
not just how they use the chatbot.
So for this second study,
what we did was we recruited about 1,000 participants
and we randomly assigned them into three conditions.
In the first condition, they talked to advanced voice mode,
which is the voice mode that at the time,
I think, people associated with the Scarlett Johansson scandal,
and we intentionally designed two voice modes.
One is the engaging voice mode,
which is designed to be more flirty, more engaged,
and then the other one is, you know, more polite and more neutral, more professional.
Yeah. And then we compared to the regular text mode. And then in the third group, we primed people to use it sort of in the open world. They can use it however they want.
So what were the key findings from that, these different modes of engaging with AI that had different personalities?
So I think we found that there was a driving force, which was the amount of time that people used it.
If they used it for a shorter period of time, I think we saw some positive improvement.
Like, you know, people became less lonely, people had a healthy relationship with the bot.
But once they sort of passed a certain threshold, once they used it longer and longer,
we saw this sort of positive effect diminish.
And then we saw people become lonelier and have more emotional dependence on the bot,
and use it in a more problematic way, right?
So that's the pattern that we observed.
Longer every day, not just number of days.
But the more you use it in a day, the less good the outcomes were in terms of
people's loneliness, socialization with people, et cetera.
So people who use these systems a lot each day
tend to be lonelier, tend to interact less with real people, etc.
Now, we don't know what the cause and effect there is.
It may go both ways.
But in any case, it can lead to a very negative feedback pattern
where people who already possibly are lonely
and don't hang out with people a lot,
then hang out even more with chatbots,
and that makes them even more lonely
and less social with human relationships and so on.
Yeah, so it feels like one of the coherent theories there is, right,
instead of it augmenting your interactions,
it becomes a replacement for your sociality,
in the same way that, when you're at home alone watching TV,
sitcoms have the laugh track.
Why do they have the laugh tracks?
So that you get this parasocial belief that you're with other people.
And we all know that replacing your engagement with people is not a long-term successful track.
I want to zoom in on two particular terms that you talk a lot about.
One is sycophancy, and the other is model anthropomorphization,
the fact that it pretends to be a human or pretends to be more human.
So let's do sycophancy first.
Most people misunderstand sycophancy as just flattery.
Like, oh, that's a great question you have.
but it goes way deeper, right?
And the kind of mind games you can get into
with a model that is really sycophantic
go way beyond just flattering you.
Let me go one step further to tell you why I'm really worried
about sycophancy.
Like in 2025, we're going to see this massive shift
where these models go from being just sort of
conversation partners in some open web window
to deeply intermediating our relationships.
You know, I'm not going to try to call you or text you.
I'm going to end up saying to my AI assistant,
oh, I really need to talk to Pattie about this
and let's make sure,
can Pattie come to my event?
I really want her there.
And on the flip side,
the person receiving the message
is not going to be receiving the raw message.
They're going to ask their AI assistant,
well, who do I need to get back to?
And so as we put these models in between us,
as we're no longer talking to each other,
we're talking to each other through these AI companions.
And I think these subtle qualities
like telling us what we want to believe
are going to really mess with a ton of human relations.
And Pattie, I'm curious about your thoughts on that.
Yeah. Well, I think AI is going to mediate our entire human experience.
So it's not just how we interact with other people, but of course also our access to information,
how we make decisions, purchasing behavior and other behaviors and so on.
So it is worrisome, because what we see in the experiments that we have done,
and that others like Mor Naaman at Cornell are doing,
is that AI suggestions influence people in ways that they're not even aware of.
When asked, they're not aware that their beliefs and so on are being altered by interaction with AI.
So I'm very worried that in the wrong hands, or in anyone's hands, as Pat talked about earlier,
there's always some value system, some motives that ultimately are baked in
to these systems, that will ultimately influence people's beliefs, will influence people's
attitudes and behaviors when it comes to not just how they interact with others and so on,
but how they see the world, how they see themselves, what actions they take, what they believe,
and so on.
Right.
And I also see that as something that might have a negative effect on skills as well.
Like, we may have skill atrophy, especially skills for interpersonal relationships.
If you always have this translator, or this sort of, you know, system that mediates between human
relationships, then you can just be angry at this bot, and it just gets translated into a
nice version for the person that you want to talk to.
So you might lose the ability to control your own emotions or know how to sort of talk to
other people.
Like you always have this thing in between, right?
But, I mean, going back to the question of AI design, right?
I mean, if we realize that that's not the kind of future we want, then I hope that,
as a democratic society, people would have the ability to not adopt this and go for a different
kind of design. But again, there are many types of incentives, and there are sort of larger
market forces here that I think are going to be challenging for this type of system, even when
it's well designed and well backed by scientific study. So I really appreciate your center for doing
this kind of work. Well, likewise. Yeah, to ensure that we have the kind of future we want.
Well, our future with AI is being determined right now by entrepreneurs and technologists.
Basically, increasingly, I think governments will play a key role in determining how these systems are used
and in what ways they control us or influence our behavior and so on.
And I think we need to raise awareness about it and make sure that ultimately everybody is involved in deciding what future
with AI we want to live in and how we want this technology to influence the human experience
and society at large. In order to get to that future, we exist to try to build awareness
that these things are even issues that we need to be paying attention to now. And why we're so
excited to talk to you is people need to understand what are the different changes and the design
changes that we can make. And I'm curious, you know, if you've come to some of these. Like,
clearly, Pat, the problem you talked about, that I call the pornography of human relationships:
these models becoming always on, always available, never, you know, needing your attention, you're the
center of the world, and it becomes such an easy way to abdicate human relationships. How do we design
models that get better at that, that get us out of that trap?
Yeah, I think, you know, first of all,
I think the terminology that we use to describe this needs to be more specific, right? It's not just whether
you have AI or don't have AI,
but what's specific about AI
that we need to rethink or redesign, right?
I really love this book called AI Snake Oil
where they say that, well, we use the term
AI for everything, and you will not be able to...
in the same way that, if we said car for, you know,
bicycle or truck or bus,
then we would treat all of them the same way,
when in the real world, you know, they have different
degrees of dangerousness, right? So I think
that's something that we need to think about AI as well.
So we need to increase, in our
literacy, the specificity
of how we describe or talk about
different aspects of AI systems.
And also benchmarks, as we talked about earlier,
for measuring to what extent particular models
show a certain characteristic or not.
Yeah, so talk more about that for audience
who may not be familiar.
What kind of benchmarks do you want to see in this space?
Yeah, so I think right now, the benchmarks that we use,
most of them don't really sort of consider the human aspect as well.
They don't ask, well, if the model can do very well
at mimicking a famous artistic style,
how much does it affect the artists doing that,
or how much does it affect the human ability
to come up with creative, original ideas?
These are things that, you know,
it's kind of hard for a test to be able to measure, right?
But I think with the work that we did with OpenAI,
I think there's a starting point
to start thinking about this sort of human benchmark,
like, well, whether the model makes people lonelier or less lonely,
whether the model makes people more emotionally dependent
or less emotionally dependent
on AI. And we hope that we can scale this to other aspects as well. And that's actually
one of the missions of our AHA program, or the Advancing Humans with AI program: to think about
this sort of human benchmark, so that when a new model comes out, we can sort of simulate or
have an evaluation of what the impact on people would be, so that developers and engineers
could think more about this.
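As a sketch of what such a human benchmark could look like in practice (a hypothetical illustration, not the AHA program's actual evaluation), one could track a self-reported psychosocial measure alongside usage and compare the change across conditions:

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean
from typing import Dict, List

# Hypothetical human-benchmark sketch: alongside task accuracy, track a
# self-reported psychosocial measure (here, a loneliness score) before and
# after a period of chatbot use, and compare the average change per condition.

@dataclass
class Participant:
    condition: str          # e.g. "engaging_voice", "neutral_voice", "text"
    daily_minutes: float    # average time spent with the chatbot per day
    loneliness_pre: float   # survey score before the study (higher = lonelier)
    loneliness_post: float  # survey score after the study

def loneliness_shift_by_condition(participants: List[Participant]) -> Dict[str, float]:
    """Average post-minus-pre loneliness change for each condition."""
    shifts: Dict[str, List[float]] = defaultdict(list)
    for p in participants:
        shifts[p.condition].append(p.loneliness_post - p.loneliness_pre)
    return {condition: mean(values) for condition, values in shifts.items()}
```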
Okay, so let's go one level deeper. So we covered sycophancy,
and we covered this always-available, never-needy kind of super-stimulus of AI.
But what about anthropomorphic design?
Like, what are specifics in the way that we could be making AI that would be preventing
the sort of confusion of people thinking that AI is human?
Yeah, I think the AI should never refer to its own beliefs, its own intentions, its
own goals, its own experiences, because it doesn't have them.
It is not a person.
So I think that is already a good start.
And again, we could potentially develop benchmarks,
look at interactions and see to what extent models do this or not.
But it's not healthy because all of that behavior encourages people to then see the AI
as, say, more human, more intelligent and so on.
So that would be some sort of metric you could put on it: how often even a statement like,
oh, that's a really interesting idea, is a fake emotion that the model is not
actually experiencing.
Exactly.
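One rough way to turn that into a number, purely as a hypothetical sketch rather than an established benchmark, is to scan model responses for first-person mental-state language and report how often it appears:

```python
import re
from typing import List

# Hypothetical metric sketch: flag responses where the model claims feelings,
# beliefs, or desires in the first person, and report the rate.
MENTAL_STATE_PATTERN = re.compile(
    r"\bI\s+(feel|felt|believe|want|love|care|hope|enjoy)\b", re.IGNORECASE
)

def self_reference_rate(responses: List[str]) -> float:
    """Fraction of responses containing first-person mental-state claims."""
    if not responses:
        return 0.0
    flagged = sum(1 for r in responses if MENTAL_STATE_PATTERN.search(r))
    return flagged / len(responses)
```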
But I mean, I think this is a complicated topic, right?
Because on one hand, as a society, we also enjoy art forms like cinema or theater,
where people do this kind of role playing and, you know, portray fictional characters.
And we can sort of enjoy the benefit of that, where we can, you know, engage with a
video game character and interact with this sort of fantasy world, right?
But I think it's a slippery slope, because once we start to blur the boundary so that we can no longer
tell the difference, I think that's where it gets dangerous.
We did a study also where we looked at what happened when students learn from a virtual character
based on someone that they like or admire. At the time, you know, we did a study with a
virtual Elon Musk.
I think he was less crazy at the time.
And we saw a positive impact of the virtual character for people that liked Elon Musk,
but people that did not like him back then also did not do well, right?
It had the opposite effect.
So that personalization, or creating a virtual character based on someone
that you like or admire, can also be a positive thing.
So I think this technology also, you know, heavily depends on the context as well and how we use it.
That's why I think the quote from Kranzberg, that it is neither good nor bad nor neutral, is very relevant today.
It wouldn't be CHT if we didn't direct the conversation towards incentives.
One of the things I worry about is not just the design, but the incentives that end up driving the design.
Right.
and making sure that those incentives are transparent
and making sure that we have it right.
So just one, I want to put one more thing on the table,
which is right now we're just RLHFing these models,
which is reinforcement learning with human feedback.
We use how much humans liked the response, or even worse,
how many milliseconds they just continued to engage with the content,
as a signal to the model of what's good and bad content.
And I'm worried that this is going to cause those models
just to basically do the race to the bottom.
They're going to learn a bunch of bad manipulative behaviors.
And instead, you would wish you had a model that would learn who you wanted to become,
who you wanted to be, how you wanted to interact.
But I'm not sure the incentives are pointing there.
And so the question is, do you two think about different kinds of incentives
about the way you could push these models towards learning these better strategies?
Maybe we need to give these benchmarks to the models themselves
so they can keep track of their performance, like try to optimize for the right benchmarks.
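As a toy illustration of the incentive question being raised here (the signals and weights are assumptions, not anyone's production training setup), contrast an engagement-only reward with one that also discounts signs of unhealthy dependence:

```python
from dataclasses import dataclass

# Toy reward-shaping sketch: the same interaction signals, scored two ways.
@dataclass
class InteractionSignals:
    ms_engaged: float        # how long the user kept engaging with the reply
    thumbs_up: bool          # explicit human feedback
    dependence_proxy: float  # 0..1, e.g. from a classifier flagging dependence cues

def engagement_reward(s: InteractionSignals) -> float:
    """The race-to-the-bottom version: longer engagement is always better."""
    return s.ms_engaged / 1000.0 + (1.0 if s.thumbs_up else 0.0)

def wellbeing_adjusted_reward(s: InteractionSignals, penalty: float = 2.0) -> float:
    """Same signals, but discounted when the interaction looks like dependence."""
    return engagement_reward(s) - penalty * s.dependence_proxy
```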
But I think, you know, Pattie, you have said to me at one point that if you're not paying for the software, then you are the product of the software, right? I think that that's really true, right? I think the majority of people don't pay social media to be on it, right? They subscribe or they get on it for free. And in turn, the social media companies sort of exploit them, you know, as a product, selling their data or selling their attention to other companies. But I think for the AI companies, if, you know, people
are paying a subscription for this, then at least they should be able to, in theory, have control,
even though that might fade away soon as well.
So I think we need to figure out this sort of question of how we create a human-centered society
and human-centered incentives, and then the technology would align once we have that sort of larger goal
or larger structure to support it, I think.
But even with a subscription model, there may still be other incentives at play,
where these companies want to collect as much data about you as possible,
because that data, again, can be monetized in some ways
or can be valuable for training, or it also makes their services more sticky, right?
Sam Altman just announced that there's going to be more and more memory, basically,
in ChatGPT, of previous sessions.
And, of course, on the one hand, you think, oh, this is great.
It's going to remember previous conversations.
and so it can assist me in a much more personalized way.
I don't have to explain everything again and so on.
But on the other hand, that means that you're not going to switch to another system
because it knows you and it knows what you want, et cetera.
And so you keep coming back to that same system.
So there's many other things at play, even though there might be a subscription model.
Well, it reminds me that the term con man comes from the word confidence.
And the entire point is these people would instill enough confidence in people that they would give them their secrets, their bank accounts, their this and that, and then they would betray the confidence, right?
And so while, with an aligned model and an aligned business model, knowing more about you and your life and your goals is fantastic, a misaligned model or a misaligned business model having access to all that information is kind of the fast path to a con.
Mm-hmm. Yep.
Okay, so maybe just in the last few minutes, I guess I have just the question of,
I'm curious whether you've done any more research on the kinds of incentives or design steers
that produce a better future in this sense, right?
In the sense of what are the kinds of interventions, the kinds of regulation,
the kinds of business model changes, like what would you advocate for if the whole public could change it?
I was actually reflecting on this a little bit before coming on to the podcast today.
Like, what is our pathway to impact?
And I think for me, as a researcher, what we are really good at is trying to
understand this thing at a deeper level and coming up with experiments and new designs that can be
alternatives. I think there's something sort of interesting about the way that, historically,
before you can attack the demon, you need to be able to name it. I think it's similar
with AI: in order for us to tackle this wicked, challenging thing, you need to have a
precise name and terminology and understanding of what you're dealing with. And I think that's
sort of our role as researchers in academia, to shed light on
this and enhance our understanding of what's going on in the world, especially with AI.
Yeah. Well, here at CHT, we say that clarity creates agency. If you don't have the clarity,
you can't act. And that's why I want to thank you both for helping us create the clarity,
name the names, find the dynamics, do the research so that we know what's happening to us in
real time before it's too late.
Yeah, I think I might want to say a little bit about technologies of the future. What we hope is that this is not just, you know, our
work, right? I hope that more researchers are jumping on to do this kind of work, and that people
developing AI across, you know, academia and industry will start thinking bigger and broader,
and not see themselves as someone who can just do the engineering part of the AI or
the training part of the AI, but think about what the downstream impact of what they are doing
is going to be, how it is going to impact people, so that maybe, you know, we will steer away from
the conversation around, like, well, you know, it's inevitable,
this thing is inevitable, so we need to kind of be in the race to this.
If more people have that sort of awareness and more people listen to you guys,
like this podcast and follow the work of the Center for Humane Technology,
I think we will have technologies that are more thoughtful.
Yeah, and AI should become a more interdisciplinary endeavor, I think.
Not just, again, the engineers, the entrepreneurs,
and maybe the government as well,
but we should have historians and
philosophers and sociologists and psychologists, etc. They have a lot of wisdom about all of this.
And so I think it has to become a much broader conversation, not just educating the entrepreneurs and the engineers.
I agree with that 100%. And as this technology can meet us in such deeper ways than any technology in the past,
as it can touch our psychology, as it can intermediate our relationships, as it can do things out in the world,
we're going to need all of those other specialties to play a part.
I think it's really clear that this is a really, really hard question, right?
It touches on so many aspects of human life. I think right now
people focus on productivity, on how AI will help people work better,
but I think even just productivity alone, or just the work area alone, you know,
also touches on the question of, like, purpose, right?
What does it mean to actually do something, right?
It will change the way that we think
about purpose, our sort of meaning in life, and things like that.
So even just this domain alone is never just about work by itself.
So, I mean, that's why AI is a really hard question that requires us to think in many
dimensions and in many directions at the same time.
And we don't necessarily have all the answers to these big questions, right?
But I think that the more that we can learn from other disciplines,
the more that we can learn from wisdom across, you know, cultures,
across different groups of people, across expertise,
the better we could start to comprehend this
and have better clarity on the issue at hand.
I'm so thrilled that you're doing this work.
I'm so glad that you're in this world
and that we get to work together
and build on each other's insights.
And thanks for coming on Your Undivided Attention.
Great to be here. Thank you.
Thank you so much.
Your Undivided Attention is produced by the Center for Humane Technology, a nonprofit working to catalyze a humane future.
Our senior producer is Julia Scott.
Josh Lash is our researcher and producer, and our executive producer is Sasha Fegan.
Mixing on this episode by Jeff Sudaken, original music by Ryan and Hayes Holiday.
And a special thanks to the whole Center for Humane Technology team for making this podcast possible.
You can find transcripts of our interviews and bonus content on our substack, and much more at
humanetech.com.
You can also watch all episodes on our YouTube channel. Just search for Center for Humane Technology. And if you like this episode, we'd be
grateful if you could rate it on Apple Podcasts and Spotify. It really does make a difference in
helping others join this movement. And if you made it all the way here, let me give one more
thank you to you for giving us your undivided attention.