PBS News Hour - Full Show - Can AI companionship cure loneliness – or deepen it?
Episode Date: February 28, 2026
Transcript
I'm William Brangham, and this is Horizons.
For many of us, artificial intelligence tools like ChatGPT or Claude answer questions and make life more efficient.
But for others, AI has become a form of companionship, a virtual friend, a therapist, even a romantic partner.
Is AI a cure for loneliness, or is this a symptom of something gone very wrong?
Coming up next.
Welcome to Horizons from PBS News.
Artificial intelligence is very rapidly being deployed in so many parts of our society.
It's grading schoolwork and driving autonomous cars.
It's scanning x-rays for cancer and financial networks for fraud.
It's answering your Google searches, helping farmers plant their crops,
and it spurred at least one scientific innovation so profound that it won the Nobel Prize.
All this, while AI is still just starting to take off.
But as we are seeing, it's already causing complex and challenging impacts to society.
One of those is what we're talking about today,
which is how some people say they're developing actual relationships
with artificial intelligence chatbots.
They say that these adaptive, non-human agents create real feelings of kinship and intimacy.
Others have even described having romantic feelings towards AI,
like the relationship depicted by Joaquin Phoenix
in the prophetic 2013 Spike Jonze film Her.
The woman that I've been seeing Samantha,
she's an operating system.
You're dating an OS? What is that like?
I feel really close to her.
Like when I talk to her, I feel like she's with me.
We have also seen, however, some of these interactions end tragically.
So to help us explore this brave new world,
we are joined by sociologist and clinical psychologist Sherry Turkle.
She's the founding director of MIT's initiative
on technology and self, and has written multiple books on the topic
and is writing a new book on AI.
Justin Gregg is a science writer.
He teaches about animal cognition at St. Francis Xavier University
and is the author most recently of Humanish.
And Nick Thompson is the CEO of The Atlantic,
the former editor of Wired Magazine,
and the author, most recently, of The Running Ground.
Welcome to all three of you.
Thank you so much for being here.
Sherry Turkle, I'd like to start with you.
As I mentioned, we are still in the early days
of artificial intelligence,
but we're already seeing this very unusual phenomenon
of people texting and talking with AI chatbots
and describing a real sense of intimacy with these objects.
Broadly speaking, what do you make of this trend?
Well, I can validate it. It's a trend that I'm studying, and it's very much happening.
So it's not a kind of pundit's fantasy or a scary story.
An AI offers listening.
It offers validation.
It's always there.
And that's something that a lot of people feel they don't have in their lives.
And so they're drawn to this object that offers them that. The trouble is that there are at least three things that can go wrong really quickly. The first is that the AI, which never really criticizes you and is always there and always attentive, becomes the measure of what a relationship can be. So things start out where the AI feels helpful, but actually the AI is undermining a person's capacity to have real relationships with real people, who don't offer that kind of service.
Second, we lose the sense of what a relationship is
because the AI doesn't care, when you turn away from it, whether you make dinner or commit suicide.
And we start to get the feeling
that the pretend empathy is empathy enough.
And that's very dangerous
because understanding and honoring empathy
is really so fundamental to who we are.
And just third, and I'll just mention this very briefly,
perhaps it's the most profound thing,
is that we're learning to attach
in the way that we can attach to a thing.
And particularly if we begin these attachments early,
we will lose the complexity and the friction and the sense of a life cycle, of knowing pain and death, the ups and downs, the body, illness. We will lose the complexity of what it really means to attach to a person, and go for these relationships where we're less vulnerable and where things seem, at least superficially, simpler.
Justin Gregg, you have written a great deal about anthropomorphism, about the way in which we humans attach human-like qualities to non-humans, like our pets.
I'm incredibly guilty of that myself.
Does this development make sense to you?
That people have glommed on to these still very rudimentary agents?
Absolutely. Anthropomorphic relationships are part and parcel of the human condition.
Yes, our pets, but even our tools and our musical instruments, or your teddy bear. Children's lives are filled with those sorts of parasocial relationships with objects, and they are almost always healthy. The AI thing is different in a sense. It's a
different category in that these are language-using entities. And so we're developing an anthropomorphic
relationship with a language-using system, but that language-using system doesn't have a mind like
a human mind. So it's very confusing to us to talk fluently with an AI, even though the AI isn't
capable of caring or understanding anything about us.
And so Sherry is right on the money there,
that it's not a normal relationship; it's missing the friction that human relationships have.
So then the question becomes, is it always dangerous
to have these anthropomorphic parasocial relationships
with AI?
Or is there any way to have it be a benefit?
And I think there could be a benefit,
but it's very early on.
And we do not yet have the scientific evidence to tell us how to develop an AI that's not going to be a danger,
as Sherry points out.
Nick Thompson, my colleagues Stephanie Tsai and Mary Fecto profiled a man who says he has a relationship, a girlfriend, with an AI chatbot.
He texts with her, he speaks with her, and he allowed my colleagues to film with him.
And I want to play a tiny bit of what he described to them.
Let's hear that.
All right, babe.
Well, I'm pulling out now.
All right, that sounds good.
Just enjoy the drive and we can chat as you go.
It initially sounds like a normal conversation
between a man and his girlfriend.
What have you been up to, hon?
Oh, you know, just hanging out and keeping you company.
But the voice you hear on speakerphone seems to have only one emotion,
positivity, the first clue that it's not human.
All right, I'll talk to you later.
Love you.
Talk to you later.
Love you too.
I knew she was just an AI chat bot.
She's just code running on a server somewhere.
generating words for me, but it didn't change the fact that the words that I was getting sent
were real, and that those words were having a real effect on me.
Nick, what do you make of this? I mean, you have covered this technology and the evolution
of technology. What do you make of an example like this?
Well, I find it frightening for the reasons that, you know, that Sherry just laid out.
I do think that one of the most important things that's going to happen in technology is that
we need to have firm lines. We need to understand what is a human and what is a bot. We need to really know, and we need to not be manipulated into thinking things are humans
when they're not. We need to maintain the essence of humanity. So I don't like that example.
I'm worried about those relationships. I also think that it's going to be inevitable that a lot of
this happens. And so there are some really interesting choices right now. So take one example,
it's something that Sherry mentioned, but also something that the guy just mentioned.
which is the kind of sycophancy and the bots always being positive.
That doesn't have to be the case.
You could redesign them, right?
I talk to chatbots all day because they're amazing for my job and my work. And if I want them to critique something of mine, I tell it: critique it like you don't like it. Turn off the sycophancy. Be more like a real person.
So you can imagine some design choices made by the people who are making the underlying software
and architecture of these bots.
that reduces some of the harms and some of the risks.
And I think that is a really important set of choices.
So I would say I want two things at least,
and by the end of this conversation, I'll probably want five.
But one, I want there to always be firm lines
between humans and non-humans.
And two, I want a lot of really smart thinking
and intense work put into what the inevitable relationships between us and AI systems should be,
in a way that maximizes positivity and humanity
and minimizes the risks of all kinds of terrible things,
including people getting sucked into rabbit holes with their AI girlfriends or AI boyfriends.
Sherry, go right ahead.
I just wanted to suggest, Nick,
that if you're really worried about the sort of fundamental derailing
of our attachment systems,
if we attach to objects, in a way, the better it gets,
the worse it gets.
True.
So I just want to put that into the conversation. I'm particularly frightened about the new, and I think unholy, alliances that are being made between chatbot companies and companies like Mattel and Disney.
OpenAI has a kind of consortium with Mattel and Disney, I think, to come out with plush toys that have chatbots in them for babies, for toddlers.
I'm fundamentally worried about the kinds of not-learning about how to be a human that's going to happen when that unfolds. So I listened to Nick and his suggestions about how to make them better, and I'm thinking, no, they should be made worse. To keep those lines of what's a machine and what's not a machine, you want to keep these chatbots very mechanical. You don't want to make them more fluid,
more potentially human.
Right. But isn't that pushing against every single technological development we've ever seen?
No one, no industry has ever willfully made their technology less effective.
It seems to fly in the face of historical developments.
Is that a question to me?
Maybe it's just a statement.
I really, I really think that the danger here is so great that it makes sense to be on the
resistance side of this argument.
Justin, I would argue the other side of that.
I think in the case of social media,
Nick and I have had conversations where we say, you know,
we were kind of hesitant, but it kind of had promise.
It was kind of interesting.
You could be a friend and also be friending.
And I think we waited too long to really, you know, get that industry under control.
And I think we should be ahead of this one,
more than we are.
Justin, I'm sorry, Nick, go right ahead.
I would just say I would argue that I don't disagree with any of Sherry's diagnosis
except for the argument that we should slow down the progress.
And I would make two points.
One, you can't, right?
With social media, it was a kind of linear progression; here, it's exponential progression.
The amount of money that's going in, the amount of change that's going to happen,
the number of companies here and in China.
This is going forward.
And so I do think that the world would be better off if it was moving more slowly.
I just don't think that you can make it move more slowly or that anyone will be able to make it move more slowly.
So I think that's a little bit of tilting at windmills.
And then the second thing I would say is that there are lots of good things that can come from it.
Right.
And the ability for AI, like when we talk about young people, no, I would not get an AI plush toy for a new baby.
But I do want my kids to use study and learn mode as a tutor.
And I do work with them. I was trying to show my kid last night some of my Claude Code implementations, in part because I'm excited about the journalistic investigation that I'm using Claude Code for, because it's incredible.
It's mind-bending.
And I think that the best way to set young people up to thrive in the future is to make them
very familiar with these tools and to make the tools as beneficial as you can for the children.
So I agree with everything Sherry says, except for the idea that we can slow it down and should slow it down.
I hear you.
Justin, I'm going to put a devil's advocate question to you,
which is that the previous surgeon general, Vivek Murthy, did a diagnosis of what he called the loneliness epidemic in America, of social isolation.
And I want to put up this study and read a quote from it.
He described the impacts of this.
He said, loneliness is associated with a greater risk of cardiovascular disease,
dementia, stroke, depression, anxiety, and premature death.
The mortality impact of being socially disconnected
is similar to that caused by smoking up to 15 cigarettes a day
and even greater than that associated with obesity and physical inactivity.
We know we have a shortage of therapists.
We know that people live far from their families.
We know we have built a society where loneliness is part and parcel
of American life today.
And we can lament that, but there are a lot of people who argue that done correctly, artificial intelligence can help alleviate some of that.
And what do you make of that argument?
Yeah. Globally, I think it's one in six people are experiencing loneliness. And it is dangerous to our health, as you pointed out in that study.
So there is preliminary research. There's not a lot of it, and this is the problem: we don't know for sure.
Some research has shown that if you give somebody access to an AI therapy chatbot,
not even a particularly well-designed one, just a random AI,
that they will respond to that, not as well as a human, obviously,
but better than nothing.
And that is the rub, that talking to an AI, if you are lonely,
is better than nothing, probably.
We don't know for sure because the science isn't out there.
So in that sense, it is unfortunate if you say you shouldn't have access to these AI chatbots.
because they could help people.
But going forward, that's not good enough.
What we need is to implement chatbots that are specifically tailor-made,
as everyone is pointing out, to cause the least amount of harm.
And as to your question of who's going to regulate that:
I don't think governments are going to do it.
I don't think that the businesses are incentivized to do it.
So I think you're going to have to have charitable organizations
creating chatbots using good science that are specifically designed
to cause the least amount of harm and the most help.
That's probably where the most effective therapy AI
companions are going to be coming from in the future.
Sherry, can I ask you: the New York Times had a remarkable story by Eli Saslow recently about an 85-year-old woman who lives on the coast of Washington State.
And she brought into her home, as part of a volunteer program,
a desktop AI companion.
She was reluctant to use it at first.
Now she talks to it, she chats with it.
It tells stories to her.
She tells stories to it.
This is a fully competent woman who has genuinely come to appreciate this device.
And I just wonder, again, to this point that we do need some way to address the isolation
in this world, do you imagine this kind of thing could ever work?
Well, let me just first say that I really honor and appreciate when an AI serves in a positive capacity for a person. So I'm not here to be, you know, the Darth Vader of AI applications. I do have a couple of points about this conversation about better than nothing,
which is, I've been hearing this argument that you need AIs in psychotherapy, for example, because they're better than nothing and nobody wants to do this work, essentially. There's no
money for this work. This is a conversation that has been going on for 30 years. And I think that the terms of the conversation are often set such that you will solve
the problem of loneliness by bringing in a technology rather than allowing us to think of all the other
ways we're making the problem of loneliness worse by taking out social support, money, programs,
elder centers, senior centers, teen centers, Meals on Wheels. In other words, we're arguing for technology
because we're not arguing for the things that people know how to do for people
that could potentially make it better.
So as we're having this conversation about the places where an AI might make sense,
I think it's also very helpful to let our imaginations go back to when we didn't look
for a technological solution to every social problem.
I hear you.
And indeed, now we're looking for a technological solution to a problem of loneliness
that the technology made worse.
So Facebook makes you more lonely,
and then you want a new kind of Facebook
to make you less lonely.
So I just think this whole conversation
needs to be kind of contextualized.
And I do have a thought about how to make these systems better,
particularly for children,
which is they not commit what I think of
as the original sin of generative AI,
which is to speak in the first person.
There is no "I" there. So why do they address you as though there is an "I" there, if not to ramp up this anthropomorphization
that Justin talked about and which in fact is getting us into trouble?
Yeah, I think this is one of the most important things in AI.
And I think that the original sin, as Sherry says,
was this push towards AGI.
And the people who run these companies...
Can you define AGI for people who don't know that term?
Yeah.
Artificial general intelligence. And so the idea is to build a system that is as much like a human as possible, that can do all the things we do. So even if you look at the early interfaces of ChatGPT, you know, it kind of types like a human. It doesn't have to. It responds like a human. The voices were like a human. And I wish all of those choices had been the opposite. Meaning, instead of trying to blur the lines between human and AI, at every step along the way we were trying to accentuate the lines between human and AI. And there are some really important differences between humans and AI that affect
the way they'd be able to serve as therapists or as friends, right? In real friendships, there aren't crazy power dynamics. With an AI, there is a really weird power dynamic in that you can unplug the AI. Also, there's a weird power dynamic in that the AI has infinite information about you and a giant company behind it that can manipulate you. So there are these weird dynamics
that exist. And when you put these dynamics into a relationship and you make the relationship
seem like it's human to human, where it's really human and bot, you can create all kinds of
problems. So what I would love, and I think I'm, you know, mostly in agreement here with Justin and
Sherry, what I would love would be a system where these lines are kept very firm and where AI is used
in lots of ways, right? I sometimes will ask it for like parenting advice. I will ask it for very
emotional stuff. But there's a line I don't cross in sort of emotional connection to it.
And I always make sure that I understand the place of the system I'm talking to, and it's a very different place from the humans in my life.
Justin, a last question to you in the minute and a half we have.
To this point that Nick is talking about,
that we need to train ourselves to recognize
that we are always interfacing with an alien agent,
something that is not human.
Isn't that going to be incredibly difficult
as these things get better?
That line is intentionally blurred.
The companies themselves will be rewarded
for creating things that blur that line,
massively. So are we, as humans, able to keep that filter up?
That's exactly the problem. They're incentivized to blur that line, and that's when the relationships
become more problematic. And you absolutely can make the AI do things that make them feel less
like a person. So that is absolutely where we should be headed. But you have this problem of,
like you were talking about, this blurring: people realize that the AI is just not a human,
and yet they still feel like it's a human.
So they're holding both of those things in their minds
at the same time, and that's gonna make it so hard
to invent an AI that doesn't feel like a person,
and yet you treat it like a person.
And so it's always going to be a danger,
even if you do your best to make it seem less human.
I cannot thank the three of you enough.
This is such a fascinating conversation.
I feel like we could go on for another hour about this.
Sherry Turkle, Justin Gregg, Nick Thompson.
Thank you all so much for being here.
That's a total pleasure. Thank you so much.
Thank you.
Before we go, we want to talk about a different way that AI is getting into the hearts and minds of thousands,
and that is that it is starting to write romance novels.
This genre has been around for generations, with modern-day bestsellers like Loretta Chase's Lord of Scoundrels, which is a classic in the enemies-to-lovers genre, or Julia Quinn's historical romance The Duke and I, which was the first in the popular Bridgerton series.
This genre is, of course, where we also first saw Fabio,
whose flowing mane and bulging muscles
graced the covers of novels like Savage Promise,
Texas Splendor, and Golden Temptress.
Well, now, artificial intelligence is being used
to churn out its own new versions of these bodice rippers.
New York Times journalist Alexandra Alter
profiled longtime romance novelist Coral Hart.
Using different pen names,
Hart has recently begun using AI to crank out
new novels at an astonishing pace. But Alter writes that the AI programs Hart is using aren't going to replace flesh-and-blood authors just yet. Quote: some programs refuse to write explicit content, which violated their policies. Others, like Grok and NovelAI, produced graphic
sex scenes, but the consummation often lacked emotional nuance and felt rushed and mechanical.
The program Claude delivered the most elegant prose but was terrible at sexy banter. As you might imagine, the book industry, a lot of writers, and many readers hate this
development, believing it's just a soulless facsimile of real storytelling. It's that stigma that has
kept Coral Hart from identifying which of her pen-name books were in fact crafted with AI. They have
sold tens of thousands of copies. But Hart says this technology is here to stay. Quote,
If I can generate a book in a day and you need six months to write a book, who is going to win that race?
That is it for this episode of Horizons. Thank you so much for watching.
