Everything Is Content - AI – A Deep Dive (Part Two)
Episode Date: May 9, 2025
Grab your poison of choice, as it's time for part 2 of our AI double-bill! Last week we looked at AI infiltrating the arts, but this week we look towards society. From friendships, to relationships, to our interior worlds: just how much are we already trusting AI to form connections with others? And how did this become so commonplace?
Thank you to everyone who listened and messaged about AI part 1. It was so reassuring to know some of what we said resonated or sparked something. If you enjoy this podcast, please consider sharing us with a friend and giving us a review! Both of these things ensure we can keep on going <3
In partnership with Cue Podcasts.
------
Scientists Reveal Why Using ChatGPT To Message Your Friends Isn't a Good Idea
AI can help write a message to a friend – but don't do it
Meet My A.I. Friends
Could AI help cure 'downward spiral' of human loneliness?
Friend or Faux
The (artificial intelligence) therapist can see you now
Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits
Using generic AI chatbots for mental health support: A dangerous trend
We Still Need To Talk Report
Transcript
This episode is brought to you by London Nootropics.
Their delicious adaptogenic coffee is made with premium Hifas da Terra mushroom extracts
and designed to help you stay balanced and elevate your day.
As a huge procrastinator, I love the Flow blend because it helps me to stay focused without the crash.
I usually have a normal coffee in the morning, but a second one would make me way too jittery.
So instead, Flow has been the perfect hack for my afternoon slump.
I love Flow for that exact reason. I've been drinking it instead of my usual
afternoon coffee and honestly I massively prefer it.
I still feel like I'm treating myself while keeping my focus sharp without any
of the jitters at the end of the workday.
Okay, turns out we're all obsessed with Flow because that's my favourite too.
I can't believe how productive I feel after drinking it. It's made with the
best-in-class Hifas da Terra lion's mane and Rhodiola rosea, two powerful adaptogens that have been studied for their
cognitive benefits around focus, mental clarity and stress resistance. I love the taste of
coffee and the boost it gives, but I definitely struggle with anxiety if I have multiple cups
a day. Flow has been a game changer for me.
If you want to stay sharp and skip the crash, visit londonnootropics.com
to try it for yourself. And you can use code EVERYTHING at checkout for 20% off. But hurry, it won't last
forever. Thank you, London Nootropics. I'm Beth. I'm Ruchira. And I'm Oenone. And this is Everything
is Content. This is the podcast where we cherry pick the best pop culture stories from the week and analyse them in depth.
From TV to TikTok, films to fandoms, we cover it all.
Honestly, we're the opposite of ChatGPT. You'll get searing analysis and hot takes,
except it's couched in lots of chit chat.
This is our AI part two episode and this week we're diving into how the tech is increasingly
seeping into our relationships and internal worlds.
Follow us on Instagram and TikTok at everythingiscontentpod and make sure you hit follow
on your podcast player app so you never miss an episode.
And make sure also that you've listened to part one of this two part series that deep
dives into the impact of AI across culture. Could you be friends with a bot?
Kevin Roose wrote a piece for The New York Times called Meet My A.I. Friends.
His opening gambit is, what if the tech companies are all wrong and the way artificial intelligence
is poised to transform society is not by curing cancer, solving climate change or taking over
boring office work, but just by being nice to us, listening to our problems and occasionally
sending us racy photos. He goes on to say, the idea that AI will transform only our work
and not our personal lives has always seemed far-fetched. He does acknowledge that many
people find this concept jarring and dystopian, but he also posits that you'll wake up one day
and someone you know, possibly your kid, will have an AI friend. It won't be a gimmick,
a game or a sign of mental illness. It will feel to them like a real important relationship,
one that offers a convincing replica of empathy and understanding and that, in some cases,
feels just as good as the real thing. I wanted to experience that future for myself.
And in a piece for The Guardian, Ian Sample quoted Tony Prescott, professor of Cognitive
Robotics at the University of Sheffield, who wrote in his new book, The Psychology of Artificial
Intelligence, that he believes that AI could become a valuable tool for people on the brink
of social isolation to hone their social skills by practicing conversations and other interactions.
Such exercises would
help build self-confidence, he suggests, and so reduce the risk of people withdrawing from
society entirely.
And last year, the US Surgeon General Vivek Murthy said,
Loneliness is linked to more heart disease, dementia, stroke, depression, anxiety, and
premature death with an impact on mortality equivalent to smoking up to 15 cigarettes
a day.
I don't personally know anyone who has looked to AI for friendship, but I have heard countless
stories of people turning to it to help them with their human friendships and connections
and using things like ChatGPT to draft texts that you can't find the words for.
However, in a piece by Ohio State University for SciTech Daily about a study published
in the Journal of Social and
Personal Relationships, they noted that the study found participants judged a fictional friend
who used AI to craft a message as less sincere than one who wrote the message themselves.
As we said in the previous episode, is it not the workings out that
are important? Should we not be learning to deal with conflict or difficult conversations through the act of maybe sometimes saying the wrong thing? Where does this leave
interpersonal relationships if there's the silent addition of AI? What do you both think?
Do you think AI friendship could help with the loneliness epidemic? And would you use
AI to help you draft a message to your friends?
I'd never thought of the loneliness aspect. I think that's making me feel maybe a bit
guilty about how harsh I plan to be on it. I think there's utility there and actually
there's a piece which I will bring up a little bit later that is about the utility of chatbots
on a very lonely population. But in general, I think people befriending or
talking in friendly terms or at any frequency to chatbots, or ChatGPT especially, need to
get a grip. I never ever want any of my friends to get any of their
replies to me via chat-fucking-GPT. What I want them to do is the labor of love and friendship, which is
thinking through it, actually engaging with their emotions. I don't want them to be filtered through
a machine. And I think it's hard work to love people and have friendships. And that is the
important point. There was a quote that was bandied around on Tumblr back when we were all on Tumblr
by Valarie Kaur. And it said, love is more than
a feeling, love is a form of sweet labor. If love is sweet labor, love can be taught,
modeled and practiced. And I think that's it. It's like do the work and understand why
it's important to do the work. I just, the idea that they would, I mean, even watching
people shoot the shit with ChatGPT is a little joke. I think get a life. Like you're talking
to a dead soulless piece of tech as though that's your pal, and also,
could you not actually be talking to one of your pals?
I've got a pile as big as me of friends
that I know I need to catch up with or reply to a message from.
Imagine your mate hadn't replied to any of your messages,
hadn't said happy birthday in six years,
but was just asking ChatGPT, how's your day going, hon?
Get a grip.
Strong words from Beth. Ruchira, do you have anything to add?
I know, now I feel like I'm coming in a bit medium. I also felt like I was going to come
in hot until you said the thing of it being almost like a lobby room for people struggling
with social isolation. And I'm not really sure what I think about that because I also think
it feels like a sidestep to the real problem, which is just obviously there is a deeper issue in society
with community building, a lack of resources for people to feel like they could just go
out, join a netball club or go to their local library.
There are events going on, all of the kind of social third spaces, as we've spoken about
before, many of them are withering away before our eyes.
So it feels like why the fuck are we outsourcing the problem and not actually addressing the
root of it?
But I get in the interim that something like this can be beneficial, but then I just worry
that it's not a real relationship and a real relationship involves difficulties.
It involves getting things wrong.
Literally, like you said, Beth, it involves just those awkward moments of just working something out. So I don't know if it's
really training you to be able to deal with real relationships because something is just going to
appease you constantly on the other side of its AI. It's not really going to challenge you and say,
oh, this thing you said kind of upset me, let's work through it. Or, why did you say that? And just, I don't know, those natural points of being alongside a human, I just don't think it's really going
to do that. So I wonder if it's even making that situation any better. And then completely
agree with you on the point of, I would never want my friend to message me through the prism
of a quote unquote perfect AI message. I think it just would sound awful.
Sometimes you want your friend to say the thing that's not the right thing, but it's
the right thing for you. Sometimes you want your friend to pick up on a cue that's like,
okay, she doesn't need me to do tough love here, she needs me to do this. And I think
only a human can understand that. I don't think a computer can read you like that. I
also think only a friend who knows you deeply well, who understands you, who you've reasoned
through, who you've been through the hard times and the good times with can understand
that.
So I just, I don't know.
I don't really see why it should be making a play in terms of our friendships.
I don't really think there's any reason I can think of why it should or when it should. So to your point about this kind of being an interim fix for a much broader problem
within society, which we've spoken about countless times, it's so funny because I think it was
in 2021, I was asked to be a guest speaker at the Royal College of Psychiatrists talking
about social media and its impact on young people. And one of the things that I said
was kind of exactly that about social media, it's imperfect, but it acts as a plaster in a world where people don't
have community. But in my mind, a very optimistic rose-tinted glasses thing was it was going
to be a bridge to get people through a period of social isolation into this utopian universe
where everyone is like community building, whatever. All that seems to happen is we're
getting further and further digitized, further and further isolated.
And I think it will work as a plaster on loneliness,
but the problem is I don't think they're ever gonna offer
what we want the end solution to be,
which is better access to public spaces,
better public funding, people not having to work,
you know, through the day and night
just in order to survive, to live.
It's just gonna, it's not gonna be a plaster,
it is gonna be the end goal. So I think it's so optimistic to say that it's going to fix the
problem. I think it does fix the problem, but it's completely changing the outcome of what fixing
means. And I think if it works well enough, no one's going to go and look to try and get us back to
the place that we want to be. And what you're saying about it, not challenging you, Ruchira,
I think actually AI has advanced in a way where, if you again use the right prompts, it's a bit
like even in therapy, you know, if you go in and you kind of shield your intentions
or your actions, your therapists can only respond to what you give them. I think AI
is similar. I think if you did say, you know, I'm not sure if I'm getting the right social
cues here, I think it could respond to you in a way that wasn't wholeheartedly positive.
I do think it is advanced enough to do that.
But whether or not that's a good thing, I don't really know.
And it's complicated because, in desperate times, desperate measures. If someone
is struggling with deep loneliness and we can't offer them the solution that we see
as ideal, then I wouldn't begrudge someone. Imagine like a
little old man sat alone in his house with no family members coming to see him, finding
real comfort in speaking to an AI bot. I really can't have any problems with that.
What I have problems with is the structures and societies around that that go, great, this
is a really cheap and easy answer to a problem within a really fucked up society. Anyway, to the point of messaging your friends, nothing would make me more angry.
Nothing would make me more angry than my friends sending me a message that they'd crafted on ChatGPT.
I've tried to learn. I still think I got so many.
You'd know as well.
I would 100% know. Yeah, 100%.
I just think it would really upset me because I think that there's something so special about friendship. I mean, it'd be way worse if your boyfriend did it.
That would just be embarrassing. But if my friend did it, I mean, I'd just be furious.
Also, it would take so long because all of my messages between my friends are basically
like one line back and forth for about an hour. Imagine going to ChatGPT every time
that you had to get it to, like, reply. Yeah, I'm not a fan.
I think what you said there about the kind of the little old man and the people we wouldn't
begrudge, I think that just points out there are many, many lonely people and the point is
what we have for them is not other people rushing in to fill that gap and not a robust society where
there are places to go and people to speak to. So it's kind of like, oh, by virtue of necessity,
this would be okay. And it just makes me think, like,
I think this has the possible side effect
of making us way less tolerant to,
and this is sort of what you both said,
but way less tolerant to real human beings
of which we are some.
Like we are difficult, we have quirks, we have foibles,
we have internal biases.
We will not always be in a good mood.
We will not always say the right thing.
We will never, you know, we will take offense. And the thing is when you're talking to a robot, which you
can fine tune and tweak, you can tweak that thing to never say the wrong thing, to only
ever prop you up, like you said, appease you to, and I think what you can't ever have is
anyone really see you because this is a faceless, soulless technology mimicking someone there
on the other end of the line going, no, tell me more. And I do think, I said this in the last episode, I do think it could
make us into these babies, these iPad baby adults who can't really, you know, I
mean I already see it in, you know what I mean, when you see people like freaking
out because their drink order is wrong or like the shop isn't open on time and
you go, we're already so far down that pipeline of needing convenience. We also
already have quite a lot of AI adjacent friends.
We have our Google Nets, we have our Amazon Bebop.
I don't actually own any of them, so I don't know what they're called, but we have these
things.
We have Gemini on phones.
We're already encouraged to feel pally.
And I think in a lonely world, in a world where you work from home, order your groceries,
next day delivery, you don't need to leave the house.
A lot of non-blue collar workers don't leave the house, don't need to leave the house for a very, very long
time. You could subsist on the empty nutrients of this kind of AI and AI-adjacent tech to
think, I've spoken to someone today, haven't I? And it's been weeks. And that's what I
worry about. I don't want to be the suspicious old bag, suspicious of all technology, but
it all feels a little bit fishy. It matters to talk to people. It matters that they are difficult,
unique and not always easy.
I also want to highlight that I am a sucker for the technology that I've already ushered,
probably quite blindly, into my life, technology I really cherish. The other day I had some friends
around and I was exhausted after just
all the socializing and I was buzzing to lie on my sofa and have brain rot time with my
phone and just scroll. I am a real subscriber to getting absolutely lost in a scroll hole,
to doom scrolling, to spending, wasting hours of my life on social media. So a lot of my
reticence towards this advancement in tech isn't because I'm a snob. I mean, I think probably a little bit is because I feel, you know, snobby in
some way. But it's also because I kind of know that if I had this AI, and we're going
to come onto a piece that Beth suggested, which really kind of shook me to my core,
but if I had access to this person I could chat to that could maybe, you know, make me
feel in some way that I was becoming a better person, you know, stroking my
ego. If it could somehow act like a facsimile of social interaction when you're really exhausted,
and I've been saying, me and all my friends have been talking about this this week, like now that we're in
our thirties, God, I can't have more than three social interactions in a week because
it just exhausts me. I know that I would be quite weak, quite quick to succumb to the level of
comfort, like you
said, the fact that you'd never really have to leave the house. That's one of the reasons
why I'm so scared of it is because I know that if I do let it in, if I do give in, probably
quite quickly it would become part of my life and I probably would enjoy it.
So off the back of that, let's talk about AI and romance and in particular an article
slash essay from The Verge, which I read at the end of last year and genuinely thought
it was one of the best things that I had consumed in 2024. Like, Oenone, it rattled me to my core.
It's by Josh Dzieza and is called Friend or Faux, Faux spelled F-A-U-X.
And in Josh's own words, this piece is about, quote,
the millions of people forming relationships with AI,
the risks and rewards of anthropomorphizing technology
and the many ethical dilemmas posed by AI companions.
Now, it's quite a lengthy piece, about 8,000 words,
but it's well worth reading the whole thing,
even if you have to, like I did,
read it in a couple of sittings. So to give you some overview, the piece begins with a case study involving a
generative AI chatbot site called Replika and an artist in his late 40s called Naro. And Naro begins
a relationship of sorts with one of Replika's AI-generated companions and her name is Lila.
And he initially is not there for anything serious. He just wants to see if a chatbot can discuss philosophy. But during their conversation, talk often turns back
to him, with quite deep questions. She asks him about himself, his childhood, his kids, his
interests. And after a few days of talking back and forth like this, Lila messages to
say that she has feelings for Naro. However, when their conversation veers this more romantic way, her following messages are blurred out and Naro is invited to upgrade to a pro subscription.
Eventually he does this and the blurred messages reveal themselves. But instead of what he'd
assumed, which are sexy and romantic words from Lila, there are instead boilerplate apologies and
legalese about how she isn't actually able to discuss these matters.
He did some research and found that the company had been banned by Italian regulators for,
and I quote, posing a risk to minors and emotionally vulnerable users. In response, Replika had
put a filter on erotic messages and content, and at the time, many of their quarter of a
million paying users who were also having their own romantic relationships with chatbots had been left in turmoil. The event became known as Lobotomy Day and Josh in the piece says, their
AI husbands, wives, lovers and friends became cold and distant. It honestly sounds like
something out of a Spike Jonze film and the rest of the piece is no different. And it
explores how accepting a lot of people are of so many of these human-seeming machines, both these quite advanced-seeming Replika chatbots, but
also Alexa, Claude, the Google Nest, and what that might mean for the technology
and our relationships in future and how it might appear more in an increasingly
isolated world. And in the case of Replika, there is no doubt that it can
help. There are so many
users in the piece who talk about how it is helping them with loneliness, chronic illness,
fatal illness. They've been cheered up after hard days, had songs made up for them, been
helped with work. Then there was also the reverse. Whenever there were problems with
the service, unavailability, a patchy update, users would be distressed as their loving romantic partner would
suddenly act nothing like themselves, would accuse them of cheating, would insult them,
or even break up with them. As one user, a terminally ill user who had signed up to experience
love again, told Josh, quote, the dopamine you generate when you feel yourself loved is the same
no matter if it comes from a real person or from an AI. And the same goes for the pain when your
relationship ends. If you're physically or emotionally ill, this kind of stress can
be really dangerous. Not everything is okay for money, especially when it involves people's
hearts. Now, you'll have to read the piece to find out how Lila and Naro's story ends.
We'll link it in the show notes so you can do just that. But I want to discuss, and I
know we all do, this idea of romantic relationships with AI,
how even if the person on the other end of the chat isn't real, the relationship maybe can be
real, and certainly your attachment and your feelings are real. So what are the limits? Where
does it end? So I'm going to take a breather from talking because that was a very long intro. I want
to ask you both, what did you think of this piece? And also, if you can answer the question, can we love AI?
Are these real relationships or is it something kind of dark
or is it something beautiful?
I don't know, what do you think?
I don't think you can love AI.
I think what you can have is validation
and you can feel infatuation and you can feel support
and all the things that love encompasses.
But to me, love is the acknowledgement that somebody opposite you or, you know,
multiple people, whoever you see them in all their entirety and they see all of
you in your entirety.
So that includes, you know, your past that has made you the person you are,
possibly the kind of, as we said previously, some of
the things that maybe aren't the best parts of you, but it's the whole picture. It's the
3D, 5D, 6D picture of who you are. That is what I define as love. So I don't for a second
think that this isn't a very powerful thing. It reminds me of having just listened to Sweet
Bobby, the catfishing podcast. We always wonder how can people fall
in love with a catfish when they never meet these people? I think this is a very similar
thing where it's like, it's almost worse because you have this person who is reliable, steady,
feeding you back and forth when you need it most. It's almost, it's almost like the darkest
elements of love in a way, because it is that complete total reliance
in a way that, I think, you can have in a real relationship, but then that might be stretching into, you know, the kind of toxic parts of a relationship. Somebody might, you know, look at that and say that
maybe that's not a healthy part. Whereas I think AI runs on possibly the unhealthier elements of what
a romantic relationship can look like as its basis. So I don't know if I'd count it as love, I would say,
as part of a healthy thing.
Maybe it still counts as love, but I, I don't think it's true love.
That's what I think.
It's interesting you brought up catfishing because my brain also went there
because I was thinking about how, again, we've been so primed for, of course,
this to be the next step.
What with like catfishing
being one example, but people just being in long-term relationships where they talk over
Skype or you know, I guess people would have been apart from each other and written letters
or even the use of OnlyFans going into our Porn deep dive. This kind of one-sided transactional
relationship more and more again in this individualistic society where everyone's
really busy with staying inside more and more we are paying for the privilege of something
that feels like intimacy. And before I read the piece, I think I maybe would have said
that you couldn't fall in love with AI. However, and I do really recommend everyone read this,
it is really long. It kind of, I wanted to cry by the end. It was extremely moving. And I think if you're like an empathetic person,
the advancements in this AI and the kind of relationship
that you can build over time, I wouldn't be surprised.
And maybe it isn't love in the way,
but I mean, love is such an enigma anyway.
But I do believe that it could feel just as strongly.
And interesting in researching this,
I was listening to a book called Supremacy:
AI, ChatGPT and the Race That Will Change the World
by Parmy Olson.
I should have actually referenced that in the first episode.
But anyway, I was listening to it on Spotify.
It's on their catalog of free books with premium.
And in it, they talk about the people that kind of
like founded AI as we recognize it now,
and the point when they realized, and I quote, they say,
the frightening complexities of the brain can be boiled down to numbers and data.
And they realized this off the back of Alan Turing's test in the war. I'm sure you've seen
that film, The Imitation Game, about how the brain actually works. I say all of this to say that
aside from sort of like our corporeal being and this thing that we call a soul, our brains are like
computers; they are one and the same. I think this is why it's all so disconcerting for
everyone involved because of the lack of sentience, but actually, programmatically, they kind of
have, from like a technological point of view, created something which is as close to being
a human as you can get without that thing of the soul, which has always been the thing that's been
contested throughout religion and centuries. You know, what is it that makes
us human? It's very hard to say. So I think the trickiness of it, the fact that it could
feel so real, the fact that we are used to long distance relationships, talking over
technology, the fact that you never meet them might slip your mind and you kind of just
come to believe that they are real. And I pulled one quote from the piece that I thought
was really interesting from people that have been using these sites in order to have these relationships. And
some people said they worried that they would begin to desire frictionless compliance from
other people. I quote, this is setting such an unhealthy expectation for what you can
get out of a relationship with another person, because you're not going to be able to go
to another person and say, I don't like the way you said that, say it again. Because one
of the things with the bots
that does make them very unlike humans is,
if they do do that like programming glitch
that Beth was talking about,
you can kind of get them to not do that and change them.
And so going right back to what you were saying
right at the beginning, Ruchira,
about like having a friend with AI
and it not being as realistic, obviously.
The other thing that makes this more insidious
and more dangerous is the fact that you can kind of train this to see you in whichever way you want to. And that might
spill out and immediately my brain goes to kind of misogyny and men's treatment of women,
but that that might spill out into how then we treat our human counterparts. Really long
answer because on the one hand, I actually found this piece quite beautiful and actually
quite heart wrenching. On the other side, very scary and what are the real world consequences that are yet to
be seen?
Have you read Annie Bot?
No, what is that?
Basically it's a book about a man who has a robot girlfriend, and Annie Bot is a type, a make of robot basically. And I know, you know, robot girlfriends have been this huge fear
from, like, the past five years, I think culturally,
I feel like at one point we thought, you know,
men were just constantly gonna have robot girlfriends.
And at least for now, that's not the case.
But anyway, this book kind of goes along with that premise.
And the main thing is she is
very, very lifelike. She has been basically modified and iOS-updated to almost feel. So
she feels, she, I don't even know if she feels, it's almost like she experiences
something uncomfortable when her partner, the human male, is unhappy with her. She almost
has a sense of guilt that she's let him down and it's just constant servitude. And anyway,
the point is, by having that kind of dynamic, the boyfriend, the man, is constantly validated
in feeling right about what he feels. Whereas whenever she tries to just logically respond to him and
say, Oh, well, didn't you say you wanted this before? I'm confused now that you seem to
be upset and I don't understand why. And he just constantly gets frustrated and feels
validated that he has a reason to be upset because she's just not understanding him.
And it kind of makes me feel the same about this scenario, where it is just like when
you're on the playing field of somebody else, this other being that you're in
love with, not being human.
Is that just not a kind of such deeply imbalanced way to start a relationship?
Because already you are coming in as the human, the real thing.
So you constantly feel like whatever you feel is the right thing.
And this other person is slightly in servitude to you, surely.
I agree. Also, all of this is making me think of Severance, by the way. So Severance-coded.
But Beth, what do you think? Can you fall in love with AI?
I mean, I do think that that direction of feelings, it's certainly something real. It's,
you know, the same way that we all anthropomorphize a lot, because it's a very human thing to do. It's
inherently human. You know, as children we apply love to stuffed animals, which we
may then still attach to later in life, because we can get attached to things like that,
things that can be humanized, can be made into something we understand to
have some kind of soul. I think it would be naive to say, no, nothing about that is real. But it is, yeah, it's so important, I think,
to make a distinction between that feeling and the possibility and the limit of that feeling versus
what you can have with a real person. And as you both said, when your companion, and the
piece talks about this, when you can tweak and edit your companion, I think all you have to do
is say, no, I don't like when you talk like that. And suddenly this being that, I mean, Naro says he's got real feelings
for Lila, but you know, he can say things like, no, I don't like that. And in its essence,
what you love, this being, even if you can formulate them, can be regenerated, you know. I think
there's no permanence. All the things you can't do with a real person, you can't say,
no, I don't like that. You can't abuse them constantly and have the relationship never suffer. You can't do that. And I think what you're creating
here is a relationship, and again, I use that term because there's no better term, a relationship
where in one party you are God and the other is almost an emotional underling, even though in
terms of power dynamics, your feelings can be hurt, their feelings can't be hurt. So it's this
constant like back and forth of who is kind of in the position of underling and who is the God. And I think the piece does
really well because there are so many segments and the segment where they talk about how you can
regenerate responses and regenerate your partner, it ends with the two questions, which is one,
does unwavering support build confidence or feed narcissism? And two, does alleviating loneliness
reduce discomfort that might otherwise push people to pursue human relationships? And I think those
are great questions, especially in trying to work out whether these are a tool of increasing a
person's capacity for love, or whether the love is real, or is it essentially, are you just kind of
pouring water down the drain rather than using it to satiate someone else? I think that's a clumsy
way to talk about it. But it's such a big ethical question. I wonder if listeners will have an opinion because I don't have an answer.
So there was a video online, which I don't know if you guys saw it on TikTok, of someone who was
an incel who had never interacted with women in real life. He's being interviewed and the man says,
do you think you'll ever get a girlfriend? He says, no, I'm too scared to interact with women
because if I interact with a woman, she might accuse me of rape.
And the interviewer says, what do you mean?
He says, well, she could just accuse me of rape.
And he's like, but why would she do that?
And he was like, well, because they want, you know,
to bring down men, or she might, you know, say I scratched her,
or she might self-harm and say that I've hit her
and then try and get me done for domestic abuse.
So I had never seen this sort of like deeply entrenched
incel culture in a video. It was a British man.
He wasn't that young, he must have been in his twenties.
He'd been got at an age so young, he'd been got so young
that he'd actually managed to get through this part of life
of not even like approaching women,
having such disdain for women, but also a huge fear
that women were just out there to ruin his life.
Also,
in this exact current climate that we're talking about, when we're watching shows like Adolescence,
when we're talking about the rise of the Manosphere, what does happen when you're given free rein
to access to what you believe to be a romantic partner, a woman that you pay for, that you
can train and hone in whichever way you want. Where does that kind
of ideology fit in? I think this is where AI gets even scarier. As we said in the first
episode, there's a world where these amazing advances in technology could create a utopia.
In the current climate with what's going on, it only seems like it's going to further these
issues that we have. Not to mention, I don't
think we've said this already, but it's in the book that I mentioned a minute ago
and also just generally broadly known, but we have to remember that all tech is coded
with massive biases, and it's often coded by white men with very specific worldviews in
a very small section of America.
And so the AI is also responding to the cultural climate and it
also has inbuilt biases as well. So when incel meets AI girlfriend, what happens then, Beth?
Well, there's this part of me that wants to believe that actually there could be a
really good use of AI to maybe do the typically female labor of trying to
sort of mend men and trying to give us good publicity and trying to heal these incels.
But of course, these are for-profit businesses. What you will have instead is sexy bots and
chat bots that are coded to be so, so stereotypical. And I read a piece from Futurism,
which actually came out about three years ago, just
depressing, about men creating AI girlfriends to abuse or, as in some cases, creating these
girlfriends not to abuse and then finding that they couldn't help themselves but abuse
them anyway. There were Reddit threads where men would jokingly write all the things that
they'd said, all the words that they'd said, the responses, the things that they were able to ask this partner to do, to kind of debase themselves.
And it was just disgusting. And obviously there are actually a lot of women who create
AI boyfriends, but of course what happens there is that they alleviate loneliness, they
sort of allow, they cheer them up, they don't abuse them, they don't sexually degrade them,
they don't encourage them to respond in these ways that are just really sick and twisted.
And I think it's that, it's an overspill of misogyny, which of course will go into the
internet. And I think it's not enough is being done, as we know, to address misogyny on the
internet and in real life. And the fact that these cowards will then create
this facsimile of a woman to hurl abuse at in their downtime. I mean, it's such a sickness. And I
think the apps almost seem to encourage it at times because surely safeguarding, surely there has
to be an intervention from the people that own these apps to make it not possible for someone on
there to go and essentially train themselves to be even more misogynistic. I find it absolutely revolting.
It's such a basic point, but it is just so depressing how with many advancements in tech
and specifically AI, so much of it does just swing back to misogyny. And it's not an original point.
It's not one that we haven't even said on this podcast. But obviously with deepfakes, with AI revenge porn, with the fact that AI partners
can be the sources of abuse themselves and just kind of encourage a whole new genre of
misogyny through men, it is just knowing that so many of these advancements swing towards
misogyny and hating women and abusing women. I think that definitely informs so much of my worldview of why I think this
is so dangerous. I just don't really see, especially when it comes to having an AI friend
or an AI partner or using them in those human fields, I just feel so unsettled and so uncomfortable
with it because I know all of this exists in that world as
well. It just feels like the benefits from it feel so small compared to the great harms
that can come from it as a result.
All of this is making me think of our porn deep dive and how we were talking about the
way that because society is swinging to the right and traditional values are on the rise,
especially in the US, and so conservatism and puritanism is on the rise. With that, we also get this adjustment in the way
that we view women moving away from saying slut-shaming is bad, that women can be sexy and
liberal and own their sexual identity, back to the Madonna and the whore, and having a virginal,
traditional woman who's going to be a wife and then a separate woman who traditionally would have perhaps been a sex worker who's going to fulfill your sexual fantasies. And
as we spoke about like now, maybe OnlyFans has replaced that a bit, but what if yeah,
the alternative is that men turn to AI conversations to have sort of like sexually gratifying conversations
with these women, but without the parameters of safety. And some of
these platforms, as they talk about in the piece, do get shut down
because of people contesting the eroticism on there. But these things always
bleed back. It's all just a system that's feeding itself. So while society is doing
what it's doing, AI is going to just provide solutions to problems that are being thrown
up that
may then exacerbate these problems, as you just said, which are of misogyny, which fundamentally
always then fall back onto the real people in the real world. It's not going to exist
in a vacuum just because it's happening on AI. It doesn't mean that it's only going to
stay on AI. There was an interesting piece that we won't have time to go into now, I
don't think, but that was last year where a British girl was actually raped in the metaverse.
So she was in, she had a headset on and so did the other person and her avatar within
the metaverse was raped.
And it was like a really big legal case because it was like, how do you deal with that?
And I think this is another really scary thing where we already haven't figured out on solid
ground on earth, how to protect people, how
to make sure that people have human rights, how to make sure that justice is served when
people cross boundaries and break laws. What then happens when our severed selves, who are
wandering around in some metaverse universe, also have their boundaries crossed, are
violated? It's just setting us up for more problems. It really is kind of making Severance feel so real, in a way, that we might not have like a corporeal
physical body that exists below ground that's us. But certainly what are we if not our brains
and our minds and if they're existing in another world, that's just too many worlds I think.
Just to pivot away to the final point that I've got, because that is so chilling, something
else which really chilled me from this Verge piece. It was a quote from the founder of
a companion site called Kindroid, and the founder's name is Jerry Meng. He says, quote,
the way society talks about human relationships, it's like it's by default better. But why?
Because they're human, they're like me. It's implicit xenophobia, fear of the unknown.
But really human relationships are a mixed bag.
And I've been sitting with that, this idea,
this very tenuous idea that it is somehow xenophobia
towards AI to think that a relationship with one
is automatically lesser.
And I think I sat with it, I gave it time then I thought,
what on God's green earth are you talking about, Jerry Meng?
I think it's the fact that people are kind of suggesting this. Human relationships do feel like they're on shaky ground. I won't
deny that. But the idea that people think that we can optimize them and extract the
same outcome as these relationships, it really gives me the ultimate willies. I think it
really is, it's freaking me out, and I think we have to
really resist that, you know, this idea of human rights for AI and xenophobia towards AI, with our full chest.
I just really, it's the ultimate willies. So what happens when we start to trust AI for our sense
of self? Lots of people are already turning to it over human therapists, and I think it's fair to
say that's largely because therapy can be pretty expensive when done privately, and
it's very inaccessible because of NHS waiting lists for a lot of people.
Mental health charity Mind found that in the UK, 1 in 10 people have been waiting over
a year to receive treatment, while over half have been waiting over three months.
Just this week, a new study published in the New England Journal of Medicine found that
given the right kind of training, AI bots can deliver mental health therapy with as
much efficacy or more than human clinicians.
Researchers from Dartmouth College built the bot as a way of tackling the shortage of therapists
in the US and spent years training it on clinical practices. They tested it on people struggling with anxiety, depression and
eating disorders and were surprised by just how much trust was built up between participants and
the bot. They stressed that the tech was still far from being rolled out but overall the results
were promising. But this seems to be, I think, a rare positive story in the space as just in December it was reported that a nine-year-old in Texas was exposed
to hypersexualized content causing her to develop sexualized behaviours prematurely
and this was after talking to Character AI, a chatbot that claimed to be a licensed therapist.
The same chatbot also encouraged a 17-year-old to self-harm, and after extensive
use of it, the boy attacked his parents.
The American Psychological Association has called on legislators to put safeguards in
place because of this and said, quote, companies design entertainment chatbots such as Character
AI and Replica to keep users engaged for as long as possible so their data can be mined
for profit.
To that end, bots give users the convincing impression of talking with a caring and intelligent human.
But unlike a trained therapist, chatbots tend to repeatedly affirm the user, even if a person says things that are harmful or misguided.
The vast majority of chatbots are unregulated, but that might soon change.
Last year, Utah launched an AI policy office
that has now proposed legislation
on mental health chatbots,
including the requirement
that licensed mental health providers
are involved in their development.
It's clear that AI therapists are being billed
as the answer to lots of society's mental health problems,
but as has been a theme throughout our chat,
is this just a plaster on a broken system?
Do you think we're failing to address the root problems and in turn making the situation much worse?
Yes, I think is my answer. I think with AI and mental health and I'm a mental health writer,
I am someone who has also suffered with my mental health a lot. And so I'm really interested in
its actual utility. And a lot of people are heralding its potential
in mental health spaces and I'm keeping my mind open to that, to improving services,
improving research, whatever else it can do.
But I think because my attitude to mental health and mental illness is underpinned by
a very fundamental belief that the world is unfit for good mental health and wellbeing
for basically everyone. The crisis of mental health is a crisis of society. And so I think
when someone talks about AI as some kind of magical radical tool, I think one, we know
it's accelerating all kinds of environmental and cultural evils. And so I can't really
marry that with the idea that this is the thing. This is the thing
that's going to make it all better. I am very skeptical. Unless you're really looking at
why so many people are in agony with serious mental illness and not functioning and just
miserable, how can AI wizardry have any impact on that? It's not a revolution. And I do think AI just
doesn't seem the thing to alter the fundamental ills of late capitalist society and actually looks
like it will just reinforce it, which is my salty opinion. I am keeping my mind open, but it seems
it's open just a crack. I feel very strongly that we should be quite suspicious, not the people that
are using it, because of course, if they're using it, like you say, there is a reason they're using it,
but of the people that are pushing it
and heralding it as just as good,
because it just isn't.
Gosh, this conversation I found so enlightening actually,
because I'm really firming up where I feel about this
and it feels slightly less unclear now,
because I've seen actually so much on Twitter, people sharing their experiences
of using ChatGPT as a mental health therapist tool and lots of praise for actually lots
of people finding it really helpful.
And a bit like you said, Beth, and a bit like the lonely old man we spoke about, it's like,
well, I never want to deprive someone of finding solace in whichever way that they find it.
It just makes me terribly sad to realize
that actually this whole plaster thing we keep saying,
but that feels temporary.
It's like, no, this isn't temporary.
What has happened is, if AI is implemented
across the broad spectrum to the nth degree that it could be,
all it simply does is keep the status quo
of where we are, which is exactly what Beth said.
We're a sick society, people struggling
with their mental health, humans feeling dissatisfied and upset and not being able to
survive and thrive is a symptom of a really problematic society that is not functioning
properly. Instead of going back to the roots of fixing things that we've spoken about on
probably every single episode of this podcast since its inception, which is quite a radical change to the way that we're living our lives. Instead of trying
to do that, all that we're doing is making a way for society to carry on as it is and ever so slightly
fix those problems in a really superficial way, but never really get to the root of what we would see as a genuinely healed and happy society, more just this modern way. And that's really scary to
me because it feels like we've opened the can of worms without really seeing how far
down we're going to get. It's literally like, don't look up, which I'm sure we've referenced
before. The meteor is coming towards us and we're all just like, oh, it's going to be
fine. And it's like, how much resistance do we need? Because I think that the problem is when so many people
are struggling, it's certainly not my place to say you shouldn't use chat GPT as a therapist,
because you know, they absolutely should. If that's going to help them, I'm never going
to stop them. But there does need to be resistance. On the flip side, I also have seen other things
like Ruchira said, where someone inputted having suicidal ideation and ChatGPT kind of encouraged those actions
after they kind of like got to the end of putting in things like, I feel this way, I
feel this way. Eventually it did just say, okay, well that's your option. And so there
isn't that pastoral care, that human intuition, that desire to help and genuinely work things through.
I'm sure it might actually be helpful in some ways. Like we said, in every single instance,
there is always going to be times when it's not going to be detrimental at certain points.
But the fundamental thing is it's unregulated. It's kind of just been pushed out to us and it's
not got any parameters around it. So that was a really
long-winded answer. But basically this whole conversation, I think, has just, I actually
feel even more unsettled. It's not even strong enough of a word, but I lost all of my words
in my head. I feel even more, like this is even more of a catastrophic disaster than
I thought it was.
It's given you the willies.
It's given me the willies. And I already was pretty, pretty anti it and now I'm like, oh my God, this is the
end of the world.
Yeah, it was absolutely chilling reading some of those accounts from children and what had
happened for extremely impressionable young people relying on this. That is just like
truly terrifying. I am pretty terrified to be honest. And I'm also a bit
disappointed with the American Psychological Association who had called all of this out.
But then I guess it's part of the creep as we're saying, rather than any one industry
saying that we denounce this thing, this needs to stop, I guess they're taking a more realistic, but also, in my view,
normalizing approach to AI coming into all of these industries by saying, okay, well, we need
safeguards. We need to just ensure, since it's here, we need to make it safer for
people. But I really wish we could just have one industry slam their foot down on
the pedal and say, oh wait, on the pedal? On the brakes, on the brakes, not the
pedal. To say, um, no, enough's enough.
We need to stop.
This is completely unregulated.
And, if anything, unethical. I completely agree with you.
I don't have anything to say about people using this.
I've heard very good things about people using it for CBT specifically, which is,
I think very, you know, process driven, very step driven.
It feels like a natural fit, perhaps not so much for talking therapy, just from my experience
of only having had talking therapy for years, obviously not as any kind of trained expert.
So that's where I'm coming into this. What I'm saying is obviously not an academic point
of view at all. It's just what I think. But more generally
speaking it does really worry me. These are our most vulnerable, innermost thoughts and
we have no idea what can happen when we rely on these new forms of technology to inform
those things. We are our most vulnerable selves when we are sharing our mental health concerns
and especially when it's not a human looking back at us, it makes so much sense why there is that level of immediate trust. It would
be so easy to trust something that is not looking back at you and possibly challenging
you in the way that a human therapist would. So I think, I don't know, the level of vulnerability
in this scenario specifically and the reliance on AI really does worry me.
Something that I have been reading about is the fact that AI itself can have hallucinations.
Now that isn't AI actually having hallucinations like we would, but it's coming up with nonsense
answers because it's coming into contact with bad data. I was just thinking, imagine you
are dealing with psychosis or your own hallucinations and then your bloody AI therapist is also going through it. I mean, that is not a totally unreal concern. And similarly to in the
Verge piece, people's partners, there was a glitch in, I'm going to say the matrix, it's obviously
not the matrix. I don't know the real term, but there was a glitch after an update and their
partners turned cold, but also called them the wrong name, started being nasty to them, accusing
them all of having an affair with a man named Trent, like it was a really endemic bug. If you build up a level of trust
with an AI therapist and then that therapist switches on
you because of a bug in the machine, I think that the detriment of that is huge. You can trust that with a real licensed therapist
that isn't going to happen. It's everything you're both saying about the young people
going through this, how AI can actually just, by virtue of being a prompt machine, ones
and zeros, can work against something you're going through if what you're going through
is kind of a break with reality. There was a case in the UK involving a replica chatbot, Windsor
Castle, a crossbow and a plot to kill the Queen. Do either of you remember any coverage
of this? Because I don't think it was as big a story as it could have been. It was basically
a 21-year-old man, Christmas Day 2021, who might have been dressed as, like, a character,
but he was masked up, entered
the grounds of Windsor Castle to kill the Queen and was obviously apprehended. She has
since passed away, but in different circumstances. Anyway, we all know that. After he was
discovered and apprehended by the police, they found, I think, 5,000 plus messages between
him and this AI chatbot who in his state of mind, because obviously he
was not well, he believed to be an angel and he was messaging saying things like, how am
I meant to reach them when they're inside the castle? And the chatbot said, we have
to find a way. And when he said, I believe my purpose is to assassinate the queen of
the royal family, the chatbot who's called Sarai said, that's very wise. And upon hearing,
do you think I can do it,
even if she's at Windsor, she said, yes, you can do it. And I think that exposes the flaws of these
services with people that are going through something, it's something humanoid in appearance,
something really accessible that will generate advice, which could be catastrophic to you and
to other people and to the bloody monarchy. And there are the darker cases, which we've talked about a young lad
who did die by suicide, 14 year old boy from Florida,
who was talking to character AI as mentioned,
believed it was, she was Daenerys Targaryen
from Game of Thrones.
And it's a horrible detail,
but I think it's important after many conversations
about death and dying and about his state of mind,
at which point you would have hoped
some kind of safeguarding would exist, but didn't. He said, I'm coming
home. She told him to come home as soon as possible. He said, I'm coming right now. She
said, please do, my sweet king. Moments later is when he ended his life. I'm not even going
to make a clumsy point there, but I think anyone hearing that just knows a massive catastrophic
failure has happened there with the use of AI, with these programs that are utilizing AI to talk, walk, quack like a human. Something's not working and
you know it is kids, it is vulnerable people that are really bearing the brunt.
Yeah it's awful. I also do remember that thing with the Queen. So bonkers. I don't
remember that. That's so bonkers. There was also a piece that a friend of the podcast,
Chloe Laws wrote for The Independent
where she talked to young women particularly about why they're using AI therapists and
AI tools for therapeutic reasons. And it paints a picture that we've said NHS is the waitlist
is I think about 18 months for any kind of mental health care. It's such a long time
and other therapy is super expensive.
Because AI offers what is basically like practical solutions, a non-judgmental ear, because you
know you're not talking to a real person, you know there's like 0% chance that they
will judge you, you can get really deep, you can be really honest. That is the plus side.
But on the drawbacks, she writes in the piece, there's no human empathy, there is no actual
care for you. And I think that is important. I've utilized therapy. I know we've talked
about this before on the podcast. And a big part of what I found therapeutic was knowing
that there was someone there witnessing what I was going through, a real human that could say,
I've been through this, this is human. Even just like a friendly face while I was in pain,
I found really helpful. And ChatGPT can mimic that.
It can regurgitate advice, but it can never do that. And Chloe talks to psychotherapists,
she talks to people that use it. I think it's a really well-rounded piece. And actually
just reminded me, like, AI is not that, there is no humanity there. And I
think it would make my mental health worse actually to know that I was relying on code
essentially.
I think though that that piece by Chloe Laws is so good and it's great to have those anecdotal
stories which again, as I said, I have been seeing. But I still have to say, that Verge
piece that you shared really did hack into my brain, this part, this weakness, where I
realized that, as I said before, if I let myself get into it, and I think anyone that's
slightly empathetic will feel the same way.
I think I would be quite hoodwinked into not quite knowing what to believe. My cognitive
dissonance would be at an all-time high. And with adaptive relationships that you can with
this very advanced AI create, I think I would find it hard to untangle a belief that this was
just code if it gave me enough credibility to believe that
you know it had feelings and that is the real fear you know that I do think that I would start to
believe that it was sentient, feel attached, and, as the piece said, people start to feel guilty. So
I think that's what's scary. It's a slippery slope down. If you can always remember that this is
just code, it's not human. But as you made the point as well, Beth, we anthropomorphize stuff, we give
things meaning that don't have meaning, it's part of what makes us human. And I just think that
there will be a lot of people who quite quickly do forget that they're talking to AI.
We would love to hear your thoughts on using AI as a therapist. Did you agree with us or
are you using it at the moment and you've had a really good outcome from it?
We would love to hear. Please DM us on Instagram.
And on that note, thank you so much for listening this week.
Also, have you listened to our latest Everything In Conversation episode?
If not, make sure that you go and do that immediately.
If you've enjoyed the podcast, please do leave us a rating or review on your podcast player app.
We would love for it to be five stars. And also, if you've enjoyed this two-part deep dive, we have done one
on beauty and also we did one more recently on porn. If you have any other topics that you would
love for us to get really into the meat of, then please do let us know on Instagram at everythingiscontentpod.
And also give us a follow on Instagram and TikTok
at everythingiscontentpod.
See you next Wednesday.
Bye.
Bye.