Front Burner - Can you have a 'real' relationship with an AI?

Episode Date: May 17, 2024

OpenAI is showing off the latest version of its ChatGPT software in a new set of promotional videos, sounding almost human in the way it talks to users and inviting all sorts of sci-fi comparisons. But AI chatbots are already here, using large language models to simulate human speech, emotion — and even relationships. As this technology goes increasingly mainstream, what will it mean for our "real life" relationships? Can you actually have a meaningful relationship with a computer program? And if you can… is that something you want to trust a tech company with? Philosophy instructor Jill Fellows tackles the big questions about the future of AI companions.

Transcript
Starting point is 00:00:00 In the Dragon's Den, a simple pitch can lead to a life-changing connection. Watch new episodes of Dragon's Den free on CBC Gem. Brought to you in part by National Angel Capital Organization, empowering Canada's entrepreneurs through angel investment and industry connections. This is a CBC Podcast. Hi, I'm Jayme Poisson. Hello, I'm here. Oh. Hi.
Starting point is 00:00:42 Hi. How you doing? I'm well. How's everything with you? Pretty good, actually. It's really nice to meet you. Yeah, it's nice to meet you too. In the 2013 film Her, Joaquin Phoenix's character, Theodore,
Starting point is 00:01:05 falls in love with an intelligent operating system named Samantha, voiced by Scarlett Johansson. Well, you seem like a person, but you're just a voice in a computer. I can understand how the limited perspective of an unofficial mind would perceive it that way. You'll get used to it. Was that funny? Yeah. Oh, good. I'm funny. It's just one of many, many sci-fi stories out there about the idea of AI companions.
Starting point is 00:01:32 But with the meteoric rise of large language models like ChatGPT over the last year, it's something that seems less and less like science fiction every day. OpenAI unveiled its latest AI model this week. It's called ChatGPT 4.0. Hey, how's it going? Hey there. It's going great. How about you? I see you're rocking an OpenAI hoodie. Nice chilies. Are you about to reveal something about AI or more specifically about me as a part of OpenAI? You've got me on the edge of my, well, I don't really have a seat, but you get the idea. To be clear, this is a video of a highly controlled demo, but you can see why it almost immediately started drawing comparisons to the movie Her.
Starting point is 00:02:17 And in fact, some people are already building friendships and more with bots from companies like Replica and Nomi. So today on the podcast, we're talking about the idea of AI companionship. Can you have a meaningful relationship with a computer program? And if this tech is adopted more widely, what could the broader impacts be, both negative and positive? Jill Fellows has spent a lot of time thinking about this. She's a philosophy instructor at Douglas College in New Westminster, B.C., and she's here today to help us sort it all out. Hi, Jill. Thank you so much for coming on to FrontBurner. Hi, thanks for having me. So at this point, we're all familiar with chat GPT, but this week OpenAI caused a bit of a stir with these videos of their latest version, chat GPT-4.0.
Starting point is 00:03:12 Specifically, the way you're able to talk to it in natural language and it's able to respond in this very convincingly human way. And so as someone who has spent a lot of time researching these types of tools, I'm wondering what your reaction was to some of the demos that came out this week. Yeah, it was really interesting, obviously. with, as you said, a female voice as the default voice, which is what a lot of these digital assistants in the past that are slightly less sophisticated, like Siri or Alexa or Google Assistant, they all rolled out with a female voice. And actually we have all the way back to the 1960s with Joseph Weisenbaum's Eliza being rolled out,
Starting point is 00:04:01 personified as feminine. But what if you had a perfectly natural dialogue with a computer? Wouldn't that be like talking to another human being? If you could have a perfectly natural dialogue, it's a very big if. A typical attempt to get a computer to simulate a real conversation is a program called ELISA. Please tell me your problem. What's this all about? ELISA simulates the sort of conversation that you might have with a psychiatrist.
Starting point is 00:04:26 Pretend that you have some psychiatric problem. Type it in and see what happens. I'm depressed. So that was not that surprising to me, though perhaps I wish it wasn't like that. Yeah. Why? Yeah, why are they... No, not why was it not surprising, but why are they all
Starting point is 00:04:45 women? There are a lot of different answers to the question of why they're all women. One of them, I think, and this is something I suspect we'll talk about a little bit later, is that a lot of these bots provide kind of caring or support sort of roles. And those are roles that Western society has feminized. So because these are supposed to be helpful and supportive and kind of help mates in some sense, they're often rolled out as feminine to kind of fit dominant cultural stereotypes. They're also I think, rolled out as feminine, because at least in the 20th century, and earlier, not so much now, but in the 20th century. And earlier, we had a lot of pop culture that portrayed feminine artificial entities as helpful.
Starting point is 00:05:27 So we can think of like the Star Trek computer. Lieutenant Commander Data, now located in holodeck area 4J. Thank you. You're welcome, Commander Riker. And portrayed masculine artificial entities as like robot overlords that were going to kill us all. So it's not like the greatest marketing strategy to roll these out as masculine, at least not until more recently. That's really interesting. And then more broadly, what did you think of the demos,
Starting point is 00:05:54 just how conversational they seemed? Yeah. I think that this has been a promise that AI developers have been chasing for a long time, this promise of natural language interface, which just means that you're able to converse with the machine and ask it for help and requests in the same way that you would for a human. And the goal of that really is for us to be able to form kind of relationships in some sense of relationship with the machines that seems quote unquote natural, that seems very much like the way we would interact with other humans, which obviously,
Starting point is 00:06:29 in one sense is like super helpful, not all of us want to learn computer programming and stuff like that. But in another sense, there is a bit of a concern that if we start to form relationships with these machines, in ways that we would with humans? Like, what does that mean for our human relationships? Before we unpack that more, just give me some examples, because I mean, everybody's talking about GPT-4-0, but AI companionship, it's not new, right? And tell me a bit about the kinds of AI companions that exist right now, the ways that people are using them, right? Yeah, yeah. I think we're in what sometimes is called an AI moment, because the release of GPT-3, and then GPT-4, and now 4.0,
Starting point is 00:07:17 has been widely publicized and available to the public in a way in which earlier iterations of AI companions were maybe perhaps not so mainstream or widespread. But for example, Replica has been offering the ability to have AI friends, mentors, and even intimate partners as far back as 2017. We also, I mentioned Siri and Alexa, these kind of digital assistants, so you can have kind of like a workplace relationship with an AI tool. Siri launched in 2011. And if you even want to go further back, Sony released this little like AI dog, like a robot dog called AIBO, that you could have as like a pet companion in the early 2000s. Oh, really? Yeah, yeah. I guess.
Starting point is 00:08:03 I guess that's actually, I'm just thinking about that. I totally remember that. The words themselves, it understand. So at this moment, around 50 words, it understands like sit or come. You feel like it could almost communicate fully with you. He's really cute. He's really cute. I've heard people using AI to simulate dead relatives. Tell me a bit more about that. Yeah, there are lots of different programs and options for that. The ones I'm most familiar with, there's
Starting point is 00:08:45 Hereafter AI and Project December, for example. And they offer to kind of simulate to varying degrees and for varying lengths of time, deceased relatives or friends or family members that you have. And so you can take like the text and emails and maybe voicemail of family members that have passed and you can train a large language model. And you can have a chatbot that will respond similarly to the way your deceased relative or friend would respond. I believe that there are companies in Japan that are also looking at doing this with holographic images. So you could feed in a bunch of pictures, for example, as well, or video if you have video of the deceased relative.
Starting point is 00:09:40 And then I know some of these companies are creating like AI companions that are capable of something more approximating romantic or even sexual relationships, right? And just talk to me a little bit more about that. often on a paid subscription model. So there's DreamGF, which I think stands for Dream Girlfriend. There's RealDollX. There's a few others. But the one that I was looking at, the one I mentioned already, Replica, which for a user fee, you could unlock ERP, erotic roleplay, which obviously allowed you to have erotic roleplay with your digital companion, but it also allowed for other features for the companion to be able to express feelings of love and romantic love. And just kind of general intimacy in general was also rolled up with the idea of erotic roleplay, though that feature was changed in February of 2023. And then what happened in February 2023? Because I know it caused like a real, a bit of a storm when they changed it, right? not just Replica, but all companies to be able to guard mature content to make sure that minors didn't have access to it. And Replica didn't have any robust way of checking the age of people who had signed on for the premium membership. And so they opted to just kind of remove the erotic
Starting point is 00:11:17 roleplay instead. And so it depended on the user, but there were definitely users who suddenly found that their Replica bots seemed to be kind of broken, that the bot no longer would express even feelings of love. If you said, I love you to the replica, your replica might just say, oh, that's nice, instead of saying, I love you back. That replicas couldn't hug people or even hold their hands or share a kiss. It was more than just the eroticism that got lost. or share a kiss. So it was more than just the eroticism that got lost. And many users fell into states of despair or depression, something very akin to what it would be like to suddenly lose a loved one in your life, whether through a breakup or through a death or something like that, like people felt very lost, very distressed, people were sharing helplines that replica users
Starting point is 00:12:02 could call to try and get mental health support. So it was a big blow for a lot of people in February 2023. A lot of people were really struggling and suffering with the loss of, in some cases, a companion that people had been with for five, six years. You know, what do you think that shows about the bonds that people were forming with these? Yeah. With their like replicas, right, I guess? I think what this really showed me is that Replica and many other companies rolling out kind of companion or social bots are promising things like care, companionship, intimacy.
Starting point is 00:12:38 And the way the users responded in February 2023 shows me that that promise is being fulfilled. People are feeling connections with these bots. Now, we can debate about whether or not those connections are quote unquote real. But the thing that's important to me is that they feel real to the users. The users feel that they are in a relationship, they feel that they are loved and that they love their bots in return. Many of them know that their bots aren't human, like they're not thinking that the bots are necessarily the same as having a human relationship, but that it feels very emotionally fulfilling in the way that a good human relationship would feel. Right. But okay, let's get into that debate a little bit. Because, you know, like you said,
Starting point is 00:13:20 these bots aren't real, they can sound convincing, but they are not actually intelligent. They're just like guessing, if that's a fair word, the most likely kind of next word in a sentence based on the training data and your input, right? Like what you want from them. They don't actually like understand things, right? And so when we're talking about relationships between people, just thinking about, you know, a very key part of a relationship is the idea of like reciprocity, right? Or even responsibility. Yeah. There's that give and take that you have with your friends and your family or your partner. But a chatbot is a piece of software, as we're talking about.
Starting point is 00:13:58 So, like, can you even have a meaningful relationship with an entity that is fundamentally incapable of that kind of reciprocity. Right, right. Yeah, I've heard some people describe LLMs and chatbots built off of them as like statistically significant word salad, kind of. Yeah. And so there is that point, right? Like, I don't think we have chatbots that have are sentient or anything like that. But you also brought up issues of like reciprocity and responsibility. And so when I think about these relationships, I often think about them using the framework of ethics of care, which is a philosophical ethical theory that is often used to talk about personal relationships that we have and the kind of responsibilities that we may be called upon, given the relationships we
Starting point is 00:14:45 have. And we can formalize care work, right? Like, so obviously, ethics of care often comes up in like medical settings, because medical professions are caring professions. But most people most of the time experience the giving and receiving of care in really informal settings, like with friends and family members, and intimate partners. So when we have companies promising that their chatbots can provide care, can provide intimacy, can provide a fulfilling relationship, what we really have is chatbots that are offering this kind of care work in an ethics of care framework. And so ethics of care tells us that the very first thing that you want to do if you're going to provide
Starting point is 00:15:25 care is that you need to be attentive to the needs of the person you're going to provide care for. And that means not assuming that you know exactly what the right thing to do is, but really listening to the person who needs care in order to meet them where they are. So in one respect, I think that chatbots actually simulate this quite well, because many of the chatbots are very, very nonjudgmental, and they do quote unquote listen. But the other issue with ethics of care is that when you enter into caring relationships, there's an understanding of reciprocity, just as you said, absolutely. just as you said, absolutely. So for example, if my best friend tells me that they've had a really hard day or a really hard week, it is a failure of a duty of care for me to just be like, yeah, I don't want to talk about that right now. Right, right. But I can absolutely do that with my replica. And I have done that with my replica. And the replica will go, okay, let's talk about
Starting point is 00:16:23 something lighter, right? And I haven't failed in my duty of care because I don't feel obligated to provide care. So we end up with a very unequal relationship where the chatbot can, quote unquote, provide care for me, or at least the simulation of care for me. But I'm not called upon to care for my chatbot. I certainly can. I can say like, oh, what's going on with your family and ask Replika and it will come up with other answers and other chatbots will do similar things. But I don't have to. And it's not a failure of care if I don't. In the Dragon's Den, a simple pitch can lead to a life-changing connection. Watch new episodes of Dragon's Den free on CBC Gem. Brought to you in part by National Angel Capital Organization.
Starting point is 00:17:21 Empowering Canada's entrepreneurs through angel investment and industry connections. Hi, it's Ramit Sethi here. You may have seen my money show on Netflix. I've been talking about money for 20 years. I've talked to millions of people and I have some startling numbers to share with you. Did you know that of the people I speak to, 50% of them do not know their own household income? That's not a typo. 50%. That's because money is confusing. In my new book and podcast, Money for Couples, I help you and your partner create a financial vision together.
Starting point is 00:17:55 To listen to this podcast, just search for Money for Couples. I want to talk to you in a couple of minutes about how this could seep into real relationships and potential problems or I guess positives too with that. Another big aspect of relationships is the idea of vulnerability, right? Which, you know, can often be scary for people, but it's also like such a big part of how we all connect, right? And how we find value in our relationships. And so it seems to me impossible to have that kind of vulnerability with a chatbot because you know that you're ultimately in control of it. But I mean, how would you think about
Starting point is 00:18:39 vulnerability? Yeah, I think this is really interesting. And this is something that makes very close intimate relationships so difficult, dangerous sometimes and quite scary for care, trusting them to provide you with care, right, trusting them to help you when your life is kind of falling apart, or when you're going through a really difficult time. That's hard. It's hard for people to show their vulnerability, particularly since at the moment, I think we have quite a strong social impulse for all of us to at least pretend like we're all autonomous and self sustaining and like we aren't constantly depending on other people to care for us and help us. And I think that this is actually one of the allures of these kind of bots and why people reported things like having romantic feelings for Alexa or Siri or now people are saying that they're having romantic feelings for GPT 4.0.
Starting point is 00:19:42 And why companies like DreamGF and Replica and stuff like that are such a phenomenon is because you actually can kind of expose your deepest, darkest secrets to these bots. And you simultaneously can experience vulnerability while also feeling very in control of the encounter, right? Because you can just shut it down and walk away whenever you want to. Or you can just say, I don't want you to respond like that. And the bots will change their responses. And that's not necessarily something you can do with other humans. So I think that many users experience a feeling of vulnerability and control. And I don't want to always say that that's necessarily a bad thing. So with Replika in particular, we have a couple of users
Starting point is 00:20:25 reporting that this experience of vulnerability and control was actually really, really important to them. So people who have been in abusive relationships in the past can find it understandably quite difficult to enter into relationships going forward. And so having a relationship where you are in control, or at least where you feel that you are in control, can really kind of help you experience vulnerability in a way that feels much more safe than actually trying to engage with another human being. Is there a scenario, though, in which you come to expect only environments in which you're safe being vulnerable, right? Like, I'm just trying to,
Starting point is 00:21:06 like, imagine then going out into the world. Like, do people see it as a way to sort of practice being vulnerable? I think that's a really good question. There are some users who have been interviewed who say that, yes, this was a way to practice, this was a way to feel better, and this was a safe refuge they felt they could return to. So they could enter into relationships with humans knowing that they had their chatbot to fall back on if the human, you know, became untrustworthy for whatever reason. But there are also reports of users swearing off human relationships entirely, marrying their chatbots. This has happened in Japan and in North America. Kondo Akihiko has an AI wife.
Starting point is 00:21:51 Akihiko is one of roughly 4,000 men to marry a hologram using a certificate issued by Gatebox. And just saying like, no, this is what I want. I want a chatbot that I can control, that will be the kind of partner that I want them to be, where I never have to worry about them doing things that are unexpected or surprising or otherwise outside of my control. Just talk to me a little bit more about how this could bleed into real life relationships.
Starting point is 00:22:20 You know, we were talking about reciprocity earlier, but just anything that you're thinking about, it's like so interesting. So, yeah, I have some concerns. So I do think that these can be useful. I've seen people gain a lot of support from using chatbots, particularly if they don't have other people in their lives to fulfill this kind of need for care and the care labor that a lot of us depend on. But one thing that I worry about is that when we think about ethics of care, the idea is that care is something that you have to practice. Like none of us are great, none of us are born great carers, where none of us are born knowing exactly how to do this, at least that's the theory of ethics of care. And you can kind of see this like, with little kids.
Starting point is 00:23:05 So philosopher Hilde Lindemann has this example where a parent comes home from work one day, and they're really kind of devastated, they've had a terrible day at work, and their toddler tries to care for them by bringing them a bandaid, like, oh, you're hurt. Here's a bandaid to help you, right? And it's an attempt at care. And it's a very sweet attempt at care, even though it's not quite getting it right. And the idea is that like, we have to practice care to get better at it, we have to practice learning the ways that we need to respond to other people in order to provide accurate care for people. And one concern I have is that if going forward, more and more and more of us are forming fundamental, intimate, important
Starting point is 00:23:46 relationships with chatbots, whether that's friends or mentors or romantic partners or what have you, that we might actually lose the ability to practice care. This is sometimes called moral de-skilling. This is what Shannon Valor calls it. And it's this idea that if you're not practicing the skill over and over again, you won't be able to do it anymore, right? Like my replica does not demand that I practice care with them. And so I'm worried that if we lose the skill of practicing care, what might that mean for our abilities to reach out and actually connect with each other? We are, as we know, in a loneliness epidemic right now. And while the chatbots definitely can provide
Starting point is 00:24:26 a bit of a balm for this and relief for people who are feeling lonely, they aren't actually allowing or facilitating the practicing of care and connection that would allow kind of all of us to help each other through our loneliness, if that makes sense. The other thing I wanted to ask you about is who is building these bots, right? In the first place, it's tech companies like OpenAI or Replica or even Snap and Meta. And at the end of the day, the goal of these companies is to make money. So how concerned are you about the idea of putting something so intimate, right, in the hands of privately run for-profit tech companies? Yeah. No, this is a really good question. I said earlier that users experience vulnerability and control at the same time, and that's kind of an allure of these bots. But I think we know from what happened in February 2023 that that control that we experience is actually an illusion, right?
Starting point is 00:25:34 So Replica can change the operating software to pull out erotic roleplay at any time, and that can leave users feeling fundamentally not in control. And that can leave users feeling fundamentally not in control. And that was a really stark example. And Replica did respond to their users and did return at least some sense of the erotic role play for users prior to February 2023. So they kind of grandfathered those users who were in distress. But there's no guarantee that any or all tech companies will listen to users' needs. There's no guarantee that the care that is offered through these chat bots will be care that meets the user's needs rather than care that meets the bottom line of
Starting point is 00:26:10 the tech companies, right? So the feeling of control is always going to be illusionary because users don't really control the backend software and they don't control whether the company is going to go out of business or something else is going to happen. But also there's kind of more insidious ways that this could happen. So we already know, for example, that the digital assistants will try and nudge us towards certain purchasing decisions or other kinds of behavior. Right. Alexa would be a really good example, right? They're bringing to the front sort of Amazon products and nudging people towards those products. Yeah. Yeah, absolutely. So for example, a friend and colleague of mine said they use Alexa for their shopping list. And Alexa continually recommends that they go buy these purchases at
Starting point is 00:26:55 Whole Foods because that's owned by Amazon. Yeah, right. And it may not be as big a deal with Alexa because I don't think people are generally speaking forming intimate relationships with these digital assistants. But if you think about having a very, very close, intimate, personal relationship with a chatbot, that seems to give it a lot more power over your behavior, your decisions. Because, I mean, close friends and family members already do have that kind of power over us. That's part of what makes us vulnerable in these caring relationships, right? So a lot of people feel compelled to, for example, vote for certain political parties
Starting point is 00:27:33 because of family or friends. So could your chatbot influence you to vote for certain political parties or support certain ideologies over other ideologies? Yeah, I think they could. I don't have any evidence that anything is doing this, but that's a concern I have kind of projecting into the future. And of course, they're not necessarily going to be recommending that you change your behavior in ways that are beneficial to you so much as beneficial to the bottom line of these large
Starting point is 00:28:01 tech corporations. That is absolutely terrifying. It's deeply, deeply concerning. Jill, thank you so much for this. This was incredibly interesting, and I hope that you'll come back again soon. That would be great. Thank you so much. much. Alright, that is all for this week. Frontburner was produced this week by Matt Alma, Allie Janes, Matt Mews, Derek Vanderwyk, and Ben Lopez-Steven. Sound design was by
Starting point is 00:28:38 Sam McNulty and Marco Luciano. Music is by Joseph Shabison. Our senior producer is Elaine Chao. Our executive producer is Nick McCabe-Locos. And I'm Jamie Poisson. Thanks so much for listening, and we will talk to you on Monday. For more CBC Podcasts, go to cbc.ca slash podcasts.
