Modern Wisdom - #287 - Sven Nyholm - Are Sex Robots And Self-Driving Cars Ethical?
Episode Date: February 25, 2021
Sven Nyholm is an Assistant Professor of Philosophy and Ethics at Utrecht University. Robots are all around us. They perform actions, make decisions, collaborate with humans, be our friends, perhaps fall in love, and potentially harm us. What does this mean for our relationship to them and with them? Expect to learn why robots might need to have rights, whether it's ethical for robots to be sex slaves, why self-driving cars are being programmed to drive with human mistakes, who is responsible if a self-driving car kills someone and much more... Sponsors: Get 83% discount & 3 months free from Surfshark VPN at https://surfshark.deals/MODERNWISDOM (use code MODERNWISDOM) Extra Stuff: Buy Humans And Robots - https://amzn.to/3qw9vbp Follow Sven on Twitter - https://twitter.com/SvenNyholm Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/ To support me on Patreon (thank you): https://www.patreon.com/modernwisdom - Get in touch. Join the discussion with me and other like minded listeners in the episode comments on the MW YouTube Channel or message me... Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/ModernWisdomPodcast Email: https://www.chriswillx.com/contact Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Hello wonderful people, welcome back. My guest today is Sven Nyholm,
he's an assistant professor of philosophy and ethics at the Eindhoven University of Technology,
and we are talking about whether sex robots and self-driving cars are ethical.
Robots are all around us. They perform actions, make decisions, collaborate with humans,
be our friends, perhaps fall in love, and potentially harm us. What does this mean
for our relationship to them and with them? Expect to learn why robots might need to have rights,
whether it's ethical for robots to be sex slaves, why self-driving cars are being programmed to drive
with human mistakes built in, who is responsible if a self-driving car kills someone and much more.
Being honest, I've thought long and hard about so many of the questions that I got to bring up with Sven today.
I find ethics particularly around emerging technologies so fascinating
and I really hope that you enjoy it as well.
I'm going to put a warning out there. We do get into some discussions around child sex robots and the ethics of that.
If that is the sort of topic which is going to make you uncomfortable, please go and enjoy one of the other 286 episodes which are available on the show.
I thought I would put that out there. It's a fascinating and slightly uncomfortable conversation to have.
It really challenges our principles around: what does it mean to
have sex? Can we divorce the act of sex from the meaning behind it? I would really be interested
to hear what you think. Leave a comment on the YouTube channel or just get at me at Chris
WillX. Also, please don't forget to press subscribe. It is the best way that you can support
this show. Not only does it mean that
I continue to find bigger and better guests to deliver to you, but it creates this family of
curious, radically sensible humans, all of whom are improving themselves by having conversations
with fascinating people. So open your podcast app and press that subscribe button. But for now, it's time for the wise and wonderful Sven Nyholm. Welcome to the show.
Thank you.
Pleasure to have you here, man.
Why is the relationship between humans and robots interesting?
Oh, okay.
Well, it's interesting for a bunch of different reasons.
I mean, one reason is that people are, in a certain sense,
not well prepared to relate to or respond to robots,
because our brains, our psychology, developed
before we had any robots, before we had any AI.
I mean, it developed during human evolution
for hundreds of thousands of years.
And then suddenly, we have this enormous jump
in technological development.
And now we have robots, we have AI,
we have all sorts of interesting technology.
And but we are responding to it with the brains,
the human psychology that developed during this long time.
So that sometimes means that we respond to things
that look or act like humans, which
some robots do. Sometimes in humorous ways, sometimes in ways that might be dangerous for
us, but very often in fascinating ways. And so that's one good reason, I think, to think
about the relationship between humans and robots.
So our current mental makeup is unsuited to interacting with robots. That's the basic sort of foundation.
Yeah, I mean, so we're basically primed so that with anything that moves seemingly of its own accord, in an apparently intelligent way, our brains will think, okay, this is some sort of agent.
It's an animal, it's a person, and our human social attitudes are triggered.
This can happen at the same time as we're thinking to ourselves, it's just a robot, it's just a machine, it doesn't have any feelings, doesn't like me or dislike me. But nevertheless, emotionally, we respond to the entity as if it's another person,
I mean, not all of the time, but the interesting thing is that this is
not just true of lay people, but also experts.
They talk about robots as if they have a mind, as if they have desires, beliefs.
People will say about a self-driving car, for example, I'm thinking of a self-driving
car as a kind of robot, that it wants to go left or right,
it has to decide what to do. So we have this tendency to anthropomorphize, as people say.
I mean, so attribute human-like qualities to robots, to technologies.
I mean, you hear people, even before robots, people would describe, you know, your dad's car or your mum's car and they'd say, oh, she's a little bit cold this morning. It might take a little bit of time to start her up. Like, we personify inanimate objects in that way, don't we? Everything. Thomas the Tank Engine has a face. He's a tank engine with a face. He doesn't need a face. He's a tank engine. But nevertheless, we decide to do it.
Yeah, I mean, the face makes a difference.
I mean, just put a pair of eyes on something and then people will feel like they're being
watched.
And, you know, if that thing with the pair of eyes can also move in a functionally autonomous
way, can behave in a way that seems intelligent to us, then again, you're going to have
all your sort of responses that have been programmed into you by evolution over hundreds of thousands of years.
They're going to be triggered even if you think to yourself, okay it's a robot,
someone painted eyes on it, they wanted me to respond in this way, but you can't help it.
And as I said, it's not just lay people, it's experts too. I mean, one of my colleagues, Joanna Bryson, argues that, in her way of putting it, robots should be slaves, meaning that, you know, they're things that we can buy and that we can own, and they're created to be, you know, useful for us. And so, you shouldn't treat any person like this, because that would be a slave. However, in the case of a robot, I mean, on the one hand you're responding to it as if it's a person, but, you know, it's something that you buy and sell. It's been created for human use. And so, I mean, what
she's arguing is that we should design robots in such a way that we don't have these responses.
The problem, of course, being that a robot doesn't have to look like a human, it doesn't have to look like an animal.
I mean, there are interesting stories about military robots. Some of them look like vacuum cleaners or lawn mowers, but they're part of the team.
The soldiers become attached to them.
There was one robot that helped to find bombs in the battlefield.
And of course, eventually it got blown up by a landmine or something like that.
The soldiers wanted to fix that particular robot. They didn't want a replacement, not even a better one.
Eventually it was destroyed beyond repair. And then they gave it a military funeral, they wanted to give
it medals of honor, etc. So you can get very attached to a robot, even though it doesn't
look like a person, like a human. I mean, that robot didn't even act like an animal or
a human, but nevertheless they got really attached to it. So it's fascinating.
It didn't. Two other examples from your book: one is a robot got to meet the Queen of Denmark, and then another one was given honorary citizenship in China or South Africa or something.
Yeah, so the examples are right, the countries are wrong. So the one that got to meet the queen, that was here in the Netherlands where I am. This was a robot called Amigo, which is, I should perhaps say, a medical care robot. Again, this doesn't look like a person. I mean, it does have a head and it has sort of arms. And so the former university I worked for was developing this robot. And so when the queen came to visit, this robot gave the queen a bouquet of flowers and asked the queen, what's your name? And the queen sort of immediately responded by accepting the flowers and saying her name. And I mean, none of the students at the university got to meet the queen, only this robot.
The other one was Sophia the robot and the country was Saudi Arabia.
And so this I mean this has been quite controversial. So this is a robot that unlike the other one
that I just mentioned, it does look like a human, so it has a very human like face. The
back of the head of the robot is transparent so that you can see the electronics in there; the idea is that, you know, no one should be fooled, you can see that this is a robot. But this is one where people really respond in anthropomorphizing ways. So this one has appeared in front of political bodies, the UN, the Munich Security Conference, the British Parliament, I think, and has appeared on The Tonight Show with Jimmy Fallon, took a selfie with Angela Merkel, the chancellor of Germany.
And yeah, I mean, why exactly?
Well, maybe it's the novelty.
It's a humanoid robot.
But yeah, it's been controversial.
It's a very fascinating example of people reacting in this way.
There's a couple of terms that you've used so far, talking about it being an agent, it having agency. What are the premises that we need to understand before we can
get into this conversation about robot human ethics?
Yeah, yeah. Okay. So those are some technical terms indeed. So agency is something that philosophers, such as myself, we love to talk about all the time. I mean, of course, one problem is that others, such as software developers, and, I mean, people developing medicines, sometimes talk about what's the active agent, the active ingredient, and so on.
So, on the most general level, an agent is something that can act or react to the environment
in a more or less predictable, intelligent, seeming,
interesting way.
Of course, there can then be different kinds of agents.
And so, I mean, a very simple, I don't know, insect is a sort of agent because it interacts with the environment in a goal-directed way.
But it can't, you know, have a conversation, whereas, you know, you and I are agents that can have a conversation such as this one. You know, you do something, and let's say I don't agree with it, I might try to hold you responsible, you might defend yourself, thereby exercising a much more advanced form of agency. An insect that maybe bites you doesn't; you know, you might just, I don't know, kill it, and immediately not think that it deserves any kind of chance or opportunity to sort of explain itself. I mean, in the class of human agents, you have anything from infants, I mean, they can't
really do anything, I mean, but they're learning.
Over time, they become more and more advanced agents.
And so one question that arises is, could a robot be some form of agent?
I mean, okay, maybe before we get into it too much, but that's the idea of agency.
Like, the ability to sort of interact with the environment in a goal directed way, perhaps being able to talk or converse with other agents,
perhaps being able to take responsibility for what one is doing, being able to make decisions,
make plans, and so on and so forth.
What are the other words that we might encounter over the next 50 minutes or so that need defining
before we get into it? Yeah, I mean, I already mentioned the word anthropomorphize; to anthropomorphize something is to attribute human-like qualities to it. We kind of already explained that,
but that's a key word. I mean, maybe we should say something about what a robot is. I mean, obviously, it's an everyday term, and as it happens, the term robot was introduced 100 years ago, in a play. So this is a term that, unlike artificial intelligence, is not a term that scientists came up with. Robot is a word that comes from a Czech word, and I'm not going to attempt to pronounce the Czech word, but it's a Czech word that's been sort of changed into a noun. It was a verb meaning, I don't know, to perform forced labor or something like that, and then it was turned into a noun in a play about robots, about artificial human beings that are created to serve humans. That's the origin of the term. I think these days, I mean, when we think of the
term robot, we think of maybe like a metallic human-like shape, that would be the paradigmatic
robot. If, however, we look at sort of real-world robots that are useful to anyone, that people actually are interested in in terms of buying and selling them, it would be maybe something like a robot vacuum cleaner, or a self-driving car, I mean, that's a very hyped robot.
So a lot of these functional robots, I mean robots in logistics warehouses moving boxes around,
they don't look like humans, they don't look like the paradigmatic robot out of science
fiction, like C-3PO in Star Wars.
So those are two different kinds, I mean the silvery metallic one, the ones from real life
that look like boxes with arms, etc.
And then there's robots such as Sophia, the one that we already talked about,
I mean, made to look like a human,
made to act like a human.
Like, what do all these things have in common, you might ask?
Well, okay, so here, people that work on this,
sometimes they don't even want to give a definition
because there's so many things that we mean
when we talk about robots.
But if one is, so to speak, forced to give a definition, people sometimes say something like: it's a machine with some degree of artificial intelligence (that, of course, is already a technical term) that also has some functional autonomy, another technical term that means that the machine can operate on its own for some period of time without direct human intervention.
And it's basically a machine that can do something that seems to be intelligent. That's often what people mean by robots.
I mean, sometimes people talk about what they call the, let's see if I can remember, the sense-plan-act definition of robots. It can sense the environment, it can plan a response, and then it can carry out that response, take action.
That's another definition people sometimes use.
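(As an aside for readers who like to see that idea made concrete: below is a minimal, purely illustrative sketch of a sense-plan-act control loop. The class, sensor readings and actions are invented for illustration only; they are not taken from Nyholm's book or from any real robotics framework.)

# A minimal, hypothetical sketch of the "sense-plan-act" loop described above.
# The class, sensor values and actions are placeholders for illustration only;
# they do not come from any real robotics library.

import time


class SensePlanActRobot:
    def sense(self):
        # Read (imaginary) sensors and return an observation of the environment.
        return {"obstacle_ahead": False, "distance_to_goal": 3.0}

    def plan(self, observation):
        # Decide on a response given the observation.
        if observation["obstacle_ahead"]:
            return "turn_left"
        if observation["distance_to_goal"] > 0:
            return "move_forward"
        return "stop"

    def act(self, action):
        # Carry out the planned response (here we just print it).
        print("executing:", action)

    def run(self, cycles=3):
        # Functional autonomy: repeat the loop for a while without human input.
        for _ in range(cycles):
            observation = self.sense()
            action = self.plan(observation)
            self.act(action)
            time.sleep(0.1)


if __name__ == "__main__":
    SensePlanActRobot().run()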
But, I mean, from my point of view,
I think it's better to just talk about different examples
of things that people call robots and then ask,
I mean, for example, do they have agency
of some interesting sort?
Do people anthropomorphize them? Is that good? Is that bad? And so on. Yeah.
I mean, even if we roll the clock forward, nanotechnology, probably over the next 100 years, is going to be big. And what do people say? Tiny little robots. That's what they call them, right? So, yeah, from the grand scale to the smallest, the definition and the boundaries of what it means are going to continue to get blurred. So you lay out an argument quite early on in the opening
chapter, your argument. Would you be able to take us through that? Yeah, so this goes back a little
bit to what we were talking about early on, about how people respond to robots. Again, we are equipped with a brain and a mind that developed over hundreds of thousands of years. That's sort of a multipurpose organ or capacity that we have that to some extent has prepared us to build robots. I mean, we are creating them ourselves, but it doesn't necessarily prepare us to respond well to robots. And sometimes we will become maybe too attached to robots. We will trust them too much.
One part of our mind will say it's just a machine. It's not intelligent, where the emotional part of
our mind will get attached to the robot, we'll trust it too much. We will build technologies that have robotic and other elements that will polarise us online, etc.
We have all these problematic responses to technology.
I mean, online polarisation, that's one thing, it doesn't have too much to do with robots,
but that's just another example of how we react in sometimes funny, sometimes
dangerous, sometimes not very nice ways to the technologies that
we have. And so we face a kind of choice, I argue: either we try to change the technologies so that they are better sort of suited for us, with our human nature, and typically that's the right response,
because why should we change ourselves
to make ourselves more adaptive to technologies?
But in some cases,
it may actually be worth also asking,
should we somehow try to change the way that we behave
and the way that we think so that we are better adapted
to interact with technologies such as robots and AI.
I mean, for our sake, I mean, maybe in the future for the sake of the robots,
if they get intelligent enough, I mean, some people already talk about the idea of
maybe robots should have some sort of rights, if they're intelligent enough, if they awaken sort of social responses in people.
So, the idea is that for our sake, we have to do something because we are exposing ourselves to so many risks
when we are creating these technologies.
And then two of the most obvious things we can do
is to try to change the robots,
to make them better adapted to us,
or somehow try to change ourselves,
to make ourselves better able to interact with robots.
But in order to avoid the risks we are creating for ourselves,
we have to do something or other.
So I'm exploring both options in this book that we're talking about.
Sometimes, as I said, the most obvious thing is to change the technology so it's better suited for humans, but that might also make us miss out on some of the benefits that we want.
And I don't know if we want to jump into any particular examples at this point.
Yeah, yeah, throw some.
I mean, one that I spent quite a bit of time on and we already mentioned is the self-driving
car.
So this is, you know, why do we want or need self-driving cars?
The typical answer is that people are not super good at driving.
We drive sort of well enough that we don't, you know,
crash most of the time.
But a lot of the time we do crash, it's very dangerous.
We're also driving in energy-inefficient ways.
We're using up much more resources than we need to,
because we're accelerating in a kind of quick way
and not braking gently, et cetera.
So one can envision a more ideal and more optimal form of driving.
And that's where the self-driving car comes in.
It's supposed to be a car that's better at driving than a human, drives in a more environmentally friendly way, killing fewer people along the way, so to speak. And so it's a more optimal type of driving agent, if you will, to use the
terminology again. Now, the problem is that if you have self-driving cars and human driven cars
on the road at the same time, you get a kind of coordination problem because humans expect
cars, self-driving cars to behave like human driven cars. And so there have been a lot of, I mean, mostly minor crashes.
What typically happens is that people drive into self-driving cars because they think that
they're going to be accelerating more quickly, drive more aggressively, but the average self-driving
car today is sort of programmed to follow the rules, like to the letter, so to speak,
like never speed, never drive aggressively,
et cetera, but humans do all of those things. And then some people have suggested, I mean,
I went to one conference with people advising the Dutch government about how to develop our sort of self-driving car research program here in the Netherlands. One person said, well, we need to adjust the self-driving cars so that they drive like human drivers
do.
So they need to inherit our bad driving habits.
They should speed, they should drive aggressively, etc.
But then in a way, you would take away all the benefits that are supposed to be there
with self-driving cars.
And they should drive less aggressively, keep following the traffic rules and not drive like humans.
And so this seems to be one of these cases where you do have an interesting choice.
Should we try, like if we want to drive at the same time as this, also self driving cars
on the road, should we adapt ourselves in our driving to these robotic cars, self driving
cars, or should we make them adjust themselves to us? I mean, probably the best answer is some sort of compromise, where both are adjusted to each other, but on the other hand, it does seem that it would be good if we drove in a safer way. Of course, people differ a lot in how optimistic they are. So some people say, within five years, we're going to have fully automated, super-safe cars, and we're just going to save the hundreds of thousands of lives that are lost to human drivers every year. Others say, well, actually, it's really quite hard
to develop a fully self-driving car that
would work in all kinds of traffic conditions, in all weather, that is able to, you know, interact with all kinds of environments, anything crazy that people do on the road. So the experts differ in how safe they're going to be.
But if we do accept the premise that eventually they are going to be safer than humans, then
it would be strange in a way to say, well, let's make them drive like humans so that we
can have them adjust themselves to us. So that seems to me to be one case where I plan to investigate: are there ways in which we can make humans drive more like robots?
I'll tell you an interesting example that I remember hearing last year on a podcast. Someone was talking about the social habits that were being learned from people using voice control on devices like Alexa and Siri, and they were saying that there was a fear that young children and older adults would begin becoming
less polite and using fewer social norms when talking to people,
because you don't say, hey Siri, please turn on the light. You say, hey Siri, turn on the light.
And when you then port that behavior back across into the social world, into the human world,
you actually end up with it being very misaligned.
Yes. So I mean, that's just one example of many where people worry that the robots would
sort of inspire certain behavior on the part of the humans interacting with them and then
that behavior would then carry over to humans. And I mean, another example which we might
talk about would be sex robots. There, one of the biggest criticisms that has been raised against them from a sort of feminist point of view is that people will start treating sex robots made to look like humans in a very objectifying way, in a rude way, not at all in the sort of nice way that we would like sex partners to treat each other. And then that attitude will be carried over to humans even more than it is today, so that people will objectify each other even more than they're already doing.
So that's a different example than the Alexa or Siri or whatever, but it's the same sort of argument
that we will learn a certain behavior by interacting with these robots or other technologies
and then that behavior will sort of
carry over into our interaction with human beings.
If a self-driving car kills someone, who's responsible?
Yes, that is another topic where, I mean, actually here too we have a bit of a technical term, responsibility. Of course, it's an everyday term. We hold each other responsible. We ask who is responsible. We sometimes don't know exactly what we mean, or at least we don't know what the conditions are that guide our intuitions, our intuitive judgments about these things.
A lot of philosophers say, if I'm to be held responsible for something, I first have to be able to predict what's going to happen when I act, I have to understand my environment, you know, what I'm doing. If I don't know what I'm doing and I don't know what's going to happen, then in a way I have an excuse for not doing the right thing.
So that's sort of one condition.
Another condition is I should be able to control what I'm doing.
If I lose control, say I come in there and put some sort of drug in that drink you have there, and you sort of go crazy and start doing strange things.
You can say that you lost control of yourself
because you were drugged or something like that.
So you should be able to know and understand
what you're doing and you should be able to have some control over it. That makes you responsible.
Now, if I am using a self-driving car and I am not a very technical person, let's say,
I don't really know how it works. I mean, I can say, please take me to the grocery store, and as the car does that, it, you know, hits and kills someone. Am I responsible? Well, I didn't really understand what was going
on. I had no direct control of it, because I said, okay, take me to the grocery store,
but then everything that happened was done by the computers, the car, the artificial intelligence,
the whatever technologies are involved. So it would be strange to say that I'm responsible for what
happened. Now there are complications, of course. Let's say that I'm the owner of the car and I
maybe sign a contract saying that if something bad happens, then I should be responsible. Actually
Tesla, they have, they don't have fully self-driving cars, but they have something that's called autopilot,
which is sort of a certain form of automation.
And so they do have a contract with their customers and users of this automation that if you
engage the autopilot, then you are responsible.
Tesla does not take responsibility for anything that happens.
So this is one way of solving the problem. We just make a contract.
A lot of people have responded to this particular type of example by saying, like, well, actually, Tesla built the car. They say it's safe, and they are benefiting a lot from having people buy it, and actually it's pretty expensive to get this upgrade to the autopilot feature.
So since they are benefiting, maybe they should be held responsible.
So that's another kind of answer. The first answer was, you know, whoever signed a contract agreeing to take responsibility. Another answer would be, who benefits the most from the existence of this technology? Maybe Tesla in this case. Another kind of answer that maybe possibly would seem more fair. I mean, of course, these things might align. So maybe I signed the contract and I benefit the most.
But you could also ask, you know,
who has the ability to sort of update the technology
to monitor it and see what it's doing
to maybe stop using it if it turns out
that it's not very useful to begin with.
And you can create kind of a checklist.
And the problem is that sometimes maybe one person can update it, another is monitoring what it's actually doing, a third person is in charge of stopping the program of using this technology, so to speak. So we do get what is
sometimes called responsibility gaps. That means that there's a feeling or a sense, an intuition, that someone should be held responsible, but a lot of the conditions that we typically think should be fulfilled in order for someone to be responsible, whether it's control, whether it's knowledge, whether there's a contract, etc., either they're not fulfilled or different people each live up to different parts of the criteria. And so that's a bit of a problem.
Yeah, it's so interesting. I went on to the MIT website, the ethics, the car trolley problem
thing. Yeah, the moral machine website. Yeah, that's it. So anyone that wants to feel uncomfortable for 10 minutes,
go on to just Google the Moral Machine and an MIT website
will come up and you do a little quiz and you go,
you choose left or right, basically you choose who you want
to kill.
And you just consistently don't know what the best option
is for that.
Do you have any sense of why people are so uncomfortable around autonomous
cars, even if in the aggregate they would save lives? Okay, yeah, good question. So let's say that
they save thousands of lives every year, but they will kill a few people because any technology
will stop working sometimes. There will be some tree falling over.
And you can't have any technology that's moving fast
and is heavy, and that is 100% safe.
It's impossible.
So there will sometimes kill people.
But as you said, we can assume that they
would just save a lot of lives.
And so on aggregate, they might be
doing better than human drivers.
Even so, the idea of being killed by a machine
seems worse to people than being killed by a human.
Of course, it's not nice to be killed by a human.
However, we also have another impulse.
So I talked about, we want to hold people responsible.
Another sort of intuitive impulse or feeling
that we have is that we want to punish.
And of course, this goes back to what I talked about before.
Our brains developed over a long period of time and we developed these sort of attitudes
and emotional reactions that we have.
And whenever someone is harmed and it seems that it could have been avoided, we tend to want to find someone to blame and punish. And you know, maybe you could say, I mean, I actually have a colleague who says, let's punish the
self driving car, let's blame the car. But to the average person, this is not a very
satisfying idea because, well, another colleague of mine, Rob Sparrow, he argues that, well,
if you punish someone, you cause suffering to them, to some extent, you make it hard for them, but a car, a robot, they can't suffer, so you can't punish them in the way that you want to when you hold someone accountable in the sense of giving them a hard time.
I mean, some people then come back and say, well, that's one function of punishment, to make the bad people suffer.
Another function of punishment is to indicate to other people what not to do, to deter
them.
And maybe if I punish a self-driving car, I could maybe deter other people from acting
like the self-driving car, but it doesn't seem very plausible.
Maybe you could deter other companies
from developing cars that would behave in that way.
Yeah, but so we again have this problem of who we are going to punish, who we are going to blame. We want to blame someone. We want to hold someone responsible, we want to punish someone.
I mean, you might say, actually, these are not
very nice impulses that people have.
This fact that we have this sort of deep-seated, you know, desire to punish people who cause harm, maybe it would actually be better to try to remove that from our nature. And I mean,
I have other colleagues again. You know, in philosophy we like to explore different options, and so we always have sort of someone whose job is to, you know, go and investigate whether that would be a good idea. And so one of my friends argues that actually we should try to take out this retributive intuition, this wanting to punish; that would be better.
I mean, I guess my response would be that perhaps it would be better, but also, good luck.
I mean that's it's not so easy because it's very deeply ingrained into our nature.
So, I mean, I don't even know where you would begin.
Some people would say, well, we do find that some of the medicines
and drugs we take for other things that sometimes change our emotions
and our responses.
I mean, I believe that you talked with a friend of mine, Brian Earp.
He's interested in exactly this topic.
And one of the things that got Brian interested in, whether we can actually use drugs to control
people's love lives and emotional lives, is that it's been noticed that drugs for other
things such as depression, etc.
They sometimes affect our feelings and our attitudes, how trusting we are of others.
And maybe if we discover some sort of side effects of some drugs that would make us less
willing to punish or eager to punish people, maybe we can use that after someone has a
crash with a self driving car or is killed by a military robot or something like that to
sort of think about this, you know, what happened in a way that doesn't involve wanting to find someone to punish.
Could be.
Could be one way of going.
I don't know.
Tesla is going to have to supply you with some supplements, an annual supply of tablets
so that you can do that as well.
You're so right. The, um, whole principle for why we have friendship and what it means to have rivals and what it means to have reciprocal altruism and kin selection and all of that, it's the foundation of what makes us human, that social element, right? And yeah, to deprogram that, I think you're asking an awful lot. But on the flip side, there is something that feels awfully unfair. I mean, it's unfair to be run over by anybody, but it feels oddly unfair to be run over by a robot car.
But that being said, I imagine that when cars first came out in the early 1900s,
there would have been complaints around, well, they're moving so quickly, look at how many people they're going to kill.
Horses would have been a much better solution.
Well, the horses on the road, yeah, they make a mess. But they go slower. There's going to be fewer
accidents. You want to put these cars on the road that's going to cause more accidents.
Or before the tube and the London Underground were made, it's like, well, you know, if we
allow people to get on the tubes, some people are going to fall in front of the tube and
they're going to get killed, they're not going to get killed when they're walking on the
street, especially if there's only horses and carts upstairs. So I wonder how much of it is a status quo bias, just simply people feeling uncomfortable with getting out of inertia and with change. I feel like that probably accounts for a lot of it. But the reason that we're so good as a species is that we're adaptive, right? And we are incredibly quick at adapting. So when the new thing happens, we'll probably end up adapting to it.
The beauty or the challenge, I suppose, that we have at the moment is that we can step into this
programming globally, the technological programming, the societal programming, the cognitive programming.
We can say, okay, we have the opportunity to choose what sort of a direction we would like to go down right now, before we actually get there and just adapt to whatever the hell's going on. We can make the choice of the direction that we think would be optimal.
Yeah, I mean I do think that there's a bit of a development in this direction.
I mean, so typically what has happened in the past is that technologies are developed, put out into society, and then, you know, later problems are fixed. But then at that point in time, I mean, it's typically hard to fix things, because, you know, the technologies are getting ingrained into everyday life and they sort of almost recede into the background and we don't even think of them as technologies anymore. We just think of them as part of everyday life. And even what gets called a technology, you know, a robot or artificial intelligence, tends to be something that's new and unfamiliar.
But once it's taken on, once it's part of the sort of human landscape of the world we're moving in, then it can be pretty hard to change it. Because, I mean, just think of cars. I mean, our cities are now, you know, totally planned around, you know, where people have to park, where they drive, etc., and where you can walk and where you cannot walk. I mean, that's changed over time. And so actually, if you look at old pictures of, you know, when cars and roads first came into the cities, I mean, people were walking everywhere, biking in random directions on the road. Actually, well, even in the Netherlands, they still do. But still, I mean, things change over time, but then it gets ingrained and becomes part of this, just a backdrop of our lives
and it's very hard to change. I do see more development now that there's so much discussion
about risks and fears related to AI and robots and things like that. So there's a move towards
trying to put the ethical reflection into the design process itself.
I mean, that's part of the reason why MIT has that website with the Moral Machine.
They're trying to find out people's attitudes about self-driving cars before we have a lot of them everywhere.
I mean, some people have responded that, well, you know, the self-driving cars are not going to be choosing between
killing two grandmothers or one grandfather and two dogs.
You know, that's the sort of dilemma that you get on that website that you've talked about before.
They're going to face very different kinds of challenges, for example, determining whether something is a person or a branch or something like that.
The image recognition should be good enough that a self-driving car could tell if something is,
I don't know, just a shape that looks human
from a distance or whether it's actually a human,
maybe it's some sort of heat camera or something like that.
So they need to know what their environment is like.
And so, do they ever need to choose: I'm gonna drive straight and run over two grandmothers, or go right and drive over three, I don't know, granddaughters, or left and two grandfathers, or something like that? I mean, that might happen every now and then, not very often. But
nevertheless, it's all part of this idea that we have to think about these ethical issues
before we face them. And we have to somehow try to program some sort of ethical, I don't know, compass, if you will, into the self-driving car or into the
technology that we're going to be using.
It seems to me that you hit the nail on the head, and the technology tends to move quicker than the legislation that catches up with it. Governments are these big lumbering behemoths that take forever to do anything, whereas Silicon
Valley can just get a product out and see what happens.
We're seeing that with phone addiction at the moment. Everybody uses their phone too much because the tactics that are used by
apps race to the bottom of the brainstem and they're able to manipulate you in ways that
perhaps if we'd known, if we were omnipotent and had known in advance, we would have said
actually, let's not have that feature. Let's not allow infinite scroll. Let's not allow
auto play. Let's not allow bings and bongs, and TikTok generally. But it's out there and now we need to play catch-up.
One of the things that makes me a little bit more hopeful, at least for the self-driving
car analogy, is that because the outcomes are so grave and newsworthy, I think that a lot
of the companies are going to err on the side of caution.
You do not want to be the company that's killed two people in the same city in the same
week.
You just don't, because it's going to be so bad for PR.
But the socialized costs, or the externalized costs, should I say, of the technology being wrong with regards to self-driving cars are so obvious and newsworthy compared with the more slippery, difficult-to-define technologies like social media and stuff like that. Like, you don't really see someone's mental health degrade or their sense of self-worth get worse over half a decade.
You know, like, it's a lot harder to define, and that person themselves... you know if you're dead or, well, you don't know if you're dead, you're just dead. But you know if you're dead or alive.
Your family would know.
Precisely, but whereas your family perhaps doesn't know the arrow of causation between you spending too much time on TikTok and you wasting your life.
So I've wanted to talk about this for ages. Are sex robots ethical?
Okay, yeah, from one thing to another. Yeah, I mean, just real quick, maybe, about that case of self-driving cars killing people and the company that doesn't want to be the one to kill two in a week. I mean, it is interesting to compare it with space travel. The first time that
someone went to the moon, it was a world event.
Like everyone was in front of the TV watching very carefully. The second time,
smaller TV audience. The third... I think, I don't remember how many times they've been to the moon, but I think it's maybe less than ten, I mean five, maybe, something like that, but each time it was less of a thing. And now when people travel to space, I mean, it's not even on the news.
I mean, sometimes when Tesla has a new rocket that they're trying out that crashes, that's newsworthy. I'm afraid it could just happen today, as we recorded this.
Anyway, but the same thing could happen, I fear, with self-driving cars crashing into and killing people, that it gets normalized. It's going to be normal. Okay, now it happened again.
So I mean, I would agree with you that at the moment it's, you know, world news when it happens, but we're probably going to see the same sort of development, that it's going to be normalized, and so they're going to have more leeway to sort of kill people.
So with all of this stuff, we just need to get out ahead of it, which is obviously why we need people like yourself, who essentially just ask a million different permutations of the same question.
Indeed.
That's why we require ethicists.
Indeed.
And that does take us back to the question that you're really wanting to discuss, namely, are sex robots ethical or not.
Because this is a good example of a question where I mean there are prototypes that are being created and there are sex dolls that have some features
that they can move a little bit
and that they have a sort of chat function
so you can talk with it in the way that you can talk
with Siri or Alexa that we mentioned before.
But certainly the sex robot that one maybe imagines when one thinks about just that concept, something that's very intelligent-seeming, that can really behave in a very human-like way, that doesn't really exist yet.
However, there are plenty of people trying to develop them.
I mean, is there a market?
Well, there's clearly a big enough market
that there are people that are developing them and then hoping to be able to sell them.
So there seems to be an attempt at supply
and there seems to be some amount of demand.
But they're not quite there yet.
And so here we have another opportunity
to sort of start talking about the ethical side of things
before it's a big real world problem.
I mean, there are, again, there are already prototypes.
For some of them, there's even discussion about whether it's a real company, or sort of whether you can really buy them at all or not. I mean, there's one, there's a website that's called TrueCompanion.com, and they say on the website that you can buy a sex robot called Roxxxy, spelled with three X's, that can be, as the website says, a loving companion that can know your name, get to know you, have orgasms, and so on and so forth. I think you can put down sort of a, you know, make a payment and order one, but it's one of those things where it's very hard to find people who say, yeah, I bought one and I enjoy it, and, you know, please go ahead and interview me about my experiences with my sex robot. So, that's not to say that there aren't people like that. I mean, there are.
One person that I discuss in my book, and that I've almost become sort of friends with over time, calls himself Davecat. He lives together with, not sex robots, but a few sex dolls. And he has done a lot of media appearances, you know, on TV, podcasts, et cetera, where he talks about how he lives together with his sex dolls. And so there are people like that that you can interview and talk with about, you know, their experiences with these products, but mostly it's hard to get beyond anecdotes and imagining what could happen. That is what we are doing: like you said, we're imagining
just scenarios and asking which seems best from the point of view of values such as consent,
objectification, will this make people less good at interacting with human partners,
etc. etc. And like I said earlier, really the main issue that people worry about so far is that
sex robots will sort of inspire people to have strongly objectifying attitudes towards sex partners
to make their empathy go away because there's literally no mind or subjectivity there on the
other side for you to be sensitive to. One worry would be that one keeps interacting with this robot and then stops caring about the feelings of the other, whether they consent to what you're doing, etc., and then that you would sort of carry over that behavior to a human.
Interestingly though, if you take someone like Davecat, this person that I talked with again, I mean, if you look at the way that he talks about his sex dolls, it seems to be very respectful. He says in one of the clips that I've watched of him when he's been interviewed, you know, I wouldn't just want to treat her, the sex doll, like a thing; to me, she's a person. So certainly it's possible, on the other hand, to have people who would, and like
we said before, people have these social attitudes towards robots.
You could then ask, well, what if we could design sex robots in such a way that they would actually not trigger sort of objectifying, bad attitudes, but would actually kind of stimulate, train people to maybe even become better at interacting with other humans?
You could imagine someone who feels uncomfortable about their performance in that domain and who wants to train themselves, maybe get to know human anatomy better; they're embarrassed about doing that with another person, but perhaps a sex robot could serve as a kind of educational tool for them, a sort of teacher.
I mean, this is something that others are also thinking about. Not only what are the bad possible
consequences and risks, but also what are the potential benefits. I mean, take another case. Let's say that someone is the victim of sexual assault or rape or something like that, and that they feel extremely uncomfortable around human sex partners, but they want to get back into the sexual world, so to speak. Maybe a robot that would do whatever they want could be a way of becoming comfortable again with having sexual interactions with other agents, and that could be a sort of stepping stone towards returning to having sex with humans. That's one possible thing that people say; it would be an argument in favor of having them. Another would be, well, let's say that there's someone
who really can't find a sex partner.
Maybe people around them, they don't find them attractive.
They just have some sort of an impossible personality,
I don't know.
So they really can't find a sex partner,
but they still have a deep longing
to have sexual interactions.
Well, maybe for them, a sex robot would be better than nothing. That's another argument that I've seen for why sex robots would be ethical rather than unethical, and actually ethically required, that we should try to develop them. So it certainly seems that there are arguments on both sides.
To me, I struggle.
I haven't found a compelling argument that says it's unethical to use sex robots.
Not personally, I just haven't been convinced by any of them yet.
I understand that we don't want to train people to go out and behave in bad ways, but those
externalised sort of costs, I don't think that they're going to happen
all that much. I wouldn't be too concerned about it. And outside of that, I don't really see
anything that's that compelling to, to stop it from happening. However, there will be a lot of
people listening who may disagree with me. So I would be interested to see in the comments below what everybody thinks.
Well, let me give you a case that's maybe the most difficult case.
Hit me.
Yeah, yeah, well, let's see if you can stay unconvinced.
So, sex robots that are made to look like children. That would be the case where a lot of people feel that it crosses the line. So they might say, okay, if it's a sex robot that looks like an adult human being, that's one thing, but one that looks like a child, that's another thing.
Two of the cases that people have been concerned about are ones that are made to look like, I don't know, animals, a dog, let's say, or, especially I think, the more serious example, the sex robot made to look like a child.
There too, though, you have people who argue, not implausibly, I would say, that, well, let's say that someone is a pedophile, and they do recognize that it's wrong, they think that it's wrong to have sex with children, and yet they can't control it, I mean, they can't, there's no conversion therapy, let's say. So the only physical outlet, so to speak, would be to have sex with either a child or a robot looking like a child. Maybe it's ethically good if someone goes for the robot rather than the human child. So it's been suggested it could be a kind of therapy tool. Some would then say, well, nevertheless, there seems to be something sort of inherently repugnant about it, there's a sort of moral taboo that we should respect, et cetera, et cetera.
So this would be the case where maybe... I don't know how you feel about that case, but that's one where...
Yeah, so I've, and this is going to sound so odd, I had a three-hour conversation about the ethics of sex robots, and at least half of it was talking about this example as well, but it was on a plane out to Dubai. So there was only one person in the vicinity that could obviously hear and understand English, and this poor girl must have been thinking, what am I listening to for the entirety of this journey?
My mind... ever since I was seeing a girl at university who was doing medicine and was very, very much into medical ethics, and she completely changed my view of how I saw pedophilia, the difference between pedophilia and child molestation. And that's a distinction that, sadly, because of the way that we use those words, they're used interchangeably, but they're not the same.
The first sort of thing to understand was that people do not control what they are attracted to. They have no conscious control over that. This has been shown in fMRIs and also in arousal responses: you show someone every sexual situation under the sun not involving children, nothing happens; you show them something with children and everything happens, and the reverse happens too. People can't control that.
Okay, so what that means is that there are some people who are brought into this world cursed, knowing that for their sexual proclivities they're disgusted by society; they're terrified to reach out for help, all of these sorts of things.
Now, if it would appear, if we find out upon bringing in sex robots, that the externality of someone using a sex robot seems to be that the behavior bleeds out into the real world, then we have quite a big problem.
If the reverse happens, and it seems that by using the sex robot,
we actually get a decrease in that behavior
out into the real world.
And I don't know what is more likely.
I'm not a neurologist.
I'm not a behavioral psychologist.
I don't understand sort of what would tend to happen there.
But you can actually imagine a world in which you would reduce human suffering by basically giving someone what is ostensibly a vibrator or some sort of, you know... it's an odd question to ask, but what's the difference between a very small vibrator and a small child sex toy that would be used by a straight female who had that sort of a proclivity?
Like, really, we're just getting back to anthropomorphizing,
we're bestowing some sense of agency onto this being,
especially if you're talking about a sex doll,
which literally has no inner workings at all.
It's fascinating. Like, I think about this far more than I'd care to admit, because I just find these things where the precipice on both sides is incredibly steep, I enjoy thinking about them, because it makes me be very rigorous with my thinking, and I hope that everyone that's listening kind of feels the same.
This isn't here to make us feel uncomfortable.
It's here because it's an interesting and rigorous discussion around something which obviously has some pretty grave ethical implications. What's your opinion? Do you have
a personal stance on this? Well, I mean, this is actually something that I'm thinking about at the
moment and working together with a colleague on an academic article about this. And so,
so I don't have a settled opinion about it yet because we're
grappling with this issue. I mean, one difference between the child sex robot and this small sex toy
that you were talking about, that doesn't look like a human, would be the symbolic difference, in terms of: one symbolizes, you know, nothing maybe, as a sex toy, let's say, whereas the other symbolizes a child.
And so, depending on how much importance one puts on symbolism, one might go different directions
here. So, if you think that it's somehow disrespectful towards human children to create and buy and sell a robot that looks like a child that people want to have sex with, if you think that, I mean, that's enough of a problem, you know, out of respect for human children you shouldn't make money, let's say, off of this, then you might have a problem with this. However, let's say that it's not, I mean, it's some sort of therapy tool that's created not for profit, and that it's, you know, highly regulated or something like that, and so there is not sort of a commercial market for it. Then that symbolic argument about making money off of this maybe goes away a little bit and it might become more acceptable.
But I think this is one of the things where you can imagine it being more or less unethical. Let's say that, you know, you come up with this idea: I want to make money, I'm going to start creating and selling, for very big prices, child sex robots. You, I mean, not you, but the person who would have that idea, they seem, at the very least, insensitive in their attitudes.
They seem to be open to moral criticism.
If someone, however, had this idea that, well, maybe you can save one or more children from being molested by creating a sort of therapy tool that can be offered to people with pedophilia, pedophilic desires so to speak, in a sort of controlled setting, well, then you seem less open to the same sort of criticism.
So, during my discussion with Brian Earp, he was talking about people who were pedophiles taking SSRIs to dampen down their libido. The external effect that you get from that, hopefully, if we can roll the clock forward and get the particular therapeutic use out of a child sex robot that we want, the same externality occurs in both situations: that you have less of a predation worry from this particular subgroup.
Right. The difference that I can see is that in one of them, the person actually gets to proceed through
life normally. And I know the people that listen to the show are incredibly balanced and normal,
but I know that there is that emotional response of, well, the freaks should just be locked up.
It's like, oh, you haven't seriously ethically thought about this problem,
and you don't understand empathetically what's going on.
I wonder what happens when you scale this sort of thing up society-wide, because inevitably the loudest voices are the ones that
get heard, and some of those are going to be,
you know, if you were a survivor of child sexual abuse,
finding out that child sex dolls exist,
and maybe this one, maybe this one looks like you looked
when you were... I mean, we're getting into some
very uncomfortable water as we go through there.
So yeah, I wonder where...
But again, let's take that person. If they know that this is done for the sake of this not
happening to other children, and it's not done to make big bucks, you know, then maybe it
becomes more acceptable to them. But nevertheless, of course, they're going to be emotionally
pulled in different directions,
because maybe on the one hand,
they think that, well, if it can help someone avoid
having happen to them what happened to me, that's good.
At the same time, it might seem deeply offensive,
and might seem like some sort of acceptance
and normalizing of something
that was really traumatic and bad for them.
So I can certainly imagine that.
So difficult, man. So many different directions. So messy. You talk about robot rights, and
we mentioned right at the top about making robots slaves. Should we make robot
slaves? Yeah. I mean, some people have responded to that thesis by saying that we shouldn't use that terminology, because it brings up ideas about, you know,
people making others into their slaves, and that whole mentality
should go out the window. I mean, nothing should be a slave, neither a robot nor a person.
But, you know, in defense of my colleague, Joanna Bryson, she never meant to sort of be enthusiastic
about past slavery or anything like that.
The idea was just that since people are going to be owning, buying, and selling robots, and
I mean, that's one feature of slavery, like, you can buy and sell the slave, and the slave
is there to be useful to, you know, to the owner.
The robots are going to have these properties.
And so her suggestion was that
it's best to create robots that wouldn't be morally ambiguous for people,
so that they wouldn't feel a sense of responsibility.
So like, you make the robot look like a box, you know,
it doesn't have eyes, it doesn't sort of generate a sense of responsibility.
That's the best situation, because then we don't have to have these worries about robot rights. However, for some purposes, it might be more efficient to have
a robot that actually does generate those social attitudes. There is one robot that is being developed
for the treatment of autistic children, and the idea is that some children with autism
have trouble engaging with other humans because they find it overwhelming. So if you have
a sort of robot that looks like a simplified human, this has in some experimental studies
been shown to possibly work: the child sort of opens up, and then even turns to the experimenter and points to the robot and says,
look at this, and that's already a nice step forward. For that therapy tool, you would need the robot
to look a little bit human-like, because that's part of the idea, that it should look a little bit
like a human, et cetera. Take that robot, and then let's say that after a day of experimentation,
you take the robot to another room, and then you take out, like, a baseball bat and start hitting it, or you do something else to it that doesn't seem to be very fitting.
That can seem, I don't know, again, even if it's not directly wrong, it can seem insensitive, let's say.
Something that's been developed for this purpose of treating these children, I feel, should
be treated maybe in a more respectful sort of way.
Is that to say that the therapy robot should have some sort of rights?
Well, it's to say rather that, again, out of respect for those children that are involved in this treatment, maybe one should treat the robot in a, I don't know, dignified, respectful way.
And then again, you can go to your example: let's say that I have a robot that looks
like you,
and then I mistreat that robot.
I mean, in a way that can seem like some sort of attack on you.
So maybe, again, out of respect for you,
either I shouldn't make a robot that looks like you, which is perhaps the best option, or if I do for whatever reason,
you know, there should maybe be some limits
on how I treat this robot, out of respect for you.
But this is still just a question of, you know, how can we behave in a way that's respectful towards other humans?
The real question would be, would there be
any circumstances where, out of respect for the robot, you know, you should treat it well in some way?
Well, I mean, even today I saw on Twitter some video about some scientists who claim that they
have created robots that can feel pain. There's another team, not the same one, I think,
a Japanese team led by Professor Asada,
I think he's called, who also tries to create robots
that can feel pleasure and pain,
because the idea is that they can learn
in the way that infants do.
Before we learn language, we learn in an emotional sort of way.
You might ask, does it even make sense to think that a machine, a robot, could feel
pleasure or pain?
If you believe that they are achieving this in their research, and I'm myself somewhat
skeptical, then you do get an interesting situation, because maybe the robot is not very
intelligent, but it has the capacity to feel something in some sense or other. Here too, you might
start thinking, well, better safe than sorry, so let's not cause too much unnecessary pain to
this robot. Of course, it's dependent on whether you think it makes any sense to say that a machine
could feel pain. Yeah, I think that's where the slippage is at the moment, right? Because inevitably,
you have a reward function in most of these sorts of systems.
There is an outcome that you want.
That's, you know, how AlphaGo Zero worked: there was an outcome it wanted, and it learned.
Essentially, it was like, if you've done this, then great,
this is the sort of direction that we want you to go in.
Now, if I just decide to recategorize that as pleasure and this as pain, okay, but
there is no phenomenological experience that the machine is
going through which causes the suffering, which causes the second-order metacognitive experience
of the suffering itself. To be able to say, as far as I'm concerned,
whoever it is that's on Twitter, "we have created a robot which is able to feel pain" is to say we have created consciousness,
because I don't think that you can feel pain without consciousness.
Like, if I whack a rock with a stick,
the rock and the stick both aren't in pain, because neither of them has consciousness.
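[Editor's note: to make the reward-function point concrete, here is a minimal hypothetical sketch, not taken from the conversation and not AlphaGo Zero or any real robot. It assumes a toy two-state, two-action learner; every name and number in it is an illustrative assumption. The "reward" is just a scalar the designer defines, and relabelling positive values "pleasure" and negative values "pain" changes the vocabulary, not the machinery.]

```python
# Hypothetical toy reward-driven learner (illustrative only): "reward" is just a
# number the designer picks. Calling positive values "pleasure" and negative
# values "pain" relabels the number; nothing here implies felt experience.

import random

def reward(state: int, action: int) -> float:
    """Designer-chosen score: +1 if the action matches the state, else -1."""
    return 1.0 if action == state else -1.0

# Tiny tabular "agent": an estimated value for each (state, action) pair.
values = {(s, a): 0.0 for s in range(2) for a in range(2)}
learning_rate = 0.1

for _ in range(1000):
    state = random.randint(0, 1)
    # Greedily pick the action with the highest current estimate.
    action = max((0, 1), key=lambda a: values[(state, a)])
    r = reward(state, action)
    # Nudge the estimate toward the observed reward.
    values[(state, action)] += learning_rate * (r - values[(state, action)])
    # We could relabel the same scalar, but nothing "felt" changes:
    label = "pleasure" if r > 0 else "pain"

print(values)  # the agent has "learned" which action each state rewards
```

[Nothing in that loop is more than arithmetic on a dictionary of numbers, which is the gap being pointed at between a reward signal and felt pain.]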
Yeah, I'm
unsure around that. For me personally, I think that treating robots
as if they are just a scaled-up version of a MacBook makes the most sense at the moment. Now,
giving them citizenship, you talk about that at the beginning of the book, and you slipped up
at the very, very beginning, a Freudian slip, to say
"who is" as opposed to "it is".
And this gets into kind of, I guess, the through-thread
that we're talking about here:
just how misaligned we are when we deal with
any sort of robot. The more that these robots
can look and act like humans,
the more and more our behavior is going to be modified
toward them, and that's going to have some externalized
consequences for our interactions with other humans.
There are also concerns around whether or not
those robots themselves should have some sort of rights,
some sort of sense of anything else.
For me, I don't think that really is,
well, it might be a concern to think about for the future,
but right now I don't think that it is.
But yeah, it must be, for yourself in this industry right now,
a very interesting
and exciting place to be.
You know, over the next sort of 20 years or so
we're gonna see some insane changes.
Yeah, I mean, part of this is because, as I said, there are scientists who are saying that
they are creating robots that can feel pleasure and pain.
I mean, I share your skepticism about whether they have achieved that goal, but certainly
there are people at universities, they're trying to do this.
They claim that they can do it.
There are people developing sex robots, self-driving cars, all sorts of interesting and seemingly
science fiction-like things, but people are actually doing it, and it's very clear that there are
interesting philosophical ethical questions about it, so yeah, for people like me, it's great.
Sven, thank you very much for today. Humans and Robots: Ethics, Agency, and Anthropomorphism
will be linked in the show notes below. If people want to check out any more of your stuff,
where should they go? Well, I already mentioned Twitter and I think that's a good place. I mean,
whenever I do something like this, I appear on a podcast or I write a book or article, I always put it there to advertise it.
So that's a good place if you want to know what I'm doing.
Perfect.
And it's just @SvenNyholm.
It'll be linked in the show notes below. Sven, thank you so much.
Oh, thank you.