StarTalk Radio - Cosmic Queries – Robot Ethics with Dr. Kate Darling
Episode Date: June 21, 2021

Are robots going to take over? On this episode, Neil deGrasse Tyson & comic co-host Negin Farsad explore our future with artificial intelligence by looking at our past with animals with robot ethicist and author of The New Breed, Dr. Kate Darling.

NOTE: StarTalk+ Patrons can watch or listen to this entire episode commercial-free here: https://www.startalkradio.net/show/cosmic-queries-robot-ethics-with-dr-kate-darling/

Thanks to our Patrons Dino Vidić, Violetta + my mom, Izzy, Jeni Morrow, Sian Alam, Leonard Drikus Jansen Van Vuuren, Marc Wolff, LaylaNicoleXO, Eric Colombel, Jonathan Siebern, and Chris Beck for supporting us this week.

Photo Credit: Harland Quarrington/MOD, OGL v1.0, via Wikimedia Commons

Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.
Transcript
Welcome to StarTalk, your place in the universe where science and pop culture collide.
StarTalk begins right now.
This is StarTalk. I'm your host, Neil deGrasse Tyson, your personal astrophysicist. And this is a Cosmic Queries Star Talk all about AI and the ethics of AI and what it means and what are robots and their relationship with us and our relationship with animals and what does all that mean and how and why and are we all going to die?
Okay.
Negin, you're going to help me, help us figure this one out?
First of all, I'm glad that you started with, are we all going to die?
Because that's the first question on everybody's mind.
Like, let's just lay it out.
That's why we're all tuning in today, Neil.
Thank you.
Thank you. Negin Farsad, this is your, I've lost count of how many times you've been my co-host,
and it's a delight to have you back.
And Negin, you're the host of the podcast Fake the Nation,
which I was delighted to have been one of your guests,
but apparently only once.
That invitation never came back.
I don't know.
You gotta earn it, Neil.
You gotta earn it.
You're also a voice of someone
on a new animated Adult Swim cartoon.
What's the name of that?
It's called Birdgirl,
and it's really fun and really ridiculous.
Are you Birdgirl?
I'm not Birdgirl.
I am Meredith the Mind Taker.
So I go into people's minds
and I can tell what they're doing,
and then I can also change their minds.
So I feel like it's a superpower that I use sparingly,
but I do use it, Neil, so be careful.
Crazy.
And one of my favorite things you've done is your book,
How to Make White People Laugh.
Did I get that title right?
Oh, yeah.
And you get to say that because you're from the Middle East or something,
so you're allowed to address light-skinned people that way?
Yeah, the Iranian-American Muslims.
We get to say stuff like that sometimes.
You know what I mean?
Yeah.
So while I read a lot about AI, I claim no particular expertise.
For that, we had to go to the source.
So up to Cambridge, Massachusetts, and the one and only Kate Darling. Kate, welcome back to StarTalk.
Thanks for having me again.
Yeah, you're an expert in robot ethics. That's just a crazy thing that, like, I don't want to have to think about that, but you know we have to think about that, right? And, you know, human
robot interactions, tech policy, of course, policy is influenced by this. And you have a doctor of
sciences from the Swiss Federal Institute of Technology. Did I get that right? Yep, that's
correct. And I don't, I'm afraid I'm not even going to ask you about this. I'm just going to
read it and we'll move on. You're a caretaker of several domestic robots. Let's move on. I don't want to know. I don't want anyone to know.
Your latest book came out in April 2021, The New Breed: What Our History with Animals Reveals About Our Future with Robots.
And this is a Cosmic Queries, and so our whole fan base is ready. They're at the gates
at the start of the race trying to understand what all this is about. But I want to just lead
off with a few questions. What would you say was our single biggest ethical challenge with regard
to robots, other than they'll kill us all? Oh, it's interesting because I actually wrote the book because I don't think the question whether they will kill us all
is the single biggest ethical challenge,
even though it's the one we always focus on.
There's a bigger challenge than that.
Oh, oh, oh, Negin, what?
The bigger challenge is, like, will they kill us all
but be really nice while they're doing it?
Is that the
bigger ethical challenge? Will they put the forks on the right side of the plate
in the process of killing? Like, what's their etiquette skill? Yeah, there you go. Yeah, that,
you know, I never thought about that one, but that might be the single biggest question.
I never thought about that one, but that might be the single biggest question.
You know, will they have good table manners?
No, no.
I think that there are a lot of ethical questions, actually, and we're at a very unique moment in time because robots have been around for many decades, but they've been kind of behind the scenes in factories and behind walls and in cages. And now they're coming into shared spaces, and we're kind of trying to figure out how to live
with them and what they can be used for and one of the things that I try to do with the book is
move away from this constant comparison we have of robots to humans and artificial intelligence
to human intelligence and these narratives we have about them taking over and replacing us
and I try to push a different analogy, which is animals. And I look at the ways that we've
harnessed animals for work, for weaponry, for companionship, for millennia, and how we've
partnered with animals, not because they do what we do, but because their skill sets are so different
from ours. And so as we move into this future of artificial intelligence and robotics, we should
be thinking of these technologies as a partner in what we're trying to achieve. And if we can do that and get rid of some of the moral panic that we have
then we can start addressing some of the actual issues that I think are at play
that often have to do much less with the technology
than they have to do with humans making choices
against the backdrop of corporate capitalism or oppressive governments
and so it's really all up to us in the end
and not up to the robots to determine the future.
Okay, I have to push back.
I have to push back.
With your permission, may I push back?
Okay, so when we harness oxen to pull the plow
and horses to do other kinds of farming
and we have dogs other kinds of farming.
And we have dogs that sniff out whatever.
So we are using certain talents that each of these creatures possess.
At no time are we relying on any animal's intelligence,
really, not in the way we think of intelligence, right?
I don't go to my dog and say,
I'm having problems with this calculus question. Could you help me out here? No. Okay, stop licking
your butt and help me out. You know, this is funny. I'm just saying. Hold on, Neil. I take
offense on behalf of my Pomeranian who does excellent calculus theorems. Okay, continue.
So, oh, it's a Pomeranian.
So even if it gets the wrong answer, it'll do it cute, right?
Yeah, and shed a lot of fur while he's doing it.
It's fantastic.
So whereas AI in our dreams is smarter than us.
So to say let's partner with something smarter than us feels scary.
And I'm thinking they'll just really rather make us their pets.
And that's the animal robot analogy that you should be exploring.
What kind of pets would humans make for the robots?
See, I feel like that's been explored a lot. And I feel like that vision of the future relies on a very narrow definition of intelligence
and is kind of caught
up in this idea that the artificial intelligence we're creating is like us but smarter. It's only
a matter of time before it is smarter than us and can outsmart us. I don't think that's how
it works and I don't think that's how it's currently happening and that's not the trajectory
we're on because we already have machines that are much smarter than us. We have machines that can do calculus. We have machines that can
beat us at chess and at Go and at Jeopardy and do endless calculations and see patterns in data.
They're way better than us at so many things. And then there are many other areas where we are still
much, much, much smarter than the machines. It used to be that if you asked Apple's voice
assistant Siri to call you an ambulance,
she would say, okay, from now on, I will call you an ambulance because she didn't understand
the context. And Apple probably had to fix that by hand because machines don't perceive the world
or learn about the world or understand the world the way humans do. But even if we could recreate our own intelligence and go along that
path of making it better and better, I just don't think that that's a very interesting path to
pursue. Because rather than recreate what we already have, why aren't we trying to create
something different that we can benefit from? So you say that we haven't used animal intelligence.
I don't think that's true. That's only true if you view intelligence as human intelligence,
because animals clearly have a very different skill set,
a different type of intelligence than humans do.
But they can perceive the world through their senses in ways that we cannot.
And that has been really useful to us for a long time.
And so we've partnered with them and we've used them to supplement our own ability rather
than partnering with them because they can do calculus. And also the point of the book is not
to say that animals and robots are the same or that we should treat them exactly the same.
Obviously, they have very different skill sets as well. But I'm just trying to open up our minds to
more opportunities and more possibilities than just recreating what we already have.
So, Negin, I think the robots at the MIT lab created Kate Darling and made her say exactly this.
Wait, can I say something about...
Exactly what the robots want us to think about the robots.
Yeah. Who's controlling who, Kate Darling? The other thing I want to point out
is that, Kate Darling,
you have the perfect name
for a character in a movie
in which the robots
become self-aware and take over.
Exactly.
And you're like the expert
in the background
who's like,
I've been warning you guys all along
or whatever.
So, Negin,
let's get straight to the questions.
These are all Patreon members.
We changed that rule now.
In order to ask a question, you have to be a member.
And that just keeps the wheels turning
of our entire operation.
So I just want to publicly thank Patreon members for this.
And here's your reward.
All right, Negin, give it to us.
Okay, here we go.
Our first question comes from patron Sean Grossman,
who asks, could robots build a habitat for humans on Mars?
Could robots build a telescope on the moon?
What are the physical challenges of building a robot
that can operate in space?
It kind of bridges the Neil, Kate Darling divide
right there with that question.
We can tag team on this one.
So Kate, why don't you begin?
I mean, what a
perfect example for something that we should be using robots for, right? Anything that's difficult
for us to do, like head to Mars and hang out and build a habitat. We absolutely need machines that
can go to these places where we currently can't and do work for us. So that's a great use case of supplemental technology
in line with how we used to use animals to help us do things.
So, but in terms of the difficulty,
and I'm sure Neil will have a lot to say about this as well,
it is very, very difficult to create robots straight up.
And then to create robots that can go to space,
I am just in awe every day of NASA and people who have built robots
that can actually not only function in space,
but also, I mean, they have to get everything so exact and it has
to be so precise and nothing can go wrong. And, you know, even working at MIT, I see many,
many things go wrong all the time with the technology in the lab. So it's very impressive
what we've been able to do. And there are many
challenges with it. I don't even know where to begin. It's a very challenging job.
Something I've thought about a lot is, I keep thinking, when robots take over, they will
only keep two kinds of humans: the stand-up comedians, because robots don't know how to do
that, and I think construction workers,
because I can't picture construction workers being replaced by robots,
because they're doing such different things, carrying things,
they're making decisions on the spot, and I'm just curious.
So I would wonder how soon, if ever, we're going to send robots to Mars and have the robots build something.
I guess that's what I'm trying to think of here.
Yeah, I think that's right.
Robots can help build something.
They are usually good at helping people do their jobs, but they're not great at straight up replacing human workers.
So it'll probably be a while before we can have robots just autonomously do something.
I mean, we don't even have automated car factories yet.
Elon Musk tried to automate his Tesla factory.
And even in a space where everything's very predictable
and you would think we could have robots just do the assembly line,
he ended up tweeting that humans are underrated
because there's always something that can go wrong.
A screw can fall on the floor.
Something can happen, and robots don't know how to deal with that.
You need a human.
And so I think it's more than just construction workers.
I think they're going to need quite a few humans around for quite a while.
And I don't even think that the robots would want to get rid of us
because we, again, have skill sets that are so different from theirs
that they would probably want to keep us all around. And I think they want to keep Negin, right? I think they want to keep...
I hope so.
But the king has the court jester, you know?
Right, exactly. They need to have some form of entertainment.
I have a follow-up on the robots as construction workers,
which is that
if robots did become
construction workers,
would they also still
catcall female passers-by?
Well, that's...
Is that just built
into the job?
Only during the lunch break,
right?
Is that what they do?
Yeah.
That's how science fiction
movies would...
Like, for example, in the Jetsons, you know, the maid who's a robot was actually female, but it's a robot.
You know, it was hard to break out of these gender stereotypes that people were, you know, the maid didn't have to have any gender at all, but it was a female maid.
Yeah.
It was on wheels and it was quite the thing.
I talk about that in the book, too, about how the design of these robots,
a lot of our own biases flow into that.
So if you had construction workers build a robot, a construction working robot,
and they liked a cat call, they might make the robot cat call as well.
I mean, we do this constantly in less funny ways.
Yeah, because if we program it, it's got us in it,
whether we want it to or not.
That's right.
Maybe that's how we think about it.
Even when we think it doesn't, it does.
That's the more pernicious biases that filter in to what it is we do.
Let's go for another question.
Negin.
Okay.
Gary Manaberg asks,
we know how to model cognitive intelligence
and probably even emotional behavior in machines.
How close are we to building machines
that can find an appropriate mate
and then produce offspring
and then teach the offspring behaviors
that are not encoded in the algorithms?
I love this question because I also
envision a world where there's like robot Tinder. You know what I mean? Kate, is that happening?
Well, let's hold on to that. Let's take a break. And when we come back,
we'll return to the subject of robot Tinder on StarTalk with Kate Darling. We'll see you in a moment.
Hi, I'm Chris Cohen from Haworth, New Jersey,
and I support StarTalk on Patreon.
Please enjoy this episode of StarTalk Radio with your and my favorite
personal astrophysicist,
Neil deGrasse Tyson.
We're back. StarTalk.
Talking about AI ethics.
Kate Darling has a new book out
comparing our relationship with animals
and how that might give us insight
to the future of our relationship with robots.
It's been out for several months.
Check it out.
She's at the center of all we need to be thinking about
with regard to robot ethics.
And we're all going to die.
I have to end every comment with that.
So, Negin, you left off with a fun question.
Just tell us what that question is again.
Yeah, well, the question is about,
we know how to model cognitive intelligence
and probably even emotional behavior in machines.
How close are we to building machines
that can find an appropriate mate
and then produce offspring
and then teach the offspring behaviors
that are not encoded in the algorithm?
Okay, why would we do this?
I mean,
there's always this technological determinism
that because we can do something, we should do something,
or that it will happen.
I think that we have this total fascination
with recreating ourselves.
I think that for art and entertainment purposes,
we will always be chasing these particular goals.
I don't think we're as close to modeling
all of human cognition and emotional behavior
as some people may think we are. I actually think we're quite a ways away, and I don't really
see the purpose of creating a robot Tinder, as Negin said, other than that it would be hilarious,
and I would love to swipe through that.
Which is reason alone, if you ask me, but continue.
I mean, that's fair, you know, and in that sense, you know, maybe it will happen soon.
Actually, I might go back to the lab and suggest that we create a robot Tinder,
just because I really want to see that now.
The flip side of that, or an additional element to that question was,
if robots are programmed to the way they can maybe learn emotions,
would a hybrid of those two robots have to be derivative of what those two robots were?
Or can what emerges from it acquire or self-program a brand new behavior that was not seen in what made it? I don't see any reason why we couldn't at some point do something along those lines.
I don't think that anything that is encoded in us is something that we couldn't somehow recreate. So in theory, I agree that that's possible. I just don't see it happening anytime soon. I'm
kind of a crotchety skeptic in that sense. And I think there are plenty of other questions that we need to be focused on right now
before we even get to that one.
But I do love the theoretical question
of what could that look like?
And I am open to us getting there.
All right, all right.
Negin, keep it coming.
And I do think just as someone,
I'm the dot, my mom is in real estate and my dad's a surgeon.
And then together they made a comedian.
So I feel like I do.
I see a future where robots just will come up with a third crazy thing.
We didn't even anticipate.
There you go.
There you go.
All right.
You got another one.
Yes.
From Dean Clunk.
We have the question.
Since we have a sort of narrow idea of what intelligence and knowledge is
due to humans only seeing humans as such,
do you think an alternate outcome of the evolvement of robotics and AI is possible,
where we don't necessarily make them leaps and bounds smarter than us,
but them becoming intelligent in ways we can't even foresee or understand,
given our current understandings?
Yes. So this is what my book is all about, that that is actually the ideal future.
We don't want to recreate human intelligence.
We do want to create something new and something different, something supplemental.
The one thing that I would add, though, to that question is that we have a lot of agency in this.
We as humans can decide what technology we create, right?
So it's not just, you know, is something going to happen or is something possible?
It depends.
It depends on whether we read my book and agree that this is a direction we should go in or whether people just buy it and throw it out.
I don't know.
I think that we have so much agency and choice
in shaping the future.
It's not up to the machines.
So that's a warning shot, really.
What you're saying is you are offering a path to the future
where we don't all die,
but that it becomes a sensible invocation of robot technologies and robot intelligence.
And if we don't read your book, it's the end of civilization.
This is what I got from your book.
Yes, because let me be clear, we could all die.
But it wouldn't be the robot's fault.
It would be your fault for not reading my book.
I love it.
I love it.
That's the answer.
There it is.
All right, Negin, give me some more.
Okay.
Hyperactive Jedi says,
I'm curious to know how far robots will get.
And I'm so sorry, Kate,
but this is, we're back to, we're all going to die.
Okay.
How far will robots get with warfare?
And what would long-term use of robots, such as drones or even a robot that wipes your butt,
do to the psychological standing of a person controlling slash using such machines?
I kind of, you lost me at the wiping the butt part.
Like, how did that connect to warfare?
Well, let me lead off here and say.
Yeah, please.
The original Star Trek, okay, from 1966, 67, there was an episode where a civilization got so advanced that they conducted warfare via computer.
And the computer
would log losses
on one side or the other
and all the people
who were killed in that war game
would then have to go into this chamber
and be destroyed.
And that's how they were fighting war.
The war reached
a level where it was just that organized.
And of course the Star Trek crew they're not supposed to
interfere with it but of course they do every single
episode and they say no
you have to know that war
is hell and war is bloodshed
and war is pain and war
is not just this
this machine you
walk into and just disappear because
the computer told you to.
So the death and bloodshed that comes with war
seems to be an important force
in making sure we don't fight wars in the future.
Has that really prevented us from fighting wars?
Because it feels like we still fight enough war.
No, but maybe you'll think twice.
I don't know. So getting back to the question and then landing in your lap,
if we get better and better at having machines wage our war,
what is a drone that fires missiles while someone with a joystick is 2,000 miles away,
all right? That person doesn't hear or feel the bloodshed wrought by the drone, and the drone is
a computer. It's a robot. So where do you think this goes, Kate? Yeah, I mean, this is actually
a really important question because, you know, the use of
technology and warfare is really changing the nature of it. I will point out that we did try
to use animals as autonomous weapons for many, many years, back to ancient times, which is
equally a kind of setting an autonomous technology loose, like a flaming pig, in order to wreak havoc and destruction.
But obviously with the machines today, we can do much different things and more precise things.
And it is in some cases helping soldiers stay out of the battlefield and be out of harm's way.
But it's also allowing us to make kill decisions without the same cost of needing to put people in danger. So there is a lot of debate over
to what extent we should be allowing weapon systems that are autonomous or semi-autonomous
on battlefields. And there's even movements to ban autonomous weapon systems
before the UN.
And I think that the direction that this goes in
ultimately depends on where we want it to go, right?
I think we should be having these conversations.
I think these are the very important conversations to be having
rather than, are the robots going to come and kill us all?
No, like which countries are going to use robots in which way to harm people?
And to what extent does removing people from the battlefield actually lead to more harm, because you don't have as much of a cost to the people making the decisions?
Forgive me for not remembering who said this,
but the first time someone was able to kill another person at a distance,
I don't remember if it was a bow and arrow or some military advance
where they're not right in front of you to kill them.
The person commented, this is the end of valor.
Interesting.
Are you brave by launching a missile over a wall?
Where is the bravery in that?
Where is the valor in that?
Where is the heroism in that?
Or put another way, where is the responsibility in that?
Like, do you feel as responsible for the harm that you've caused
if you just pressed a button and it happened many miles away?
That's way better put than I just said.
Exactly.
And can I also, I don't want us to forget about the other really important thread
in this question, which is the psychological impact of a robot wiping your butt.
That was the other.
Don't the Toto toilets basically do that?
Do you lose sense of your own valor in your own butt wiping?
No, no.
The Japanese toilets do that already.
Wipe your butt.
They do.
They rinse them off and everything and dry them.
They rinse everything.
Yes.
Oh, have you guys?
They're so exciting.
I love that.
And some people prefer that because if you can't wipe your own butt and you can choose to have,
you know,
a person do it or a robot,
some people would choose a robot.
I'm hoping everyone would choose the robot.
Yeah.
I'm going to say most people
are going to go robot on that one,
on butt wiping in particular.
I don't know.
It depends on how often they go haywire
and wreak havoc.
I don't,
yeah.
Right. all right.
All right, time for a couple more
before the end of the segment.
Negin.
Yes.
So let's see.
Violetta and Mom and Izzy ask the following question.
I'm going out for my junior high's archery team
this coming school year,
so my question is inspired by that
and the robotic bow and arrow seen in the Hunger Games.
Will there be robotic or AI weaponry in the future
and how will it be used?
So a little of a sister question to the earlier one.
Yeah, I have bad news for you.
We already have... well, I don't believe we have the bow and arrow.
I read The Hunger Games but didn't watch the movies,
and I don't really remember.
But we already have robots that can, for example, aim and shoot a weapon,
although currently people aren't allowed to let them just do this autonomously.
There always has to be a human in the loop making a decision.
We already have robots that could do it.
And so the question isn't when will we have those or will we?
The question is what are we going to do with them?
Which has been your mantra the whole time here.
Yes.
It's never the robot's fault.
It's your fault.
Someone made that robot.
Yeah, yeah.
All right.
All right, Negin, keep them coming.
We have Pat Elvin comes in with a question.
As machines become more sophisticated,
will they become self-aware?
And here's the key. How do we protect them from abuse?
Which sounds like how do we protect the robots from abuse
rather than how do we protect humans from robot abuse?
Anyway, both of those questions. Yeah. And I want to, like, put extra emphasis on the question:
achieve self-awareness, consciousness. I mean, that's, this seems to be the big turning point
in all plot lines. You know, when did Skynet achieve consciousness? And then you had Terminator,
right? So can you comment, because you haven't yet, on achieving consciousness and self-awareness?
Yes. So this is, like you said, the big plot line, the thing that we're very interested in about robots and AI in all of science fiction: what happens when they become conscious and self-aware?
And one of the things that I look at in my book...
Wait, and I realized we're out of time.
No, just for this segment.
When we come back to StarTalk, our third and final segment, we'll find out from Kate Darling what happens when robots achieve consciousness on StarTalk.
Time to acknowledge our Patreon patrons who support this show.
Dino Vidic, Violetta, and my mom Izzy, and Jenny Morrow.
Guys, thanks so much for what you do for us by giving us your support through Patreon. Without you, we couldn't do this show.
And for anyone else listening who would like their
very own personal Patreon shout-out,
please go to patreon.com
slash startalkradio and support
us.
We're back, StarTalk.
We're talking about AI.
We've got Kate Darling.
Not her first rodeo with us because this is a topic that comes up all the time.
And I've got Negin Farsad.
Negin, what's your social media handle?
Oh, at Negin Farsad on all of the socials, including newly entered the world of TikTok.
Oh, welcome to TikTok.
Thank you very much. Welcome. And Negin, N-E-G-I-N F-A-R-S-A-D.
Yes, on all platforms.
How about you, Kate? Are you socially active?
I am mostly on Twitter, G-R-O-K underscore.
What?
Grok. Okay. The Kate Darling is silent in her Twitter account.
I told you she's a robot.
I told you that.
That is such a robot's handle, Kate, come on.
Grok underscore?
Too on the nose.
Negin, I told you.
I don't even know who's on here right now.
She calls herself Kate.
All right.
So, Kate, we were trying to find out from you, from a question,
what happens when the robots achieve consciousness?
And has it already happened?
And if it hasn't, how soon will it happen?
And if it does happen, is that a watershed moment in civilization?
It's such a great question. According to our science fiction, it is going to be a watershed
moment. And no, it hasn't happened yet. Although we don't have a good definition of what consciousness
even is. So depending on how you would define it, maybe we have achieved that if you have a very low bar.
I actually love to compare this, though, to our history of animal rights and our history of not really caring that animals are conscious in Western society. Because when you look at how
we've treated other non-humans that have achieved consciousness, arguably,
we haven't really protected them or done anything about it. It has not really been a watershed moment.
The watershed moments have been more about the animals that we find very cute or that we relate
to in some way emotionally. Like Pomeranians? Yes, Pomeranians. Just as an example.
I just pulled that out.
I don't know why I said that.
Yeah, we might think that we care
that the Pomeranian is conscious
when really we care that it is a cute little fluff ball.
Those are the fluffy ones that look like balls, right?
Yes, they are.
Yes.
Yeah.
Yes.
But we haven't really,
it hasn't been a watershed moment in the animal kingdom, so why would it be for robots?
Can I, okay, so the thing that's really popular in movies is that the robots become self-aware and then they want to destroy us.
Is it possible that the robots become self-aware and it turns out they're super delightful and we just want to do like brunch with them all the time.
You know what I mean?
Like, why do we always assume the evil part?
You know, the guy, I forget his name,
the guy who runs Pinboard, he's like an entrepreneur guy.
He has said that what if when the robots become self-aware,
they are just crippled by existential angst
and they just sit around all day
worrying about artificial super, super, super intelligence.
Oh, yeah.
You don't-
Reading Kierkegaard and-
Like just because someone's smart
doesn't mean that they're not going to be like depressed.
What if they become drug addicts
because they can't handle the reality?
You don't know. It's not necessary. What is their drug? Like extra USB cords?
I know. That's what I'm trying to figure out. I need some more USB-C. I need that 5G.
More bandwidth.
So that's an interesting point, Kate.
I mean, I don't want to undersell the point that you're making here, that there's a lot to be gleaned by studying our prior relationship with animals.
There's a lot of insights to come to that.
And not enough of us are taking advantage of those lessons.
And if we read your book, then we will know how to.
Yes, I think that it's an analogy that works very well,
that we're all familiar with.
And yet we somehow always are comparing robots to us
instead of to the other
non-humans that are autonomous that we've dealt with previously. All right. You know, we had as a
guest on StarTalk the actor who played C-3PO in Star Wars, and he said he's the only person in
the world who knows what it's like to be a robot. And they said, well, what do you mean?
You're an actor.
You played a robot.
I said, no, no, no.
He's there in the robot outfit,
and other people, humans, are talking to each other,
completely ignoring him
until the moment they need him to do something for them.
And so this is the robot servant, right,
not the autonomous robot.
And so he felt very lonely. I don't want to put words in his mouth, but he was describing this
feeling where he's only relevant when they deem it so. Otherwise, he's just there. And that's a
weird psychological state that I wonder, we might need robot psychologists.
I don't know, isn't that just called being an actor on set?
Do you have to be wearing the robot costume for that to happen?
That is true. Because when you're not needed, no one needs you, right?
Yeah, they're always telling you what to do.
It's also like being everyone's little sister, right? Isn't it just like
being a sibling? You don't want
it. Yes. Yeah, exactly. The younger sibling. Nobody wants you around until they want you around.
Until they need you, right. So, Nagin, give me some more. These are good.
All right. So, from Lorenzo and Elizabetta, we have the question, do you think that we will
ever create an artificial intelligence complex as much as our brain with emotions controlled by
electricity that mimics our biological hormones and are we going to have a digital conscious mind
that can think for itself?
That's a lot of questions at once, actually. So let's unpack it.
So what's this about uploading your consciousness? And then it's in a jar, and it's electronics, so it's living a whole life in a jar, like the Matrix?
I mean, look, and I think I can answer all of these questions the same way, which is it's really hard to make predictions and I never say never, right?
A lot of these things could happen or something that no one has anticipated could happen as we keep playing with these technologies.
I'm less interested in like uploading my consciousness to a jar than some
people are and more interested in how we're going to deal with robots as
entities in our lives. But yeah,
I would not say no to any of those things.
But what do you think, Neil?
What are your predictions for those?
I strongly align with so much of where you're coming from.
Among them is, just because it's something that may be even possible,
is anyone really going to want to do that?
And I know people talk the talk, but what are you accomplishing by that?
And of all the advances as they come over, is that going to be your highest priority? Or is it
going to be, I want it to make a better cup of coffee. I want to get to Detroit faster. I want
to, you know, there are other things that might just simply have higher priority in our lives.
And that's how I kind of think about it. And the science fiction writer bypasses all of the natural needs and
desires and priorities we might have and goes to an extreme one. We're all going to die.
And then they sell movie tickets. I think that's really what's driving it.
But also like the idea that biological hormones would be replicated somehow in robots. I just
want to say like, I hope we don't give them, like,
you know, menstrual cycles.
Like, we don't need to have
another race of thing
be set by menstrual cycles.
Like, we did it to women.
I don't know how to comment on that.
But what I do know is
that since men have been in charge
for most of civilization,
it was they who got to say how hormones are affecting women.
And whereas if women were in charge,
they would have gotten to say,
men, you're messing up the world with your testosterone.
Stop fighting, put the weapons down.
And so we don't have a self-awareness of it
because we're just the guys, right?
True, true.
But the world is so messed up because of testosterone.
And by the way, I can say as a guy, I feel it.
I mean, I don't know, you know, there's the person.
I mean, here's the question.
Is this duplicatable in robots, I guess, is the ultimate question here: the rage a man feels when someone cuts them off at the red light or something or whatever, right?
The number of men putting their head out the window screaming is incalculably higher than the number of women who are reacting to that same incident the same way.
And why not just say, guys, put down your hormones. You're being hormonally influenced. But the men are in charge, so we don't get to say that.
Well, you know, some people have even said that this idea that artificial superintelligence
would want to kill us all is a straight-up projection of the male dominance in our society, and not
anything that we would build into machines.
So that's interesting. Yeah. So, because like you said earlier, there's a bias you're going to put in, whether you're even self-aware of it or not.
and so here are all the science fiction stories and all the horror stories and the apocalyptic stories, and they're all
having the robots behave as men would. But in fact, they're just robots.
Yes. We do a lot of projecting of human qualities onto these machines, and we could build them that
way, and we might if we don't stop to think about it,
but we don't have to.
And they don't even have to have gender.
Yeah.
And also, I just want to make a case
for projecting onto robots Pomeranian qualities.
So just do with that what you will, Kate Darling,
but that's what I'm pitching.
So, Kate, are you guys working on any fuzzy, cuddly robots?
We actually are.
Lap robots?
Those actually exist.
I wouldn't want to try to completely replicate your Pomeranian because we could never get that right.
But any robot that looks kind of like something that people are familiar with but isn't quite, like, it doesn't have to be a dog, but it could be like a baby harp seal.
That one exists.
There's a baby harp seal robot. That's very cuddly.
Oh, all right. So we're already there. Okay.
Yeah. Time for just a few more questions.
So we have from Abby Shake Mature.
I would love to hear Dr. Darling's thoughts on artificial general intelligence, AGI, and its future.
Are there any other minds working on AGI, other than Dr. Ben Goertzel?
How promising do you think this approach is towards achieving strong AI?
Yeah, so first tell us what AGI is.
Well, right now, basically anything that people are working on in artificial intelligence is very narrowly focused.
Machines can do a task or a thing within very narrow limitations.
But the ideal that some people are chasing is artificial general intelligence, which is something that's more like human intelligence.
Humans are able to do a lot of different things. I'm talking to you here right now,
but if one of these plants burst into flames behind me, I would be able to leap out of my
chair and do something about it because I have that contextual awareness and I can task switch
and I can do a lot of different things. But the machines that we've created so far are
not able to do that at all. And so some people are chasing this goal of trying to create more
general intelligence in machines. Unfortunately, we have no clue how that can even happen. We don't
even fully understand how human intelligence works. So it's very difficult to create machines that can do that.
And there's a couple different camps of people. Some people believe that that will be possible
soon. Many people that I work with don't believe that that's going to be possible. Or at the very
least, it's going to require so many smaller breakthroughs before we even get close to it that
we'll have a much better prediction of what that would even look like.
Because that also, we don't know what it would even look like.
Could it be that you will never achieve
artificial general intelligence
because you will always have a targeted AI
that will do its task better than any AGI could possibly do it?
That almost has to be the case.
I mean, you know me.
I always say, why are we trying to recreate AGI when humans can do it?
Why don't we create something that's more useful that can do something that humans can't do?
We have great AGI already.
Right.
Okay.
I like that.
I like that.
This is the first, the most hopeful I've ever been.
You're a very pro-human
robot expert.
Yes, yes.
I'm feeling better. Thank you, Kate.
Yeah, I feel like
we don't need to end every sentence
with, we all might die.
Although I do want to stress that we could all die.
Again, yeah.
So, time for like one, maybe two more questions.
Alec asks, if you believe the brain is nothing more than the sum of its parts,
is it fair to say we can one day recreate not just artificial intelligence,
but artificial consciousness?
So that kind of goes back to the becoming self-aware thing.
Like, is it even in the cards technologically right now?
And can you make a circuit that's sufficiently complex?
There's been a lot of assumptions
that if it is sufficiently complex,
it is a natural next step to achieve consciousness,
whatever that is.
And is that a fair guess?
I mean, yeah, we would need to define consciousness first
in order to answer that question.
But I am very much in camp yes,
we are just the sum of all of our parts.
And in theory, you should be able to replicate the parts that we have.
We just have no idea how to do that right now.
All right.
So the sum of the parts thing,
there are things that come together and become more than the sum of the parts, right? Like you
can analyze a bird in great detail and have, and nowhere in there will you have an understanding
that a group of birds will flock together. That's a good point. Right? So that's an emergent phenomenon that, in fact, only exists in the group.
Because one bird cannot flock, right?
That makes no sense.
So could consciousness be an emergent feature that wasn't built in from the beginning, but sort of shows up as a natural
consequence of the evolution of neural complexity?
Yeah, I mean, that makes sense to me. But again, it depends on how we define consciousness, right?
Right, right. And you know my best evidence for why I know we don't understand consciousness?
What best evidence? Ready?
People continue to write books about it.
Right. So if you go to the shelf in the library and say, where are the physics books? It's like this wide on the shelf. It's got, like, the Newton physics, the Einstein, and that's it, right?
Now say, where are the books on consciousness and the mind?
It's shelf after shelf after shelf.
And so I think if we really understood it,
we wouldn't have to keep writing books on it.
That's my measure.
We don't understand it, but we really want to.
We are obsessed with it. Really? Thank you.
But it's funny because I feel like, you know,
I was obsessed with it too.
Like all those books were like people who were in high school.
You know what I mean?
It's like I questioned consciousness
in high school and college.
And then I was just like, I'm good.
I'll ignore this question.
I'll ignore this life's great question.
Who cares?
You know what I mean?
So in high school,
you were contemplating your consciousness
in high school.
That's cool.
Like, yeah, I feel like I went through that phase, right? I was also goth, you know,
and I also went through like a punk gypsy phase. You know, there was a lot going on with me.
But I feel like I came to terms with like, I can't answer this question, so I will move on forever.
So you were punk, goth, Iranian, American. This is all of this?
Okay.
Yeah.
Neil, I don't even want to get into,
I did mime for a while, but I did.
So anyways, there was a lot going on.
No, that's it.
That's the last time.
Is that the last time I'm on this show?
That's the last time.
Do I need to see myself out?
We have a mime rule here.
Neil is creating a very convincing box for people who are listening to the podcast.
They don't see this.
He is excellent.
I can't speak because I'm miming a box.
So I think we got to call it quits there.
But Kate, it's great to have you back on the show.
And you have to promise us that when you do create a robot that achieves consciousness,
you call us first. Yes, or a robot that can mime.
All right. So again, it's been great to have you. We love this topic. It should be obvious, Kate.
And you know we're going to find you again. Thanks again, Nagin and Kate. Nagin, we can find her on
Adult Swim. Birdbrain? What's it called? Bird Girl. Bird Girl.
And your character really scares me
getting inside people's heads
and changing their mind.
That's scary.
I have to catch a few episodes
and get back to you on that.
And Kate,
keep it up.
Keep it going there
up at MIT.
We all love the Media Lab.
Such really cool things
come out of there.
And your work
is no exception to that.
So everyone check out Kate's book and
give me the full title so I don't mangle it. It's The New Breed, What Our History with Animals
Reveals About Our Future with Robots. And there's a lot of insights there that I think can benefit
us all so that every time I have this podcast, I don't have to say we're all going to die.
Thank you, Kate, for saving us from that thing.
Thank you.
So that's all we have time for.
I'm Neil deGrasse Tyson, your personal astrophysicist.
Keep looking up.