Lex Fridman Podcast - Ayanna Howard: Human-Robot Interaction and Ethics of Safety-Critical Systems
Episode Date: January 17, 2020
Ayanna Howard is a roboticist and professor at Georgia Tech, director of the Human-Automation Systems lab, with research interests in human-robot interaction, assistive robots in the home, therapy gaming apps, and remote robotic exploration of extreme environments. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast". Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 - Introduction
02:09 - Favorite robot
05:05 - Autonomous vehicles
08:43 - Tesla Autopilot
20:03 - Ethical responsibility of safety-critical algorithms
28:11 - Bias in robotics
38:20 - AI in politics and law
40:35 - Solutions to bias in algorithms
47:44 - HAL 9000
49:57 - Memories from working at NASA
51:53 - SpotMini and Bionic Woman
54:27 - Future of robots in space
57:11 - Human-robot interaction
1:02:38 - Trust
1:09:26 - AI in education
1:15:06 - Andrew Yang, automation, and job loss
1:17:17 - Love, AI, and the movie Her
1:25:01 - Why do so many robotics companies fail?
1:32:22 - Fear of robots
1:34:17 - Existential threats of AI
1:35:57 - Matrix
1:37:37 - Hang out for a day with a robot
Transcript
The following is a conversation with Ayanna Howard.
She's a roboticist, a professor at Georgia Tech, and director of the Human-Automation Systems Lab, with research interests in human-robot interaction, assistive robots in the home, therapy gaming apps, and remote robotic exploration of extreme environments.
Like me, in her work, she cares a lot about both robots and human beings, and so I really
enjoyed this conversation.
This is the Artificial Intelligence Podcast.
If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, follow on Spotify, support it on Patreon, or simply connect with me on Twitter at lexfridman, spelled F-R-I-D-M-A-N.
I recently started doing ads at the end of the introduction.
I'll do one or two minutes after introducing the episode and never any ads in the middle
that can break the flow of the conversation.
I hope that works for you and doesn't hurt the listening experience.
This show is presented by CashApp, the number one finance app in the App Store.
I personally use CashApp to send money to friends, but you can also use it to buy, sell,
and deposit Bitcoin in just seconds. CashApp also has a new investing feature. You can buy
fractions of a stock, say $1 worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square, and member SIPC.
I'm excited to be working with CashApp to support one of my favorite organizations called
FIRST, best known for their FIRST Robotics and LEGO competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means that donated money is used to maximum effectiveness.
When you get Cash App from the App Store or Google Play and use code LexPodcast, you'll get $10, and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world.
And now, here's my conversation with Ayanna Howard.
Who is the most amazing robot you've ever met, or perhaps had the biggest impact on your career?
I haven't met her, but I grew up with her — of course, Rosie. And I think it's because, also, Rosie from the Jetsons —
She is all things to all people, right?
Think about it.
Like anything you wanted, it was like magic.
It happened.
So people not only anthropomorphize, but project whatever they wish for the robot to be onto Rosie.
But also, I mean, think about it.
She was socially engaging.
She, every so often had an attitude, right?
She kept us honest.
She would push back sometimes when, you know,
George was doing some weird stuff.
But she cared about people, especially the kids.
She was like the perfect robot.
And you've said that people don't want
the robots to be perfect.
Can you elaborate on that?
What do you think that is?
Just like you said, Rosie pushed back a little bit every once in a while.
Yeah, so I think it's that...
So if you think about robotics in general, we want them because they enhance our quality
of life.
And usually that's linked to something that's functional, right?
Even if you think of self-driving cars,
why is there a fascination?
Because people really do hate to drive.
Like there's the Saturday driving where I can just speed,
but then there's the, I have to go to work every day
and I'm in traffic for an hour.
I mean, people really hate that.
And so robots are designed to basically enhance our ability
to increase our quality of life.
And so the perfection comes from this aspect of interaction.
If I think about how we drive, if we drove perfectly, we would never get anywhere.
Right?
So think about how many times you had to run past the light because you see the car behind you is about to crash
into you.
Or that little kid kind of runs into the street.
And so you have to cross on the other side because there's no cars, right?
Like if you think about it, we are not perfect drivers.
Some of it is because it's our world.
And so if you have a robot that is perfect in that sense of the word, they wouldn't
really be able to function with us.
Can you linger a little bit on the word perfection? So from the robotics perspective, what does
that word mean and how is sort of the optimal behavior as you're describing different than
what we think is perfection?
Yeah, so perfection, if you think about it in the more theoretical point of view, it's
really tied to accuracy, right?
So if I have a function, can I complete it at 100% accuracy with zero errors?
And so that's kind of, if you think about perfection, in the sense of the word.
And in a self-driving car realm, do you think from a robotics perspective,
we kind of think that perfection means
following the rules perfectly, sort of defining,
staying in the lane, changing lanes.
When there's a green light, you go,
when there's a red light, you stop.
And be able to perfectly see all the entities in the scene — that's the limit of what we think of as perfection.
And I think that's where the problem comes is that
when people think about perfection for robotics,
the ones that are the most successful
are the ones that are quote unquote perfect,
like I said, Rosie is perfect,
but she actually wasn't perfect in terms of accuracy,
but she was perfect in terms of how she interacted and how she adapted.
And I think that's some of the disconnect is that we really want perfection with respect to
its ability to adapt to us. We don't really want perfection with respect to 100% accuracy
with respect to the rules that we just made up anyway. I think there's this disconnect sometimes between what we really want and what happens.
We see this all the time in my research.
The quote-unquote optimal interactions are when the robot is adapting based on the person, not 100% following what's optimal based on the rules.
Just to linger on autonomous vehicles for a second — just your thoughts, maybe off the top of your head.
How hard is that problem?
Do you think based on what we just talked about?
There's a lot of folks in the automotive industry that are very confident from Elon Musk
to Waymo to all these companies.
How hard is it to solve
that last piece? The gap between the perfection and the human definition of how you actually function
in this world? Yes, so this is a moving target. So I remember when all the big companies started to
heavily invest in this. And there was a number of even roboticists as well as folks who were putting in the VCs and
corporations, Elon Musk being one of them that said, self-driving cars on the road with
people within five years.
That was a little while ago.
And now people are saying, five years, 10 years, 20 years, some
are saying never, right? I think if you look at some of the things that are being successful, it's these basically fixed environments, where you still have some anomalies, right? You still have people walking, you still have stores, but you don't have other drivers, right? Like, other human drivers — it's a dedicated space for the cars.
Because if you think about robotics in general, where it's always been successful — I mean, you can say manufacturing, like way back in the day, right? It was a fixed environment. Humans were not part of the equation. We're a lot better than that now. But when we can carve out scenarios that are closer to that space, then I think that's where we are.
So a closed campus, where you don't have other drivers, and maybe some protection so that the students don't get in front just because they want to see what happens — like, having a little bit of that. I think that's where we're going to see the most success in the near future.
And be slow moving.
Right. Not, you know, 55, 60, 70 miles an hour, but the speed of a golf cart, right?
So, that said, the most successful robots in the automotive industry operating today, in the hands of real people, are ones that are traveling over 55 miles an hour in an unconstrained environment, which is Tesla vehicles — so, Tesla Autopilot. So I would just love to hear sort of your thoughts on two things.
So one, I don't know if you've gotten to see or heard about something called Smart Summon — a Tesla, Autopilot-system feature where the car drives, zero occupancy, no driver, in the parking lot, and slowly sort of tries to navigate the parking lot to find its way to you.
And there's some incredible amounts of videos and just hilarity that happens as it awkwardly tries to navigate this environment.
But it's a beautiful nonverbal communication between machine and human that I think is like some of the work that you do in this kind of interesting human-robot interaction space.
So what are your thoughts in general about it?
So I do have that feature.
You drive a Tesla?
I do. Mainly because I'm a gadget freak, right? So I say it's a gadget
that happens to have some wheels. And yeah, I've seen some of the videos.
But what's your experience like? I mean, you're a human-robot interaction roboticist — you're a legit sort of expert in the field. So what does it feel like for a machine to come to you?
It's one of these very fascinating things, but also I am hyper, hyper alert, right? Like, I'm hyper alert. My thumb is like, okay, I'm ready to take over. Even when
I'm in my car, I'm doing things like automated backing in — so there's a feature where you can do this automated backing into a parking space, or bring the car out of your garage, or even pseudo-autopilot on the freeway.
I'm hypersensitive.
I can feel, as I'm navigating, like, yeah, that's an error right there.
I'm very aware of it, but I'm also fascinated by it.
And it does get better.
Like I look and see, it's learning from all of these people
who are cutting it on.
Like every time I cut it on, it's getting better, right?
And so I think that's what's amazing about it is that.
This nice dance of — you're still hypervigilant, so you're still not trusting it at all, and yet you're using it on the highway. If I were to ask, you know, as a roboticist — we'll talk about trust a little bit —
How do you explain it?
You still use it.
Is it the gadget freak part?
Like, you just enjoy exploring technology?
Or is that the right balance between robotics and humans is where you use it but don't trust
it and somehow there's this dance that ultimately is a positive?
Yeah, so I think I just don't necessarily trust technology, but I'm an early adopter.
Right?
So when it first comes out, I will use everything, but I will be very, very cautious of how I use it.
Do you read about it? Do you explore it? Do you just try it? To put it crudely, do you read the manual or do you learn through exploration?
I'm an explorer. If I have to read the manual, then, you know — I do design — then it's a bad user interface. It's a failure.
Elon Musk is very confident that you can kind of take it from where it is now to full autonomy. So from this human-robot interaction, where we don't really trust, and then you try, and then you catch it when it fails — it's going to incrementally improve itself into full autonomy, where you don't need to participate. What's your sense of that trajectory? Is it feasible?
So the promise there is, by the end of next year, by the end of 2020 — that's the current promise. What's your sense about that journey that Tesla's on?
So there's kind of three things going on now.
I think in terms of will people go,
like as a user, as an adopter, will you trust going to that point?
I think so, right?
Like there are some users,
and it's because what happens is
when you're hypersensitive at the beginning
and then the technology tends to work,
your apprehension slowly goes away.
And as people, we tend to swing to the other extreme, right? Because, like, oh, I was hyper, hyper fearful, or hypersensitive, and it was awesome — and we just tend to swing. That's just human nature. And so you will have, I mean — this is a scary notion, because most people are now extremely untrusting of Autopilot. They use it, but they don't trust it. And it's a scary notion that there's a certain point where you allow yourself to look at the smartphone for, like, 20 seconds.
And then there'll be this phase shift, where it'll be like 20 seconds, 30 seconds, one
minute, two minutes. It's a scary proposition.
But that's people, right? That's human.
That's humans. I mean, I think of even our use of, I mean, just everything on the internet, right?
Like, think about how reliant we are on certain apps and certain engines, right?
20 years ago, people would have been like, oh yeah, that's stupid.
Like, that makes no sense.
Like, of course, that's false.
Like now it's just like, oh, of course,
I've been using it.
It's been correct all this time.
Of course, aliens, I didn't think they existed,
but now it says they do.
Obviously.
100% Earth is flat.
So, okay, but you said three things.
So one is the human.
And I think there will be a group of individuals that will swing,
right? I just...
Teenagers.
Teenage, I mean, it'll be, it'll be adults.
There's actually an age demographic that's optimal for a technology adoption, and you can
actually find them, and they're actually pretty easy to find.
Just based on their habits, based on...
So someone like me, who is a roboticist, would probably be the optimal kind of person, right? Early adopter, okay with technology, very comfortable, and not hypersensitive, right? I'm just hypersensitive because I design this stuff. So there is a target
demographic that will swing. The other one though is you still have these humans
that are on the road.
That one is a harder, harder thing to do.
And as long as we have people that are on the same streets,
that's gonna be the big issue.
And it's just because you can't possibly,
I was saying, you can't possibly map the, some of the silliness of human drivers, right?
Like as an example, when you're next to that car
that has that big sticker called student driver, right?
Like you are like, oh, either I am going to like go around.
Like we are, we know that that person
is just going to make mistakes that make no
sense.
How do you map that information?
Or if I am in a car and I look over and I see two fairly young looking individuals and
there's no student driver bumper, and I see them chatting with each other, I'm like, oh,
that's an issue.
How do you get that kind of information and that experience into basically an autopilot?
Yeah.
And there's millions of cases like that where we take little hints to establish context.
I mean, you said kind of beautifully poetic human things, but there's probably subtle things about the environment, about it being maybe time for commuters to start going home from work, and therefore you
can make some kind of judgment about the group behavior of pedestrians, blah, blah, blah,
so on and so on.
Yes, yes.
Or even cities, right?
Yes.
Like, if you're in Boston, how people cross the street, like lights are not an issue,
versus other places where people will actually wait
for the crosswalk.
Yeah, or somewhere peaceful.
But what I've also seen, sort of just even in Boston, is that intersection to intersection is different.
So every intersection has a personality of its own.
So certain neighborhoods of Boston are different. So it kind of ends up based on different timings of day, day and night — there's a dynamic to human behavior that we kind of figure out ourselves. We're not able to introspect and figure it out, but somehow our brain learns it.
We do.
And so you're saying, is there a shortcut?
Is there a shortcut, though, for everybody?
Is there something that could be done, you think — that, you know, what we humans do, it's just like bird flight, right? That's the example they give for flight. Do you necessarily need to build a bird that flies, or can you do an airplane? Is there a shortcut?
So I think the shortcut is,
and I kind of, I talk about it as a fixed space,
where, so imagine that there is a neighborhood
that's a new smart city or a new neighborhood that says,
you know what, we are going to design this new city
based on supporting self-driving cars.
And then doing things, knowing that there's anomalies,
knowing that people are like this, right?
And designing it based on that assumption that, like,
we're gonna have this, that would be an example of a shortcut.
So you still have people, but you do very specific things to try to minimize the noise a
little bit as an example.
And the people themselves become accepting of the notion that there's autonomous cars,
right?
Right.
Like, they move into.
So right now, you have, like — you will have a self-selection bias, right? Like, individuals will move into this neighborhood knowing, like, this is part of, like, the real estate pitch, right?
And so I think that's a way to do a shortcut. It allows you to deploy, it allows you to then collect data with these variances and anomalies, because people are still people. But it's a safer space, and it's more of an accepting space.
I.e., when something in that space might happen — because things do — because you already have the self-selection, people would be, I think, a little more forgiving than in other places.
And you said three things.
Did we cover all of them?
Uh, the third is legal, law, liability, which I don't really want to touch, but it's still of concern.
And the mishmash with policy as well — sort of government, all of that, that whole big ball of stuff.
Mess.
Yeah.
Got you.
So that's, so we're out of time now.
Do you think, from a robotics perspective — you know, if you're kind of honest about what cars do — they kind of threaten each other's life all the time. So cars are very, I mean, in order to navigate intersections, there's assertiveness, there's risk-taking. And if you were to reduce it to an objective function, there's a probability of murder in that function, meaning you killing another human being, and you're using that. First of all, it has to be low enough to be acceptable to you on an ethical level as an individual human being, but it has to be high enough for people to respect you, to not take advantage of you completely and jaywalk in front of you, and so on. So, I mean, I don't think there's a right answer here, but how do we solve that? How do we solve that from a robotics perspective, when danger to human life is at stake?
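A minimal, hypothetical sketch of the framing in that question — a planner that scores candidate maneuvers by delay plus a weighted, capped estimate of the risk of harming a human. The maneuver names, probabilities, and weights below are invented for illustration, not drawn from any real system:

```python
# Hypothetical sketch of the trade-off described above: score candidate maneuvers
# by time cost plus a penalty on the estimated probability of harming a human,
# with a hard ethical ceiling. All names, numbers, and thresholds are illustrative.

RISK_CEILING = 1e-7   # maneuvers whose estimated harm probability exceeds this are rejected outright
RISK_WEIGHT = 1e8     # how heavily residual risk is penalized relative to seconds of delay

def score_maneuver(expected_delay_s: float, estimated_harm_prob: float) -> float:
    """Lower is better; returns +inf for maneuvers over the ethical risk ceiling."""
    if estimated_harm_prob > RISK_CEILING:
        return float("inf")
    return expected_delay_s + RISK_WEIGHT * estimated_harm_prob

candidates = {
    "wait_for_full_gap": (45.0, 1e-9),   # very passive: slow, but nearly zero risk
    "assertive_merge":   (8.0,  5e-8),   # assertive: fast, small but nonzero risk
    "aggressive_cutoff": (3.0,  5e-6),   # over the ceiling: rejected regardless of speed
}

best = min(candidates, key=lambda name: score_maneuver(*candidates[name]))
print(best)  # with these made-up numbers, the assertive-but-bounded-risk merge wins
```

The tension in the question shows up in the two constants: set the ceiling or the weight too strictly and the car only ever waits; set them too loosely and it takes risks no developer should be comfortable signing off on.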
Yeah, as they say, cars don't kill people; people kill people.
Right. So now it would be robotic algorithms doing the killing.
Right. So it will be — no: robotic algorithms don't kill people; developers of robotic algorithms kill people, right? I mean, one of the things is, people are still in the loop, and at least in the near and mid term, I think people will still be in the loop.
At some point, even if it's the developer,
like we're not necessarily at the stage
where robots are programming autonomous robots
with different behaviors quite yet.
That's just a scary notion — sorry to interrupt — that a developer has some responsibility in the death of a human being.
I mean, I think that's why the whole aspect of ethics in our community is so, so important.
Because it's true, if you think about it, you can basically say, I'm not going to work
on weaponized AI. People can say, that's not what I'm going to do.
But yet, you are programming algorithms that might be used in health care algorithms
that might decide whether this person should get this medication or not,
and they don't, and they die.
Okay, so that is your responsibility, right?
And if you're not conscious and aware that you do have that power when you're coding
and things like that, I think that's just not a good thing.
Like, we need to think about this responsibility as we program robots and computing devices
much more than we are.
Yeah, so it's not an option to not think about ethics.
I think it's a majority, I would say, of computer science.
Sort of, it's kind of a hot topic now, I think about bias and so on, but it's, and we'll
talk about it, but usually it's kind of, it's like a very particular group of people
that work on that.
And then people who do like robotics are like, well, I don't have to think about
that — there's other smart people thinking about it. It seems that everybody has to think about it. You can't escape the ethics, whether it's bias or just every aspect of ethics that has to do with human beings.
Everyone. So think about — I'm going to age myself — but
I remember when we didn't have like testers, right?
And so what did you do?
As a developer, you had to test your own code, right?
Like you had to go through all the cases and figure it out and, you know, and then they
realized that, you know, like we probably need to have testing because we're not getting
all the things.
And so from there, what happens is like most developers, they do, you know, a little bit
of testing, but it's usually like, okay, did my compiler bug out?
Let me look at the warnings.
Okay, is that acceptable or not?
Right?
That's how you typically think about as a developer, and you're just assumed that it's going
to go to another process, and they're going to test it out.
But I think we need to go back to those early days when you're a developer, you're developing.
There should be this, like, say, okay, let me look at the ethical outcomes of this, because there isn't a second testing phase of ethical testers, right?
It's you.
We did it back in the early coding days.
I think that's where we are with respect to ethics.
Let's go back to what was good practice, if only because we were just developing the field.
Yeah, and it's, uh, it's a heavy burden. I've had to feel it recently in the last few months, but I think it's a good one to feel. Like, I've gotten a message, more than one, from people.
You know, I've unfortunately gotten some attention recently.
And I've gotten messages that say that I have blood on my hands
because of working on semi-autonomous vehicles.
So the idea that you have semi-autonomy means people will become, will lose vigilance and so on — that's actually humans, as we described. And because of that, because of this idea that we're creating automation, there will be people hurt because of it.
And I think that's a beautiful thing.
I mean, it's, you know, there are many nights where I wasn't able to sleep because of this notion.
You know, you really do think about people that might die
because of this technology.
Of course, you can then start rationalizing and saying,
well, you know what, 40,000 people die in the United States every year, and we're trying to ultimately try to
save lives. But the reality is your code you've written might kill somebody. And that's
an important burden to carry with you as you design the code.
I don't even think of it as a burden if we train this concept correctly from the beginning. And I use, and not to say that
coding is like being a medical doctor, but think about it. Medical doctors, if they've been in
situations where their patient didn't survive, right? Do they give up and go away? No, every time they
come in, they know that there might be a possibility that this patient might not survive. And so when they approach every decision, like that's in their back of their head.
And so why is it that we aren't teaching this? And those are tools, though, right? They are given some of the tools to address that so that they don't go crazy. But we don't give those tools — so it does feel like a burden versus something of,
I have a great gift and I can
do great, awesome good, but with it comes great responsibility.
I mean, that's what we teach in terms of, you think about the medical schools, right?
Great gift, great responsibility.
I think if we just change the messaging a little, great gift being a developer, great
responsibility.
And this is how you combine those.
But do you think — and this is really interesting, it's outside my area — I actually have no friends who are sort of surgeons or doctors.
I mean, what does it feel like to make a mistake
in a surgery and somebody to die because of that?
Like, is that something you could be taught in medical school, sort of how to be accepting of that risk?
So, because I do a lot of work with healthcare robotics — I have not lost a patient, but, for example, the first one is always the hardest — they really teach the value, right? So they teach responsibility, but they also teach the value.
Like you're saving 40,000.
But in order to really feel good about that,
when you come to a decision,
you have to be able to say at the end,
I did all that I could possibly do, right?
Versus a, well, I just picked the first widget, right?
So every decision is actually thought through.
It's not a habit, it's not a,
let me just take the best algorithm that my friend gave me,
right?
It's a, is this it?
This is the best.
Have I done my best to do good, right?
And so.
And I think burden is the wrong word. It's a gift, but you have to treat it extremely seriously.
Correct.
So, on a slightly related note, in a recent paper, The Ugly Truth About Ourselves and Our Robot Creations, you discuss, you highlight some biases that may affect the functioning of various robotic systems. Can you talk through, if you remember, examples of some?
There's a lot of examples.
What is bias, first of all?
Yes. So bias, which is different than prejudice. So bias is that we all have these preconceived notions about everything, from particular groups to habits to identity, right? So we have these predispositions, and so when we address a problem — we look at a problem, make a decision — those preconceived notions might affect our outputs or outcomes.
So the bias could be positive or negative. And then there's prejudice.
The negative?
Prejudice is the negative, right? So prejudice is that not only are you aware of your bias, but you then take it and have a negative outcome, even though you are aware.
And that could be gray areas too. That's the challenging aspect of all
ethical questions. So I always, like — so there's a funny one. And in fact, I think it might be in the paper, because I think I talk about self-driving cars.
But think about this.
Take teenagers, right? Typically, insurance companies charge quite a bit of money if you have a teenage driver.
So you could say that's an age bias, right?
But no one will, I mean, parents will be grumpy, but no one really says that that's not
fair.
That's interesting.
We don't, that's right.
That's right.
Everybody in human factors and safety research, almost — I mean, they're quite ruthlessly critical of teenagers.
And we don't question, is that okay?
Is that okay to be ageist in this kind of way?
It is, and it is age, right?
It's definitely age, there's no question about it.
And so, so these are, this is the gray area, right?
Because you know that teenagers are more likely
to be in accidents.
And so there's actually some data to it.
But then if you take that same example and you say,
well, I'm going to make the insurance higher
for an area of Boston because there's a lot of accidents.
And then they find out that that's correlated
with socioeconomics.
Well, then it becomes a problem, right?
Like, that is not acceptable. But yet the teenager one, which is ageist — it's against age — is, right?
So—
We figure that out as a society by having conversations, by having a discourse. If you look throughout history, the definition of what is ethical and not has changed, and hopefully always for the better.
Correct.
So in terms of bias or prejudice in robotics, in algorithms, what examples do you sometimes
think about?
So I think about quite a bit the medical domain,
just because historically, the healthcare domain
has had these biases, typically based on gender and ethnicity,
primarily, a little on age, but not so much.
Historically, if you think about the FDA and drug trials, it's, you know, harder to find women that, you know, aren't child-bearing, and so you may not test drugs on them at the same level.
Right.
So there's these things.
And so if you think about robotics, right, something as simple as I like to design
an exoskeleton, right? What should the material be?
What should the weight be?
What should the form factor be?
Who are you going to design it around? I will say that in the US, you know, women's average height and weight is slightly different than guys'.
So who are you going to choose?
Like if you're not thinking about it from the beginning as, you know, okay, I, when I
design this and I look at the algorithms and I design the control system and the forces
and the torques, if you're not thinking about, well, you have different types of body structure,
you're going to design to, you know, what you're used to.
Oh, this fits in my, all the folks in my lab, right?
So, thinking about it from the very beginning is important.
What about algorithms that train on data, that kind of thing? Sadly, our society already has a lot of negative bias. And so, if we collect a lot of data, even if it's in a balanced way, it's going to contain the same bias that our society contains. And so, yeah, is there something there that bothers you?
Yeah, so you actually said something.
You had said how we have biases, but hopefully we learn from them and we become better, right?
And so that's where we are now, right?
So the data that we're collecting is historic.
It's so it's based on these things.
When we knew it was bad to discriminate,
but that's the data we have.
And we're trying to fix it now,
but we're fixing it based on the data
that was used in the first place to...
Exit and post.
Right.
And so the decisions, and you can look at everything
from the whole aspect of predictive policing,
criminal recidivism. There was a recent paper on healthcare algorithms, which had kind of a sensational title. I'm not pro sensationalism in titles. But it makes you read it, right? So it makes sure you read it, but I'm like, really? Like, ah, you couldn't —
What's the topic of the sensationalism?
I mean, what's underneath it?
What's, if you could sort of educate me
and what kind of bias creeps into the healthcare space?
Yeah, so I mean, you already kind of mentioned.
Yeah, so this one was, the headline was,
racist AI algorithms.
Okay, like, okay, that's totally a clickbait title.
And so you looked at it and so there was data
that these researchers had collected.
I believe — I want to say it was either Science or Nature — it had just been published. But they didn't have the sensational title; it was the media.
And so they had looked at demographics.
I believe between black and white women.
And they showed that there was a discrepancy in the outcomes.
And so it was tied to ethnicity, tied to race.
The piece that the researchers did actually went through the whole analysis. But of course, the journalists — with AI, it's problematic across the board, let's say. And so this is a problem, right? And so there's this thing about, oh, AI, it has all these problems — we're doing it on historical data, and the outcomes are uneven based on gender or ethnicity or age.
But what I am always saying is, like, yes, we need to do better. We need to do better.
It is our duty to do better.
But the worst AI is still better than us.
Like you take the best of us
and we're still worse than the worst AI,
at least in terms of these things.
And that's actually not discussed, right? And so I think that's why the sensational title, right?
And so it's like — so then you can have individuals go, like, oh, we don't need to use this AI. I'm like, oh, no, no, no, no. I want the AI instead of the doctors that provided that data, because it's still better than that, right?
I think it's really important to linger on the idea that this AI is racist. It's like, well, compared to what?
Sort of, I think we set, unfortunately, way too high of a bar for AI algorithms.
And in the ethical space, perfection is probably impossible. So if we set the bar at perfection, essentially — that it has to be perfectly fair, whatever that means — it means we're setting it up for failure.
But that's really important to say what you just said,
which is, well, it's still better than anything.
And one of the things I think that we don't get enough credit
for just in terms of as developers is that you can now poke at it.
So it's harder to say, is this hospital, is this city doing
something until someone brings in a civil case?
Well, with AI, it can process through all this data and say, hey, yes, there
was some, an issue here, but here it is. We've identified it. And then the next step is to
fix it. I mean, that's a nice feedback loop versus like waiting for someone to sue someone
else before it's fixed, right? And so I think that power, we need to capitalize on a little bit more, right? Instead of having the sensational titles, have the, okay, this is a problem.
And this is how we're fixing it.
And people are putting money to fix it because we can make it better.
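As a concrete, hypothetical illustration of that "poke at it" feedback loop — an automated audit that scans decision records and flags groups whose outcomes diverge from the best-served group. The field names, records, and 80%-style threshold are made up for this sketch:

```python
# Hypothetical sketch of the audit idea above: scan decision records, compute
# per-group approval rates, and flag large disparities so they can be fixed.
# Field names, records, and the 0.8 threshold are illustrative only.
from collections import defaultdict

records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    approvals[r["group"]] += int(r["approved"])

rates = {g: approvals[g] / totals[g] for g in totals}
best = max(rates.values())

# Disparate-impact-style check: flag any group whose rate falls below 80% of the best group's.
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Potential disparity: group {group} approval rate {rate:.0%} vs. best {best:.0%}")
```

The property being pointed at in the conversation is exactly this: unlike a hospital or a city, an algorithm's decisions can be replayed and audited in bulk, so an issue can be found and fixed without waiting for a lawsuit.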
I look at, like, facial recognition — how Joy Buolamwini basically called out a couple of companies and said, hey — and most of them were like, oh, embarrassment — and the next time, it had been fixed, right?
It had been fixed better, right?
And then I was like, oh, here's some more issues.
And I think that conversation then moves that needle to having much more fair and unbiased and ethical aspects.
As long as both sides, the developers are willing to say, okay, I hear you.
Yes, we are going to improve.
You have other developers who are like, hey, AI, it's wrong, but I love it, right?
Yes.
So, speaking of this really nice notion that AI is maybe flawed but better than humans — so, just to think of one example of flawed humans: our political system. Do you think — or, you said the judicial system as well — do you have a hope for AI sort of being elected for president, or running our Congress, or being able to be a powerful representative of the people?
So I mentioned, and I truly believe, that this whole world of AI
is in partnerships with people. And so what does that mean? I don't believe or maybe I just don't believe that we should have an AI for president,
but I do believe that a president should use AI as an advisor.
If you think about it, every president has a cabinet of individuals that have different
expertise that they should listen to.
That's what we do. And you put smart people
with smart expertise around certain issues. And you listen, I don't see why AI can't function as
one of those smart individuals giving input. So maybe there's an AI on healthcare, maybe there's
an AI on education. And right, like all these things that a human is processing, right?
Because at the end of the day, there's people that are human,
that are going to be at the end of the decision.
And I don't think as a world, as a culture, as a society, that we would totally believe.
And this is us, like this is some fallacy about us, but we need to see that leader, that person as human.
And most people don't realize that leaders have a whole lot of advice, right? When they
say something, it's not that they woke up, well, usually they don't wake up in the morning
and be like, I have a brilliant idea, right? It's usually a, okay, let me listen. I have
a brilliant idea, but let me get a little bit of feedback on this.
Like, okay.
And then it's a, yeah, that was an awesome idea.
Or it's like, yeah, let me go back.
We already talked through a bunch of them, but are there some possible solutions to the bias that's present in our algorithms, beyond what we just talked about?
So I think there's two paths.
One is to figure out how to systematically
do the feedback and corrections.
So right now it's ad hoc, right?
It's a researcher identifying some outcomes that don't seem to be fair, right? They publish it, they write about it, and either the developer or the companies that have adopted the algorithms may try to fix it, right? And so it's really ad hoc, and it's not systematic.
There's, it's just, it's kind of like,
I'm a researcher, that seems like an interesting problem,
which means that there's a whole lot out there
that's not being looked at,
right? Because it's kind of research-driven. And I don't necessarily have a solution,
but that process, I think, could be done a little bit better. One way is I'm going to poke a little
bit at some of the corporations, right?
Like, maybe the corporations, when they think about a product, they should — instead of, in addition to, hiring these, you know, bug —
Oh yeah, yeah, yeah. Like awards when you find a bug — bounties for security holes.
Yeah, security holes.
Yeah.
You know, let's put it like, we will give whatever the award is that we give for the people who find these security holes: find an ethics hole, right? Like, find an unfairness hole, and we will pay you X for each one you find. I mean, why can't they do that?
One, it's a win-win. They show that they're concerned about it, that this is important, and they don't have to necessarily dedicate their own, like, internal resources. And it also means that everyone who has their own bias lens — I'm interested in age, and so I'll
find the ones based on age, and I'm interested in gender, which means that you get all of these
different perspectives.
But if you think of it in a data-driven way — if we look at a company like Twitter, it's under a lot of fire for discriminating against certain political beliefs.
Correct. And so there's a lot of people, this is the sad thing because I know how hard
the problem is and I know the Twitter folks are working really hard at it. Even Facebook
that everyone seems to hate are working really hard at this. You know, the kind of evidence that people bring
is basically anecdotal evidence.
Well, me or my friend, all we said is X
and for that we got banned.
And that's kind of a discussion of saying,
well, look, that's usually, first of all,
the whole thing is taken out of context.
So they're presented anecdotal evidence.
And how are you supposed to as a company in a healthy way
have a discourse about what is and isn't ethical?
What, how do we make algorithms ethical
when people are just blowing everything out of proportion? Like, they're outraged about a particular anecdotal piece of evidence that's very difficult to
sort of contextualize in the big data-driven way. Do you have a hope for companies like Twitter
and Facebook? Yeah, so I think there's a couple of things going on, right? First off,
remember this whole aspect of we are becoming reliant on technology — we're also becoming reliant on a lot of these apps and the resources that they provide, right? So some of it is kind of anger, like, I need you, right? And you're not working for me, right?
Yeah, it's not working for me.
But I think — and so, some of it, I wish that there was a little bit of a change in rethinking.
So some of it is like, oh, we'll fix it in house.
No, that's like, okay, I'm a fox, and I'm going to watch these hens, because I think it's
a problem that foxes eat hens.
No, right?
Like, use, like, be good citizens and say, look, we have a problem.
And we are willing to open ourselves up for others to come in and look at it.
And not try to fix it in house.
Because if you fix it in house, there's conflict of interest.
If I find something, I'm probably going to want to fix it.
And hopefully, the media won't pick it up, right?
And that then causes distrust, because someone inside is going to be mad at you and go out and talk about how, yeah, they canned the resume screening tool because it's, right, like, biased against people.
Just say, look, we have this issue.
Community help us fix it and we will give you like, you know, the bug finder fee if you do.
Do you have a hope that the community us as a human civilization on the whole is good
and can be trusted to guide the future of our civilization into positive direction?
I think so. So I'm an optimist, right?
And, you know, there were some dark times in history always. I think now we're in one of those dark times. I truly do. In which aspect? The polarization. And it's not just US, right? So
if it was just US, I'd be like, yes, a US thing. But we're seeing it like worldwide, this polarization.
And so I worry about that. But I do fundamentally believe that at the end of the day, people are good.
And why do I say that?
Because anytime there's a scenario where people are in danger, and I will use,
so Atlanta, we had a snowmageddon, and people can laugh about that.
People laughed at the time — the city closed for, you know, a little snow, but it was ice, and the city closed down. But you had people opening up their homes and saying, hey, you have nowhere to go, come to my house, right? Hotels were just saying, like, sleep on the floor. Places like, you know, the grocery stores were like, hey, here's food. There was no, like, oh, how much are you going to pay me?
It was, like, such a community. And, like, people who didn't know each other, strangers, were just like, can I give you a ride home? And that was a point where I was like, you know what — that reveals that the deeper thing is, there is a compassionate love that we all have within us. It's just that when all of that is taken care of and we get bored, we love drama.
Yeah.
And that's, I think, almost like — the division is a sign of the times being good; it's just entertaining, on some unpleasant mammalian level, to watch, to disagree with others. And Twitter and Facebook are actually taking advantage of that, in a sense, because it brings you back to the platform, and they're advertiser-driven, so they make a lot of money. So you go back and you click. Love doesn't sell
quite as well in terms of advertisement.
It doesn't.
So you started your career at NASA's Jet Propulsion Laboratory, but before I ask a few questions there, have you happened to have ever seen Space Odyssey — 2001: A Space Odyssey?
Yes.
Okay. Do you think HAL 9000 — so we're talking about ethics — do you think HAL did the right thing by taking the priority of the mission over the lives of the astronauts? Do you think HAL is good or evil?
Easy questions.
Yeah.
HAL was misguided.
You're one of the people that would be in charge of an algorithm like HAL. So how would you do better?
If you think about what happened was there was no failsafe,
right?
So we, perfection, right?
Like what is that?
I'm gonna make something that I think is perfect,
but if my assumptions are wrong,
it'll be perfect based on the wrong assumptions, right?
That's something that you don't
know until you deploy and then you're like, oh yeah, messed up. But what that means is that when we
design software, such as in space Odyssey, when we put things out, that there has to be a failsafe.
There has to be the ability that, once it's out there, we can grade it as an F, and if it fails, it doesn't continue — there's some way that it can be brought in and removed. And that's the aspect. Because that's what happened with HAL: the assumptions were wrong, it was perfectly correct based on those assumptions, and there was no way to change it, change the assumptions, at all.
And the fallback would be to a human.
So you ultimately think like human should be, you know, it's not turtles or AI all the way down.
It's at some point there's a human that actually makes a difference.
I still think that, and again, because I do human robot interaction, I still think the
human needs to be part of the equation at some point.
So what, just looking back, what are some fascinating things in robotic space that NASA was working
at the time, or just in general, what have you gotten to play with and what are your memories
from working at NASA?
Yeah, so one of my first memories was they were working on a surgical robot system that
could do eye surgery, right?
And this was back in, oh my gosh, it must have been, oh, maybe 92, 93, 94.
So it's like almost like a remote operation over there.
Yeah, it was remote operation.
In fact, you can even find some old tech reports on it.
So think of it — like, now we have da Vinci, right? But this was the early '90s, right? And I remember going into the lab one day, and I was like, what's that, right?
And of course, it wasn't pretty, right?
Cause the technology, but it was like functional
and you had this individual that could use a version of haptics to actually do the surgery,
and they had this mockup of a human face
and like the eyeballs and you can see this little drill. And I was like,
oh, that is so cool. That one I vividly remember because it was so outside of my like possible
thoughts of what could be done. It's the kind of precision and, uh, I mean, what's the most
amazing of a thing like that? I think it was the precision.
It was the kind of first time that I had physically seen
this robot machine human interface, right, versus,
because manufacturing had been,
you saw those kind of big robots, right?
But this was like, oh, this is in a person.
There's a person and a robot, like, in the same space.
For me, meeting them in person — like, for me, it was a magical moment that I can't... it was life-transforming — that I recently met Spot Mini from Boston Dynamics.
Oh, I see.
I don't know why, but on the human robot interaction,
for some reason, I realized how easy it is to anthropomorphize.
And it was, I don't know, it was almost like falling in love with this feeling of meeting.
And I've obviously seen these robots a lot on video and so on. But meeting in person,
just having that one on one time is different. So have you had a robot like that in your life that was made you maybe fall in love with robotics?
Like meeting in person.
I mean, I loved robotics since I was a 12-year-old — like, I'm going to be a roboticist. Actually, I called it cybernetics. But my motivation was the Bionic Woman. I don't know if you know that.
And so, I mean, that was like a seminal moment, but I didn't meet her — like, that was TV, right? It wasn't like I was in the same space and I met her and was like, oh my gosh, you're, like, real.
Just to linger on the Bionic Woman — which, by the way, because I read that about you, I watched a bit of it, and it's just, so, no offense, terrible.
It's cheesy. It's cheesy.
It's cheesy. Now I've seen a couple of reruns lately. But of course, at the time, it captures the imagination. Especially when you're younger, it just captures you. But to what you asked — back then, did you think of it, you mentioned cybernetics, did you think of it as robotics, or did you think of it as almost constructing artificial beings?
Like, is it the intelligent part that captured your fascination
or was it the whole thing, like even just the limbs and just the-
So for me, it would have, in another world,
I probably would have been more of a biomedical engineer
because what fascinated me was the bionic, was the parts, like the bionic parts, the limbs,
those aspects of it. Are you especially drawn to humanoid or human-like robots?
I would say human-like, not humanoid, right? And when I say human-like, I think it's this aspect of that interaction — whether it's social, and it's like a dog, right? Like, that's human-like, because it understands us, it interacts with us at that very social level. And, you know, humanoids are part of that, but only if they interact with us as if we are human.
But just to linger on NASA for a little bit, what do you think maybe if you have other memories, but also what do you think is the future of robots in space?
We mentioned how, but there's incredible robots and NASA's working on in general thinking
about in our, as we venture out, human civilization ventures out into space.
What do you think the future robots is there?
Yeah, so I mean, there's the near term. For example, they just announced the rover that's going to the moon,
which, you know, that's kind of exciting, but that's, like, near term. You know, my favorite, favorite, favorite series is Star Trek, right? You know, I really hope — and even Star Trek, like, if I calculate the years, I wouldn't be alive — but I would really, really love to be in that world, like, even if it's just at the beginning, like, you know, like Voyager, like adventure one.
So basically living in space?
Yeah.
With what robots? What are the robots?
Data — what role... Data would have to be there, even though that wasn't, you know, that was later.
But so Data is a robot that has human-like qualities.
Right.
Without the emotion chip.
Yeah.
You don't like emotion.
Well, so data with the emotion chip was kind of a mess, right?
It took a while for that to adapt.
And so why was that an issue? The issue is that emotions make us irrational
agents. That's the problem. And yet he could think through things even if it was based
on an emotional scenario, right, based on pros and cons. But as soon as you made him emotional,
one of the metrics he used for evaluation
was his own emotions.
Not people around him, right?
Like, and so...
We do that as children, right?
So we're very egocentric when we're young.
We are very egocentric.
And so, isn't that just an early version
of the emotion chip then?
I haven't watched much Star Trek.
Except I have also met adults.
Right?
And so that is a developmental process.
And I'm sure there's a bunch of psychologists that can go through it — like, you can have a 60-year-old adult who has the emotional maturity of a 10-year-old, right?
And so there's various phases that people should go through in order to evolve,
and sometimes you don't.
So how much psychology — it's a topic that's rarely mentioned in robotics — but how much does psychology come into play when you're talking about HRI, human-robot interaction, when you have to have robots that actually interact with humans?
Tons. So, like, my group, as well as I, read a lot in the cognitive science literature, as
well as the psychology literature because they understand a lot about human-human relations
and developmental milestones and things like that. And so we tend to look to see what's been done out there.
Sometimes what we'll do is we'll try to match that to see is that human-human relationship,
the same as human robot.
Sometimes it is and sometimes it's different.
And then when it's different, we have to, we try to figure out, okay, why is it different in this scenario?
But it's the same in the other scenario, right? And so we try to do that quite a bit.
Would you say — if we're looking at the future of human-robot interaction — would you say the psychology piece is the hardest? Like, I mean, it's a funny notion for you, as I don't know if you consider — yeah, I mean, one way to ask it: do you consider yourself a roboticist or a psychologist?
Oh, I consider myself a roboticist that plays the act of a psychologist.
But if you were to look at yourself, sort of, you know, 20, 30 years from now, do you see yourself more and more wearing the psychology hat? Another way to put it is, are the hard problems in human-robot interaction fundamentally psychology, or is it still robotics — the
perception manipulation, planning, all that kind of stuff?
It's actually neither. The hardest part is the adaptation and the interaction. So, it's the interface, it's the learning.
And so, if I think of, like, I've become much more
of a roboticist slash AI person,
than when I — like, originally, again, I was about the bionics. I was an electrical engineer, I was control theory, right?
Like, and then I started realizing that my algorithms needed like human data, right? And
so that was like, okay, what is this human thing? How do I incorporate human data? And then
I realized that human perception had, that there was a lot in terms of how we perceive the
world and so trying to figure out how do I model human perception for my, and so I became
an HRI person, a human-robot interaction person, from being a control theory person, and realizing that humans actually offered quite a bit. And then when you do that, you become more of an artificial intelligence, AI, person. And so I see myself evolving more in this AI world, under the lens of robotics — having hardware, interacting with people.
So, you're a world-class expert researcher in robotics, and yet, others — you know, there's a few, it's a small but fierce community of people — but most of them don't take the journey into the H of HRI, into the human. So why did you brave into the interaction with humans?
It seems like a really hard problem.
It's a hard problem and it's very risky as an academic.
And I knew that when I started down that journey, that it was very risky as an academic
in this world that was nascent, it was just developing.
We didn't even have a conference, right, at the time.
Because it was the interesting problems.
That was what drove me.
It was the fact that I looked at what
interest me in terms of the application space and the problems,
and that pushed me into trying to figure out what people were and what humans were and how
to adapt to them.
If those problems weren't so interesting, I'd probably still be sending rovers to glaciers, right? But the problems were interesting. And the other thing was that they were hard, right?
So it's, I like having to go into a room
and being like, I don't know what to do.
And then going back and saying, okay,
I'm gonna figure this out.
I do not, I'm not driven when I go in and like,
oh, there are no surprises.
Like, I don't find that satisfying.
If that was the case, I'd go someplace
and make a lot more money, right?
I think I stay in academia, and choose to do this, because I can go into a room and be like, that's hard.
Yeah, I think just for my perspective,
maybe you can correct me on it,
but if I just look at the field of AI broadly,
it seems that human robot interaction
has the most — one of the most — number of open problems, especially relative to how many people are willing to acknowledge that there are. Because most people are just afraid of the human, so they don't even acknowledge how many open problems there are. But in terms of difficult problems to solve, exciting spaces, it seems to be incredible
for that.
It is.
And it's exciting.
You mentioned trust before.
From interacting with Autopilot to the medical context, what role does
trust play in the human-robot interaction space?
So some of the things I study in this domain are not just trust, but really
over-trust. How do you think about over-trust? Like, first of all, what
is trust and what is over-trust? Basically, the way I look at it is, trust is
not what you click on a survey.
Trust is about your behavior. So if you interact with the technology based on the decisions
or the actions of the technology, as if you trust those decisions, then you're trusting.
Right? And even in my group, we've done surveys where, you know, on the thing: do you trust robots?
Of course not. Would you follow this robot into a burning building? Of course not. Right?
And then you look at their actions and you're like, clearly, your behavior does not match what you
think, or what you think you would like to think. Right? And so I'm really concerned about
the behavior, because that's really, at the end of the day, when you're in the world,
that's what will impact others around you.
It's not whether, before you went out onto the street, you clicked on, like,
I don't trust self-driving cars.
You know, from an outsider perspective, it's always frustrating to me.
Well, I read a lot,
so I'm an insider in a certain philosophical sense.
It's frustrating to me how often trust is used in surveys,
and how people make claims out of any kind of finding they make about somebody clicking
on an answer. Because trust is, yeah, behavior, just, you said it beautifully. I mean, the action,
your own behavior, is what trust is.
Everything else is not even close.
It's almost like absurd comedic poetry that you weave around your actual behavior.
So some people can say their trust is,
I trust my wife, husband, or not, whatever, but the actions are what speak volumes.
Right, you bugged their car.
Yeah.
You probably don't trust them.
I trust them, I'm just making sure.
No, no, that's, yeah.
It's like, even if you think about cars,
I think it's a beautiful case.
I came here at some point, I'm sure,
on either Uber or Lyft, right?
I remember when it first came out.
I bet if they had had a survey,
would you get in the car with a stranger and pay them?
Yes.
How many people do you think would have said,
like, really?
Wait, even worse, would you get in the car with a stranger
at 1 AM in the morning
to have them drop you home as a single female?
Yeah.
Like, how many people would say,
that's stupid?
Yeah.
And now look at where we are.
I mean, people put their kids in, right?
Like, oh yeah, my child has to go to school,
and, yeah, I'm going to put my kid in this car
with a stranger.
Yeah.
I mean, it's just fascinating how, like, what we think we think
is not necessarily matching our behavior.
And certainly with robots, with autonomous vehicles and all the kinds of robots you work with,
that's, yeah, it's the way you answer it, especially if you've never interacted with that robot
before. If you haven't had the experience, being able to respond correctly on a survey
is impossible. But what role does trust play in the interaction, do you think? I guess,
is it good to trust a robot? What does over-trust mean? Is it good to, kind of, how I
feel about Autopilot currently, which is, like, from a roboticist's perspective, being so very cautious?
Yeah, so this is still an open area of research, but basically what I would like,
in a perfect world, is that people trust the technology when it's working 100
percent, and people will be hypersensitive and identify when it's not.
But of course we're not there.
That's the ideal world.
But what we find is that people swing, right?
They tend to swing, which means that if my first,
and, like, we have some papers on this, like, first impressions
are everything, right?
If my first instance with the technology, with robotics, is positive, it mitigates any risk,
it correlates with, like, best outcomes.
It means that I'm more likely to either not see it when it makes a mistake or faults,
or I'm more likely to forgive it.
And so this is a problem, because technology is
not 100% accurate, right?
It's not 100% accurate, although it may be near perfect.
How do you get that first moment right, do you think?
There's also an education aspect about the capabilities
and limitations of the system.
Do you have a sense of how you educate people correctly
in that first interaction?
Again, this is an open-ended problem. So one of the studies that actually has given me some hope,
that I'm trying to figure out how to bring into robotics: there was a research study that
showed, for medical AI systems giving information to radiologists, you know, here, you need to look at these
areas on the X-ray.
What they found was that when the system provided one choice, there was this aspect of either no trust or over-trust, right? Like,
I don't believe it at all, or yes, yes, yes, yes,
and they would miss things, right?
Instead, when the system gave them multiple choices,
like, here are the top three, even if it knew, like, you know,
it had estimated that the top area you need to look at was, you know, some place on the X-ray, if it gave, like, one plus others, the trust was maintained and the
accuracy of the entire population increased. Right? So basically, you're still trusting
the system, but you're also putting in a little bit of, like, your human expertise,
your human decision processing, into the equation. So it helps to mitigate that over-trust risk.
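To make that "one plus others" idea concrete, here is a minimal sketch in Python of surfacing the top few candidate regions with scores instead of a single answer; the function names, region labels, and scores are invented for illustration and are not from the study she mentions.

    def rank_regions(scores, k=3):
        # Return the k highest-scoring (region, score) pairs, best first.
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

    def present_to_radiologist(scores, k=3):
        # Show several suggestions, not just the single top one, so the
        # clinician keeps exercising their own judgment.
        for region, score in rank_regions(scores, k):
            print(f"consider {region}: model score {score:.2f}")

    # Fabricated per-region scores from a hypothetical X-ray model.
    example_scores = {"upper-left lobe": 0.81, "lower-right lobe": 0.34,
                      "cardiac border": 0.22, "clavicle": 0.05}
    present_to_radiologist(example_scores)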
Yeah. So there's a fascinating balance to strike.
Yeah. I haven't figured it out.
Right. It's still an open area of research. Exactly.
So what are some exciting applications of human-robot interaction?
You started a company, maybe you can talk about the exciting efforts there, but in general,
in what other spaces can robots interact with humans and help?
Yeah, so besides healthcare, because that's my biased lens.
My other biased lens is education. I think that, well, one, we definitely, in the US,
we're doing okay with teachers,
but there's a lot of school districts
that don't have enough teachers.
If you think about the teacher student ratio
for at least public education, in some districts,
it's crazy.
It's like, how can you have learning in that classroom, right?
Because you just don't have the human capital.
And so if you think about robotics, bringing that
in to classrooms as well as the after-school space,
where they offset some of this lack of resources
in certain communities, I think that's a good place.
And then, turning to the other end, is using these systems
for workforce retraining and dealing with some of the things
that are going to come out later on from job loss,
like thinking about robots and AI systems
for retraining and workforce development.
I think those are exciting areas that can be pushed even more, and it would have a huge, huge impact.
What would you say are some of the open problems in education?
It's exciting.
So young kids and the older folks, or just folks of all ages who need to be
retrained, who need to sort of open themselves up to a whole other area of work. What are
the problems to be solved there? How do you think robots can help?
We have the engagement aspect, right? So we can figure out the engagement. That's not a... What do you mean by engagement?
So identifying whether a person is focused,
that, like, we can figure out.
What we haven't fully figured out,
though there are some positive results in this,
is personalized adaptation
based on any concept, right?
So imagine, I think about, I have an agent and I'm working with a kid learning, I don't
know, algebra two.
Can that same agent then switch and teach some type of new coding skill
to a displaced mechanic?
Like, what does that actually look like, right?
Like, the hardware might be the same,
the content is different, two different target demographics
of engagement. Like, how do you do that?
How important do you think personalization
is in human-robot interaction? And not just to the mechanic or the student, but, like, literally to the individual
human being? I think personalization is really important, but a caveat is that I think we'd be okay
if we can personalize to the group, right? And so if I can label you
along some certain dimensions,
then even though it may not be you specifically,
I can put you in this group where,
for that sample size, this is how they best learn,
this is how they best engage.
Even at that level, it's really important.
And it's because, I mean, it's one of the reasons why educating in large classrooms is so hard.
You teach to the median, but there are these individuals that are struggling,
and then you have highly intelligent individuals,
and those are the ones that are usually left out.
The highly intelligent individuals may be disruptive,
and those who are struggling might be disruptive,
because they're both bored.
Yeah, and if you narrow the definition of the group
or the size of the group enough,
you'll be able to address their individual...
Yes.
Individual needs, but really, most importantly, group needs.
Right.
And that's kind of what a lot of successful recommender systems do, Spotify and so on. It's sad to believe, but as a music listener,
I'm probably in some sort of large group. It's very, very sadly predictable. You have been labeled.
Yeah, I've been labeled, and successfully so, because they're able to recommend stuff that I like. Yeah,
but applying that to education, right? There's no reason why it can't be done.
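As a toy illustration of "personalizing to the group" rather than to the individual, here is a minimal Python sketch that clusters learners along a few dimensions and picks a teaching strategy per cluster; the feature names, cluster count, and strategies are invented assumptions, not anything from her lab's work.

    import numpy as np
    from sklearn.cluster import KMeans

    # Each row is one learner: [prior knowledge, engagement, preferred pace], all 0..1.
    learners = np.array([
        [0.2, 0.9, 0.3],
        [0.8, 0.4, 0.7],
        [0.3, 0.8, 0.4],
        [0.9, 0.5, 0.8],
    ])

    # Label each learner with a group instead of modeling them individually.
    groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(learners)

    # Hypothetical per-group strategy, standing in for "this is how they best learn."
    strategy = {0: "more scaffolding, frequent check-ins",
                1: "faster pace, open-ended projects"}
    for i, g in enumerate(groups):
        print(f"learner {i} -> group {g}: {strategy[g]}")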
Do you have hope for our education system?
I have more hope for workforce development.
Yeah.
And that's because I'm seeing investments.
Even if you look at VC investments in education,
the majority of it has lately been going
to workforce retraining.
I think that government investment is increasing.
There's a claim, some of it based on fear,
like, AI is going to come and take over all these jobs,
what are we going to do with all of these taxes that aren't coming in from our citizens?
I'm more hopeful for that.
Not so hopeful for early education,
because it's still a question of who's going to pay for it,
and you won't see the results for, like, 16 to 18 years.
It's hard for people to wrap their heads around that.
But on the retraining part, what are your thoughts?
There's a candidate, Andrew Yang, running for president, talking about AI, automation, robots,
and universal basic income.
Universal basic income in order to support us as, kind of, automation takes people's
jobs and allows you to explore and find
other meaning. Do you have a concern about the society-transforming effects of automation and robots
and so on?
I do.
I do know that AI and robotics will displace workers. Like, we do know that, but there will be other jobs, new jobs, that will be defined.
That's not what I worry about.
Like, will all the jobs go away?
What I worry about is the type of jobs that will come out.
Right?
Like, people who graduate from Georgia Tech will be okay.
We give them the skills.
They will adapt even if their current job goes away.
I do worry about those that don't have that quality of an education, right?
Will they have the ability, the background, to adapt to those new jobs?
That I don't know, that I worry about, which will create even more polarization in our society, internationally,
and everywhere.
I worry about that.
I also worry about not having equal access to all these wonderful things that AI can do
and robotics can do.
I worry about that.
People like me, from Georgia Tech, from, say, MIT, will be okay.
But that's such a small part of the population
that we need to think much more globally
about having access to the beautiful things,
whether it's AI in healthcare, AI in education,
AI in politics, right?
I worry about that.
And that's part of the thing that you were talking about: the people that build the technology have to be thinking about ethics, have to be thinking about access, and all those things, and not just a small, small subset.
Let me ask some philosophical, slightly romantic questions. People listening to this will be like, here he goes again. Okay.
Do you think one day we'll build an AI system that a person can fall in love with,
and it would love them back?
Like in the movie Her, for example.
Oh, yeah.
Although she kind of didn't fall in love with him,
or she fell in love with like a million other people,
something like that.
So you're the jealous type, I see. Humans are the jealous type. Yes. So I do believe that we can
design systems where people would fall in love with their robot, with their AI partner.
That I do believe. Because, it's actually, and I don't like to use the word
manipulate, but as we see, there are certain individuals
that can be manipulated if you understand
the cognitive science behind it, right?
Right, so, I mean, you could think of all close
relationships and love in general as a kind of mutual
manipulation, that dance, the human dance.
I mean, manipulation just has negative connotations.
And that's why I don't like to use that word particularly.
I guess another way to phrase what you're getting at is,
it could be algorithmized or something.
It could be...
The relationship-building part, you can do.
I mean, just think about it.
I don't use dating sites,
but from what I've heard,
there are some individuals that have been dating
and have never seen each other, right?
In fact, there's a show, I think, that tries to weed out
fake people, like, there's a show that comes on, right?
Because people start faking.
Like, what's the difference between that person on the other end
being an AI agent, right?
Having a communication,
building a relationship remotely?
Like, there's no reason why that can't happen.
In terms of human-robot interaction,
you've kind of mentioned that emotion
can be problematic if not implemented well, I suppose.
What role do emotion and some other human-like
things, imperfect things, come to play here for good human-robot interaction and something
like love? Yeah, so in this case, and you had asked, can an AI agent love a human back? I think they
can emulate love back, right? And so what does that actually mean?
It just means that, if you think about their programming,
they might put the other person's needs
in front of theirs in certain situations, right?
Think about it as return on investment.
Like, what's my return on investment?
As part of that equation, that person's happiness,
you know, has some type of, you know,
algorithmic weighting to it. And the reason why is because I care about them.
That's the only reason.
But if I care about them and I show that,
then my final objective function
is length of time of the engagement.
So you can think of how to do this, actually, quite easily.
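As a toy rendering of the kind of objective she is gesturing at, here is a minimal Python sketch where the partner's happiness carries an explicit weight in the per-step reward and the overall objective accumulates over the length of the engagement; the weights and numbers are made up for illustration.

    def step_reward(agent_benefit, partner_happiness, care_weight=0.7):
        # "Caring" = the partner's happiness gets real weight in the agent's return on investment.
        return (1 - care_weight) * agent_benefit + care_weight * partner_happiness

    def engagement_objective(trajectory, care_weight=0.7):
        # Final objective: reward summed over however long the engagement lasts,
        # so keeping the partner happy for longer scores higher.
        return sum(step_reward(a, p, care_weight) for a, p in trajectory)

    # Fabricated (agent_benefit, partner_happiness) pairs over time.
    print(engagement_objective([(0.2, 0.9), (0.1, 0.8), (0.3, 0.7)]))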
But that's not love?
Well, so that's the thing.
I think it emulates love, because we don't have a classical definition of love.
Right, and we don't have the ability to look into each other's minds to see the algorithm.
And I guess what I'm getting at is, is it possible that, especially if that's learned,
especially if there's some mystery and black-box nature to the system,
how is that, you know, how is it any different?
How is it any different,
in terms of, sort of, if the system says, I'm conscious, I'm afraid of death, and it does indicate that it loves you?
Another way to sort of phrase it, I'd be curious to see what you think:
do you think there'll be a time when robots should have rights?
You've kind of phrased the robot in a very roboticist way, and just in a really good way, saying,
okay, well, there's an objective function, and I can see how you can create a compelling human-robot interaction experience
that makes you believe that the robot cares for your needs and even something like loves
you. But what if the robot says, please don't turn me off? What if the robot starts making
you feel like there's an entity, a being, a soul
there, right? Do you think there'll be a future, hopefully you won't laugh too much at this,
where they do ask for rights? So I can see a future, if we don't address it in the near term, where these agents, as they adapt and learn, could say,
hey, this should be something that's fundamental. I hopefully think that we would address it before
it gets to that point. You think that's a bad future? Is that a negative thing, where they ask, or where they're being discriminated against?
I guess it depends on what role they have
attained at that point, right?
And so if I think about now...
Careful what you say, because the robots 50 years
from now will be listening to this,
and you'll be on TV saying,
this is what roboticists used to believe.
Right, and so this is my, as I said,
I have a biased lens.
And my robot friends will understand that.
But so, if you think about it, and I actually put this in kind
of this way: as a roboticist, you don't necessarily
think of robots as humans with human rights.
But you could think of them either in the category
of property, or you could
think of them in the category of animals, right?
And so both of those have different types of rights.
So animals have their own rights as living beings, but, you know, they can't vote, they
can't write, they can be euthanized.
But as humans, if we abuse them, we go to jail.
So they do have some rights that protect them,
but don't give them the rights of citizenship.
And then if you think about property,
the rights are associated with the person.
So if someone vandalizes your property or
steals your property, like, there are some rights, but they're associated with the person who owns that property.
If you think about it,
back in the day, and if you remember, we talked about, you know, how society has changed: women were property, right?
They were not thought of as having rights.
They were thought of as the property of,
like, their...
Yeah, assaulting a woman meant assaulting the property,
the possession, of somebody else.
Exactly.
And so what I envision is that we will establish
some type of norm at some point,
but that it might evolve, right?
Like, if you look at women's rights now,
like, there are still some countries that don't have them,
and the rest of the world is like,
why? That makes no sense, right?
And so I do see a world where we do establish
some type of grounding.
It might be based on property rights,
it might be based on animal rights.
And if it evolves that way, I think we will have this conversation at that time, because
that's the way our society traditionally has evolved.
Beautiful.
But just out of curiosity: Anki, Jibo, Mayfield Robotics
with the robot Kuri, I-F-Out Works, Rethink Robotics, were all these amazing
robotics companies created by incredible roboticists,
and they all went out of business recently.
Why do you think they didn't last longer?
Why is it so hard to run a robotics company,
especially one like these, which are
fundamentally HRI, human-robot interaction robots? Yeah, each one has a story. Only one of
them I don't understand, and that was Anki. That's actually the only one I don't understand.
I don't understand it either. I mean, I look at it from the outside,
I've looked at their sheets,
the data that's...
Oh, you mean business-wise?
Yeah.
I gotcha.
Yeah.
I look at that data,
and I'm like, they seem to have product-market fit.
Yeah.
So that's the only one I don't understand.
The rest of it was product-market fit.
What's product-market fit, if it's okay to ask? How do you think about it?
If it's just that of the cut, do you think about it?
Yeah, so although we think robotics was getting there, right?
But I think it's just the timing.
It just, their clock just timed out.
I think if they had been given a couple more years,
they would have been okay.
But the other ones were still fairly early by the time they got into
the market. And so product market fit is I have a product that I want to sell at a certain price.
Are there enough people out there, the market, that are willing to buy the product at that market
price for me to be a functional, viable, profit-bearing company.
So product market fit.
If it costs you $1,000 and everyone wants it
and only is willing to pay a dollar,
you have no product market fit.
Even if you could sell it for, you know,
it's enough for a dollar, because you can't.
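As a back-of-the-envelope sketch of that product-market-fit check, here is a small Python snippet; the costs, prices, and buyer counts are invented numbers purely for illustration.

    def viable(unit_cost, price, willing_buyers, fixed_costs):
        # No fit if buyers will only pay less than it costs to make each unit.
        if price < unit_cost:
            return False
        return willing_buyers * (price - unit_cost) - fixed_costs > 0

    # A $1,000-to-make product that people will only pay $1 for: no fit.
    print(viable(unit_cost=1000, price=1, willing_buyers=1_000_000, fixed_costs=5_000_000))
    # A $300-to-make product sold at $900 to 50,000 buyers: fit.
    print(viable(unit_cost=300, price=900, willing_buyers=50_000, fixed_costs=10_000_000))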
So how hard is it for robots?
So maybe if you look at iRobot,
the company that makes
Roombas, the vacuum cleaners, can you comment on, did they
find the right product, the market-product fit?
Whether people are willing to pay for robots is also another kind
of question.
So if you think about iRobot and their story, right?
Like, when they first started, they had enough of a runway, right? When they first started,
they weren't doing vacuum cleaners, right? They were military, they were contracts, primarily
government contracts, designing robots. Yeah, I mean, that's what they were. That's how
they started, right? And they still do a lot of incredible work there. But yeah, that was
the initial thing that gave them a lot of funding to then try to... The vacuum cleaner, from what I've been told,
was not, like, their first rendezvous
in terms of designing a product, right?
And so they were able to survive until they got to the point
that they found a product-price-market fit, right?
And even with, if you look at the Roomba,
the price point now is different than when it was first released, right? It was an early-adopter price.
But they found enough people who were willing to fund it, and, I mean,
I forgot what their loss profile was for the first couple of, you know, years,
but they became profitable in sufficient time that they didn't have to close their doors.
So they found the right...
There are still people willing to pay a large amount of money,
so over $1,000, for a vacuum cleaner.
Unfortunately for them,
now that they've proved everything out
and figured it all out, now there are some competitors.
Yeah, and so that's the next thing, right?
The competition, and they have quite a number,
even internationally. Like, there are some products out there, you can go to Europe and be like,
oh, I didn't even know this one existed.
So this is the thing, though, like with any market.
I would say this is not a bad time, although as a roboticist, it's kind of depressing.
But I actually think about things like... I would say that all
of the companies that are now in the top five or six, they weren't the first
to the stage, right? Like, Google was not the first search engine, sorry,
AltaVista, right? Facebook was not the first, sorry, MySpace.
Like, think about it, they were not the first players.
Those first players, like, they're not in the top
five, ten of Fortune 500 companies, right?
They started to prove out the market.
They started to get people interested.
They started the buzz, but they didn't
make it to that next level. But the second batch, right? The second batch, I think, might make
it to the next level. When do you think the Facebook of robotics...
The Facebook of robotics. Sorry, I take that phrase back, because people deeply, for some reason,
well, I know why, but I think it's exaggerated, distrust Facebook because of the privacy
concerns and so on. And with robotics, one of the things you have to make sure of, all
the things we talked about, is to be transparent and have people deeply trust you to let
a robot into their lives, into their home. When do you think the second batch of robots will come?
Is it five, ten years, twenty years from now,
that we will have robots in our homes and robots in our hearts?
So if I think about it,
because I try to follow the VC kind of space
in terms of robotic investments,
right now, I don't know if they're going to be successful.
I don't know if this is the second batch, or if there's really only been one batch,
like the first batch.
And then there's all these self-driving X's.
And so I don't know if they're a first batch of something, or, like, I don't know quite
where they fit in.
But there's a number of companies, the co-robots, I call them co-robots, that are still getting
VC investments.
Some of them have some of the flavor of, like, Rethink Robotics, some of them have some
of the flavor of, like, Kuri.
What's a co-robot?
So basically a robot and human working in the same space. So some of the companies are focused on manufacturing.
So having a robot and human working together in a factory,
some of these co-robots are robots and humans working
in the home, working in clinics.
Like there's different versions of these companies
in terms of their products.
But they're all... So Rethink Robotics would be,
like, one of the first, at least well-known, companies
focused on this space.
So I don't know if this is a second batch
or if this is still part of the first batch,
that I don't know, and then you have all these other companies
in this self-driving space,
and I don't know if that's a first batch
or again a second batch.
Yeah, so there's a lot of mystery about this now.
Of course, it's hard to say that this is the second batch
until it, you know, it proves out, right?
Correct.
Yeah, we need a unicorn.
Yeah, exactly.
Why do you think people are so afraid,
at least in popular culture, of legged robots, like those worked on at Boston Dynamics, or just robotics in general? If you were to psychoanalyze that fear, what do you make of it, and should they be afraid?
Sorry, so should people be afraid? I don't think people should be afraid,
but with a caveat. I don't think people should be afraid, given that
most of us in this world understand that we need to change something, right?
What is the dimension of change that's needed?
So changing, thinking about the ramifications,
thinking about, like, the ethics, thinking about,
like, the conversations that are going on, right?
It's no longer a, we're gonna deploy it
and forget that, you know, this is a car
that can kill pedestrians that are walking across the street,
right?
We're not in that state.
We're putting these out on roads.
There are people out there.
A car could be a weapon. Like, people are now, the solutions aren't there yet,
but people are thinking about this: we need to be ethically responsible as we send these systems
out, robotics, medical, self-driving. And military, too. And military.
It's not as often talked about, but it's really where these robots will probably have a significant
impact as well.
Correct.
Right.
Making sure that they can think rationally, even having the conversations about who should pull
the trigger, right?
But overall, you're saying if we start to think more and more as a community about these
ethical issues, people should not be afraid.
Yeah, I don't think people should be afraid. I think that the return on investment, the
impact, positive impact will outweigh any of the potentially negative impacts.
Do you have worries of existential threats from robots or AI that some people kind of
talk about and romanticize, in the next decade, the next few decades?
No, I don't.
The singularity would be an example.
So my concept is that, so remember, robots, AI, are designed by people.
Yes.
It has our values.
And I always correlate this with a parent and a child.
Right, so think about it.
As a parent, what do we want?
We want our kids to have a better life than us.
We want them to expand.
We want them to experience the world.
And then as we grow older, our kids think and know
they're smarter and better and more intelligent
and have better opportunities.
And they may even stop listening to us.
They don't go out and then kill us.
Like, think about it, it's because we've instilled in them values.
We instilled in them this whole aspect of community.
And yes, even though you're maybe smarter and have more money and da-da-da,
it's still about this loving, caring relationship.
And so that's what I believe.
So even if we've created the singularity in some archaic system back in 1980 that
suddenly evolves, the fact is it might say, I am smarter, I am sentient.
These humans are really stupid. But I think it will be like, yeah, but I just can't destroy
them.
Yeah, for sentimental value.
It'll still come back for Thanksgiving dinner every once in a while.
Exactly.
This is so beautifully put.
You've also said that The Matrix may be one of your favorite AI-related movies.
Can you elaborate why?
Yeah, it is one of my favorite movies.
And it's because it represents kind of all the things I think about.
So there's a symbiotic relationship between robots and humans, right?
That symbiotic relationship is that they don't destroy us.
They enslave us, right?
But think about it. Even though they enslaved us, they needed us to be happy,
right? And in order for us to be happy, they had to create this cruddy world that they then had to live in,
right? That's the whole premise. But then there were humans that had a choice, right? Like, you had
a choice to stay in this horrific, horrific world where it was your
fantasy of life with all of the amenities, perfection, but not accurate, or you could choose
to be on your own and, like, have maybe no food for a couple of days, but you were totally
autonomous.
So it's not necessarily us being enslaved,
but I think about us having the symbiotic relationship,
robots and AI. Even if they become sentient,
they're still part of our society,
and they will suffer just as much as we do,
along with us.
And there will be some kind of equilibrium
that we'll have to find, some symbiotic relationship.
And then you have the ethicists, the robotics folks that are like, no, this has
got to stop. I will take the other pill in order to make a difference. So if you could hang out
for a day with a robot, real or from science fiction, movies, books, safely, and get to pick
his or her, their, brain, who would you pick?
I gotta say it's Data. Data. I was gonna say Rosie, but I'm not really interested in her brain.
I'm interested in Data's brain. Data, pre- or post-emotion chip? Pre.
But don't you think it'd be a more interesting conversation?
Post-emotion chip?
Yeah, it would be drama.
And I, you know, I'm human.
I deal with drama all the time.
But the reason why I want to pick Data's brain is because I could have a conversation
with him and ask, for example, how can we fix this ethics problem, right?
And he could go through like the rational thinking and through that, he could also help
me think through it as well.
And so that's, there's like these questions, fundamental questions, I think I can ask
him that he would help me also learn from.
And that fascinates me.
I don't think there's a better place to end it.
Yeah, thank you so much for talking today, Ayanna.
Thank you. Thank you. This was fun.
Thanks for listening to this conversation.
And thank you to our presenting sponsor, Cash App.
Download it, use code LexPodcast. You'll get $10, and $10 will go to FIRST,
a STEM education nonprofit that inspires hundreds of thousands of young minds to become future
leaders and innovators.
If you enjoy this podcast, subscribe on YouTube, give it 5 stars on Apple Podcasts, follow
on Spotify, support it on Patreon, or simply connect with me on Twitter.
And now, let me leave you with some words of wisdom from Arthur C. Clarke.
Whether we are based on carbon or on silicon, makes no fundamental difference.
We should each be treated with appropriate respect.
Thank you for listening and hope to see you next time.