Embedded - 207: I Love My Robot Monkey Head
Episode Date: July 5, 2017

Professor Ayanna Howard of Georgia Tech joins us to talk about robotics, including how androids interact with humans. Some of her favorite robots include the Darwin, the Nao, and, for home-hacking, the Darwin Mini. Ayanna has a profile on EngineerGirl.org, a site that lets young women ask questions of women in the engineering profession. Elecia has been working on a typing robot named Ty, documented on the Embedded.fm blog. It uses a MeArm, on sale in July 2017 at Hackaday.com, with coupon noted in show (don't use PayPal to check out or you can't apply the coupon). Other robots for trying out robots: Lego Mindstorms (lots of books, project ideas, and incredible online tutorials!), Cozmobot, Dash and Dot. Some robotics competition leagues include Vex, Botball, and FIRST.
Transcript
Welcome to Embedded. I'm Elecia White. My co-host is Christopher White.
You know, I currently have a crush on all things robotics, right?
Imagine how pleased I am to have Professor Ayanna Howard with us to talk about that very subject. Before we talk to Ayanna, I want to mention that
our friends at Hackaday, having heard about my robot arm obsession, are offering a discount for
the MeArm. That's the little cheap arm that I'm using for my typing robot. The coupon code is
awesomeembedded, all as one word. You can of course email me to get that if you don't remember.
But yes, let everybody make robot arms do incredible things.
Hello, Ayanna. It's nice to talk to you today.
Thank you. I'm pretty excited about being here.
Could you tell us a bit about yourself?
So I'm Ayanna Howard. At the end of the day, I call myself a roboticist.
My function right now is I'm a professor in the School of Electrical and Computer Engineering at Georgia Tech.
And what is your favorite class that you teach there?
So actually, my favorite class right now is bioethics and biotechnology, which is
totally different than my normal.
All right.
That sounds fascinating.
It does.
We should ask more about that.
Before we get down into the details, I want to play the game lightning round where we
ask you short questions and we want short answers.
And if we're behaving ourselves, we don't ask why and how and can you tell us more until the end.
Okay.
Favorite movie or book, fiction, which you encountered for the first time in the last year.
Oh, in the last year.
Favorite movie would probably be... oh, the last year. Oh, see, I'm not being quick on this.
That's okay. No, it's not like there's actual rules here. You won't be struck by lightning.
Yeah, because I've seen a lot. I mean, I like the, okay, I will say it, the last Star Trek movie. But I don't know if that came out in the last year. It was like July 4th, maybe.
Close enough.
Okay.
Wait a minute.
This is Force Awakens, not Rogue One.
Star Trek.
Star Trek.
Oh, oh.
Jeez.
You're off the podcast.
Yeah, it came out like a year ago, like July 4th, I think.
Yeah. All right.
Okay.
Favorite fictional robot?
Rosie.
Preferred voltage?
Oh, five volts.
Wheeled robots or arms?
Arms.
Oh, sorry.
Are you playing, Chris?
It was such a lightning answer that I was expecting more.
I responded quickly that time.
You did, you did. It caught you off guard.
It caught me off guard.
A technical tip you think everyone should know.
Coding is fun.
I don't know if that's a tip, but.
All right.
If you could only do one, would you choose research or teaching?
Research.
Okay.
Tell me about your research.
This doesn't have to be short.
Okay.
So I guess my favorite body of research right now is designing robots to engage with children with special needs in the home environment, primarily for therapy and some with respect to education.
I love it because I'm a lifelong learner. And so one of the things I have to actually learn is about behaviors and people and, you know, how children learn and how they react to robots.
So I'm excited about that because it's just new territory that I haven't explored before.
So I'm learning as I'm going.
And then the reward of having a robot interacting with a child who might not have been exposed to this type of technology
before is just, it's so amazing. It's like, you know, opening up that Christmas gift, you know,
every year, it's like, oh, what is it? And so I get that kind of really psyched reaction when I'm
in doing this type of research. And are the children, are the children excited because
it's a robot, or is it because they have special needs and somebody is finally patient enough?
I think it's a combination.
I think, one, it's all kids are engaged by robots.
In fact, a lot of adults are too, right?
So that's like, oh, that's a gimme.
But I think it's also having that access.
So what happens is a lot of technology, as I say, a lot of technology is not accessible.
It's not made for everyone. It's only made for a certain slot of folks. And so I think just having
access to something that's different, and it doesn't necessarily, it's not just because it's
a robot. It's just that it's something different that, hey, I've heard about this kind of thing
and this stuff and robotics. So I think it's that access as well.
I can see that. Do the kids interact with it like they would? I've said this before,
Big Hero 6's Baymax, where it's a cuddly thing and they interact with it as though it was a person,
or is it, it's a robot and they're interacting with it, I don't know, as a coding or a logic sort of way?
No. So they're interacting with it as if it's a playmate, even though the robots we use are
not cuddly. I mean, they're definitely, they're still made out of metal, but they're interacting
with it as if it has a personality. I mean, because we program the robots with personality,
so maybe that's a given. But they act as if it's a playmate, like it's alive and
it understands them and it can react based on, you know, their current state. It's like a person,
it's like a live being. What do the robots that you're using for this research look like?
Are they anthropomorphic? They are. They're humanoids about the size of a toddler,
so not that big. We typically have them on the table next to the child.
They have arms, they have legs. We don't really make it walk necessarily. Most of our interaction
is with the hands or with the arms, with the feet, but more of like,
if you think about being happy, you kind of go up and down, like bounce. And so we use legs that way,
not necessarily walking, but to exhibit, you know, an emotional happy state, for example.
I don't know whether to go into the emotional happy state or all of these joints and how they
work together. Okay. Do you want to?
Let's go with the joints because, I mean, so that isn't easy. Getting a robot to move multiple
joints, I am learning, is a little difficult.
And not fall over?
Yes. No, there's been some falling over. But getting it all to work together so that it creates a physical embodiment of an
emotion, what are the technical things I would need to know to build something like that?
So one of the things we do before we even start coding the robot is we create a kinematic model
of the robot. So we look at, and if you think about the joints and the links
between joints, we pair them up. So think of it as, I would think of it as a stick figure. So
remember when you were young and you wrote, you drew these stick figures with like little balls
for the hands and little balls for the elbows and little balls for the shoulders. And you had these
sticks that would go in between. So that's basically a kinematic chain, but there's equations associated with it. So we create that kinematic
chain for the robotic system, and we compute the math in terms of, okay, we want to go from point A,
what does that look like in terms of your joint angles? We want to go to point B, what does that
look like for the joint angles? And so what is the trajectory to go from A to B without, say,
getting into a singularity or falling over or going into some kind of weird contraption? Like,
yeah, that just doesn't look right. So that's kind of the math behind it. But that doesn't
actually always work. Because then in the real world, you know, we don't incorporate things like,
you know, friction and the fact that, you know, the air actually applies some type of draft sometimes.
And so in the real world, we take that as the model is just more of a, I would say,
a very kind indication of what it should do. And then we start tweaking it based on the real world.
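The kinematic chain Ayanna describes can be made concrete with a two-link planar arm. This Python sketch (an illustration, not code from her lab) is the forward half of the math: given the joint angles, where does the hand end up?

```python
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """End-effector (x, y) of a two-link planar arm.

    l1, l2: link lengths (the "sticks" of the stick figure)
    theta1: shoulder angle from the x-axis, in radians
    theta2: elbow angle relative to link 1, in radians
    """
    # Elbow position: rotate link 1 about the shoulder.
    elbow_x = l1 * math.cos(theta1)
    elbow_y = l1 * math.sin(theta1)
    # Hand position: add link 2, rotated by the summed joint angles.
    x = elbow_x + l2 * math.cos(theta1 + theta2)
    y = elbow_y + l2 * math.sin(theta1 + theta2)
    return x, y
```

Planning a trajectory from point A to point B then means finding a sequence of joint angles whose forward kinematics trace the path you want, which is the inverse problem and where singularities show up.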
So what level of mathematics does someone need to know to understand that? Is this something that's,
okay, you've got to be really, really good in trigonometry, but you don't necessarily have to
know a lot of physics, or is it you've got to know all of that stuff?
No, I would say there's elements of it. If you had, and I would say if you had basic calculus,
because calculus has some of these elements that you see again in physics, as well as algebra two and trigonometry. So I would say basic understanding of calculus, calc one.
You can do this math.
If you get more advanced,
it becomes easier in terms of thinking about it.
And then you can add things like,
what happens if I add gravitational effects to it?
And what happens if I add frictional effects to it?
But at the basic level, calc one is good.
Yes, but you have to remember that the law of cosines is a thing
because I did not remember that.
I didn't get that.
It wasn't just Pythagorean theorem and trigonometry.
I had to have a little more.
I didn't get law of cosines until later, yeah.
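The law of cosines is exactly what the inverse problem for a two-link arm needs: given a target point, it yields the elbow angle directly. A minimal Python sketch (my own illustration of the standard result, not code from the show):

```python
import math

def inverse_kinematics(l1, l2, x, y):
    """Joint angles (theta1, theta2) placing a two-link arm's tip at (x, y).

    Returns the elbow-down solution; raises ValueError if unreachable.
    """
    d2 = x * x + y * y  # squared distance from shoulder to target
    # Law of cosines across the shoulder-elbow-target triangle gives
    # cos(theta2) in terms of the link lengths and the target distance.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)  # elbow angle
    # Shoulder: angle to the target, minus the offset link 2 introduces.
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

The `ValueError` branch is where unreachable targets (and, at `c2 = ±1`, the singular fully stretched or folded configurations) announce themselves.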
Well, so the thing about this, and this is my,
I would say my beef about education a lot of times,
is that this math is really useful.
It really is.
But the whole thing is that you don't learn its application.
So it's like, what is this?
Why am I doing this?
Why am I inversing?
I understand nothing.
But then when you're doing it in the real world with robotics, you're like, oh, now
I understand why this doesn't quite work.
Now I understand why you can't divide by zero.
Oh, get it. Got it, I'm good.
So that's why I love robotics.
It really teaches you the value of all that stuff you learned
in terms of the theory world.
It really puts a lot of stuff together.
I mean, there's the software, there's the math,
there's the physics, the electronics, the mechanical.
It just, yes, robotics is, robotics is awesome.
And if you put cameras on, you've got optical.
And machine vision and machine learning.
Machine vision and learning and even cognitive science and the social sciences as well.
Kinematic models.
I have done a kinematic model for my arm.
Here comes the do my homework question.
All right. And I did it using the Robot Operating System, which I'm very new to, but it had a kinematic modeler.
And Chris is right.
I have a bug in my device because I don't understand transmission linkages.
And if anybody wants to solve that for me, I'll put it in the show notes, but I'll put a link in the show notes.
But what do you use for kinematic modeling?
We use something called MATLAB.
Oh, I see.
Really?
Okay.
Because we don't, so the way we code, again, the MATLAB is really to do what we call back-of-the-envelope calculations.
We don't use the code that comes from that directly on the
robotic system. We then program our own methodologies. Sometimes we use ROS, depending on
what we want to do. Sometimes we just start from scratch. Sometimes if we're using a commercial
robot, for example, they'll have their own language already done. And so we'll use whatever
coding language they have in their libraries.
So ROS is one of them, but it is one of many. Is it a good one?
Given that I know some of the folks that work and started that.
And this is public. I think the only answer is yes, it's great.
Yes. You know what, though? I would say the one thing that ROS did
that was amazing was that it is a language that a lot of people can use. And so I might not know
vision, for example, or like, oh, what am I supposed to do? How am I supposed to do vision?
Or how am I supposed to do the SLAM thing? And the modules are there. And so if you want to be an expert in, you know, I want to be an expert in controls. I don't really want to learn perception and vision processing. It's okay. And ROS allows you to do that. And so because it is so powerful, it allows you to
pick and choose. The learning curve is quite steep, but it allows you so much flexibility. And the community, the community now
is amazing in terms of the ability to share. Like if I do something, it's like, oh yeah,
I just did this. It was really hard. It took me six months to do, and I'm going to share it and
I'm going to publish it so that other people don't have to go through the pain that I did.
And so the community is amazing in terms of sharing and things like that. And so
all of those things make it an amazing package, an amazing framework and an infrastructure that,
you know, it was a while to get to, but those are the good things about it.
And the bad things mostly involve documentation and a huge pile of information that you can't
quite shove in fast enough. It's the documentation, it's the learning curve.
Yeah.
So which of your robots is your favorite?
So my favorite robot right now is, we're moving to it, it used to be Darwin, which is this humanoid robot.
We're now starting to use the Nao, which is also a robot by, it's a robot from Aldebaran, who was bought by SoftBank. But it's a humanoid robot. The reason why I like it is that its articulation is really nice in terms of its movements. Like when it moves, it's like, oh, that's such a beautiful dance that it has, which is why I like it. It just, you see it and, every time it moves, people just smile.
It's like, even people who don't like robots, they just kind of smile like, oh, that's kind of beautiful.
Okay. So how does it, how do you get motion like that?
As opposed to the jerky motion that makes people look at it and go that,
that looks painful.
So that's the thing.
And I truly believe it's whatever motors that they're using, and I forgot which one, but it has much more of a continuous signature.
And then the ability to access those motor commands and program them, I think it's just the way that it's designed. It allows this fluidity.
What are you going to do with it? I mean, if it's got motors that move all nice and stuff,
and you already know how to make it so that robots can play with kids, what's left?
Well, that's what is being used.
The research is left.
Giant magic box in the middle. It's being used for playing with kids.
So it has the same function.
It's just now I can do a lot more nuances.
So the kids are happy with the behaviors that we've programmed in terms of its motions and things like that.
But I can now do a little bit of nuances.
So as an example,
the robot I was using before couldn't really cock its head. Do you know cocking your head
is actually a very, very nice motion? It says so much about what you're thinking.
So something that's like, you're like, what? He's like, yeah, something like that. Just one
thing like that means that all I have to do is cock my head, and I can either look curious or I can look angry, and I don't even have to move my body.
And the new robot does this, but the old robot does not.
What is the name of the new robot?
You said Nao?
It's Nao.
Yeah, Nao.
So both of these have been out for a while.
I just inherited the Nao.
It's a more expensive platform.
And so I just inherited it, and I'm really excited about it. And just again, the actuation. It just has more motors, i.e., more joints to control.
How much does something like this cost?
Roughly they're going for, so the last I saw, you can get one for about 10 or 11K. I think they have deals
every so often that get it down to about 8K. That's a lot. I mean, that's not a lot for a
research team or a lab or even a college where multiple people will be using it.
Correct. If somebody's at home or in a small
team at a hackerspace, what kind of robots should they be looking at?
So depending on what their interest is, if they're just looking to do some hacking and they want a little bit of a robot like a humanoid, I like the Mini Darwin.
It's a kit.
It's fairly low cost. There's a community that puts like an Arduino on it so you can add in a bunch of different sensors. You can add in a camera, things like that. It doesn't come with it as its base kit.
Oh, okay, we get it.
It does things like dance.
Like we use it for outreach.
So you can make it do things like, you know, dance and walk and kick balls and those kind of aspects.
You just have to, again, if you're a maker,
you just have to add in other components if you want it,
like a camera system, for example, or if you want additional sensors.
We talked a little bit about the kinematic modeling.
And then I said, well, what's between this and that?
Because I really do want to talk about all the other stuff.
And you just said the camera.
So as humans, we see things and that helps us maintain our balance
and it helps us decide what we're going to do next.
Correct.
But those are decisions that happen in my brain that I don't really think about.
How do you get a robot to do that?
So, and people as well as robots have different types of behaviors.
So there's reactive and there's deliberative.
So reactive are those things, I'm walking down the sidewalk, I trip, typically I
don't fall. Why? Because I'm in this reactive mode. My body identifies, it senses, it then goes into,
okay, this is not what I'm supposed to be doing. I'm starting to fall. What should I do? That's
all done in this reactive level. Basically memory, motor control, that aspect. Deliberative would be I'm at home and I want to go to the grocery store
and I know it's traffic.
And so I think about the best route I'm going to take in terms of my streets.
And so that's more deliberative.
I'm planning it based on what I remember the streets look like,
what I think is the shortest distance,
given that there's this traffic pattern
and things like that. So there's deliberative. We do it both. So our daily lives, we go through
both reactive and deliberative all the time without even really thinking that, oh, yeah,
I'm actually doing a planning routine versus something I'm not thinking about. Even deliberative,
how many times have you been in the car driving and you look up, you're like, oh, I'm already home. I didn't even really remember what I just saw. You're actually
going into more of a reactive mode because you've done it so often that is now stored in that part
of the brain where it's just more of a memory than a thinking process or a very deliberative
thinking process. Robots, same thing. When we design a robot, we use things like sensors in terms of we might use a sonar sensor
or an IR sensor to detect local obstacles.
So those would be reactive, i.e., if you see something close, stop.
Like, basically, that's it.
That is the routine.
You don't have to think about it as like an if-then rule.
Something close, stop.
And then you can start thinking about, okay, now, what is this object?
Should I go around it?
Should I go backwards?
And then you start in this deliberative, like thinking about what you should do.
And so we program robots in the same way, reactive as well as deliberative.
Depending on the application, we might have more reaction versus planning and vice versa.
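The reactive/deliberative split can be sketched as a tiny control loop where the reactive rule always gets first say, and the deliberative plan only advances when no reflex fires. This is a hypothetical Python sketch; the threshold and the action names are invented for illustration:

```python
STOP_DISTANCE = 0.3  # meters; hypothetical threshold for the reactive rule

def reactive(sonar_m):
    """Reactive layer: "something close, stop" -- a pure if-then rule."""
    if sonar_m < STOP_DISTANCE:
        return "stop"
    return None  # no reflex fired; defer to the planner

def deliberative(route):
    """Deliberative layer: follow a precomputed plan, one step at a time."""
    return route.pop(0) if route else "idle"

def control_step(sonar_m, route):
    """One tick of the loop: reactive behaviors take priority over the plan."""
    return reactive(sonar_m) or deliberative(route)
```

For example, `control_step(0.1, ["forward"])` stops regardless of the plan, while `control_step(2.0, ["forward"])` executes the next planned step. Shifting the balance toward reaction or planning, as Ayanna describes, amounts to deciding how much logic lives in each layer.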
Are these standard terms? And do you have a software section that is all reactive and a software section that's all deliberative?
I probably do. I think it's more natural now. It's like, okay, these are my, yes, these are my reactive behaviors.
These are my deliberative.
These are my planning behaviors.
These are my obstacle avoidance behaviors.
I may actually label them specifically of what I'm trying to do.
This is my tripping behavior.
This is my do not hit the kid behavior.
So I had a question that's been percolating.
You work with kids.
Yes.
And they react to things very differently than adults.
And one of the things adults seem to have trouble with is kind of the uncanny valley
of this robot or this artificial person-like thing is close enough to being a human, but
I can tell there's something wrong.
And it makes them creepy.
Yeah.
Do kids have that same reaction to things, or are they more open to, oh, this is just a thing, and it's new to me, and I will absorb whatever its behavior means?
No, they have the creep effect as well. As an example, I have this one robot that's a monkey head. It creeps out the kids.
I'm creeped out already.
Right? It creeps out the kids. It's just, you know, and actually adults are less creeped out because they kind of look at a monkey and they're like, yeah, we know it's some type of animatronic, right? Kids are creeped out. Like I'll bring it in. And as soon as I,
and what happens is it has reactive behavior. So if you do things like cover its eyes, it'll like start like basically freaking out or you touch his head. It likes that. And so I'll let it go there.
I'll cut it on. It does nothing. And then I'll be like, here's my friend. And I was like, hi,
how are you doing? And I'll touch his head and he'll go like, oh.
And the kids are like, oh my gosh, that is such a weird thing. So yes, kids get creeped out as well.
Different things creep them out, but yes, they do. I like my monkey head.
Okay. If the kids are creeped out by the monkey head, which, like Chris, I'm wondering what this looks like, I'm a little creeped out myself, what do you use it for? I mean, do they just get over it?
Because after a few, gosh, that's just really weird, moments, then they start wanting to interact with it.
Like, okay, what else can it do? What can it do? What happens if I do this? What happens if I do
that? So they get over it very quickly. I think adults kind of keep it like as a grudge. I think
adults take longer to get over that creepiness. Okay. So this all has led to the question I wanted to ask based on one of your
recent papers. What is human robot trust? Oh, so this is actually a fascinating phenomenon.
We are designing robots, right, to interact with people. So that's really the holy grail is that,
you know, we have robots for every home, for different functions that we would like robots to do, which means that they have to interact with people in our home environments.
And so there's this aspect of trust, i.e., do I trust this robot to do what it's supposed to do when I want it to do it?
And will it do it safely and robustly every single time?
As an example, if there's a crime in my neighborhood, I trust that the police will come,
right? That's just the basic. If there was a fire and I call, I trust that the firemen or firewomen will come as quickly as possible. And that's the way our system works. It's actually a lot of trust.
I give you money, I trust you're going to give me a service. So with robotic systems, which we all
know are faulty still, because they're based on programs, there's inaccuracies, we can't program
every single kind of scenario, they are going to make mistakes, guaranteed, at least now. Even in
the future, they won't be as bad, but they're going to make mistakes.
What we found is that people trust robots even when they make mistakes, which is kind of counterintuitive.
So if you have a friend, as an example, and they are always late, eventually your trust goes away.
You're like,
okay, I can't depend on this friend. So if I need to be somewhere on time, I'm not going to call this person. Our trust decreases based on you making mistakes. What we found with robots is
that a robot can make a mistake and that trust is not broken for some reason. And the scenario we did is we had an emergency evacuation scenario. So we had a
building, we invited people to come in to do a study. We didn't tell them what kind of study.
They would come in, this robot would guide them to the meeting room. And the robot was,
it could be faulty or not. So faulty would be, as it was guiding, it would take you to the wrong room, or it would just get stuck.
And we had, you know, we'd come out like, oh, sorry, it's broken. You know, as a human, come,
we'll lead you to the meeting room. And then we filled the entire building with smoke to simulate emergency. They didn't know. They were in the room. The door was closed. They were participating
in this research study because we had like a paper and they had to answer questions. The alarms go off. And of course, if we're all conditioned that if the alarms go off,
typically we always think it's like false alarm, but you know, we're conditioned to,
oh yeah, let me go out of the building. As soon as you open the door, you see this
smoke filled hallway and the alarms are going off. And so now there's this, okay, it's not necessarily a drill anymore.
There's smoke here and there's fire alarms. And so I need to go out. And what we found is, irrespective of whether the robot made a mistake entering, leading them into the room, or whether the robot made a mistake as they were guiding, so you would come out of the smoke-filled room.
There was this robot basically is like, okay, follow, you know, follow me.
Let's go this way.
People would still follow this robot.
Even if, I mean, you,
and we have tons of scenarios that we documented where it's like,
there's an obvious that the robot's broken.
Now, obviously, the information they're providing you now is wrong. And you're still trusting this robot as if it's smart, as if it knows what
it's doing. So why is this a problem? We're getting into these autonomous cars that are
coming out onto the road. There's a possibility that if I'm in a car, and we've seen a little bit of cases with, you know,
self-driving, like with Tesla, if I'm in a car, we now are pretty confident that people will just
say, my car knows what it's doing, which is why I can read the newspaper while my car is driving
by itself. Because my car is perfect. And we all know that that's not the case. But people are overtrusting these robots.
And I see why.
We actually have a Tesla.
And I drove up from where I live near the beach a long way to San Francisco, an hour and a half, two hour drive.
And I let the car do most of the driving, and I watch.
I mean, I pay attention, in part because the road I take has surfers that cross the road
and no respect for their life.
Okay.
And so I let the car take care of the road, and I check for surfers and occasionally for whales.
It's a pretty drive.
But when I finally took control, as I approached San Francisco,
I realized that I was swerving a little bit in the lane.
And the car is a better driver than I am.
Let's just accept the reality.
Well, at least for small.
Unless I'm really concentrating.
For small control kind of operations.
Yes.
I mean, it doesn't cut people off.
It's polite.
And I am in my own world.
But that's an illusion, right?
Because it's really good at certain control feedback kinds of things.
But it's not that car in particular right now is not going to make decisions about anything.
And it doesn't do much planning with I need to get off here or there.
But I think what you're saying is that illusion makes people overconfident in what the car can do.
Yes.
Because they can't classify what it's actually doing.
Right.
And I forget that it tends to be scared of shadows.
If it's really bright and really dark, the car will hop away from shadows, which is hilarious.
But I should be aware and think about these things.
And you are, but then you'll still forget because it'll be driving so nicely.
And there are whales and surfers.
And then you just kind of forget that I had this problem.
Yes.
But I also, with your test as you described it,
part of me is like, well, yeah,
if it led me to the wrong room,
that's the programmer's fault.
That's not the robot's fault.
Right.
Which is, yeah, but who do you think is making these robots?
More robots. It's robots all the way down.
...sentient being, but the robot is this intelligent learning creature. And therefore it must be better
than the programmer that programmed it. I expect more consistency from it.
Yes. And in certain scenarios,
it is very much more consistent, but that doesn't cover everything. And so this is a concern because
we're not at the stage where, you know, your self-driving car can do everything. As another
example, we did a survey of exoskeletons. So exoskeletons, which, you know, help,
there's both clinical and at home, but they basically help individuals walk.
And we did a survey about, you know, what would you let your child do wearing an exoskeleton?
These were parents that had children that used exoskeletons in the hospital. And it was amazing
what percentage of them said, oh, yeah, I would let them climb the stairs.
Oh yeah, I would let them try to jump.
I'm like, you know, there's this bold thing in the directions that says, you know,
this is not certified to allow individuals to X, Y, Z.
But because they become so comfortable with its use
and they're like, oh, it's helping.
It's gotta be able to do more than walk.
I worked on an exoskeleton for a short period of time. And when the operator who usually used it
to test software would walk around, it was so good and clear and obvious. And then he would
go up the stairs and it was perfect. And then he would go up the stairs and it was perfect. And then he would
go down the stairs and mostly trip and it was awful. But the jumping, the assisted jumping
looked like so much fun. And yet, of course, you're not supposed to do that for many reasons.
It's bad on so many aspects. Right. But yeah, but again, yeah, you're like, well, it's capable. I tried,
you know, I tried a little bit of a test and it worked. So obviously. I did a skip. Let's try the
high jump next. Yeah. And so again, I think it's just this belief that robots are so much more
capable than they are. And eventually, you know, we'll get to that point,
but then it's a moving target.
It's like, okay, now they're this capable.
And then people will be like, oh, yeah, now I must be able to do this.
And it's like, no, we're not there yet.
And then when we get there, it'll be like, oh, but it must be able to do.
It's a moving target.
Do you think the seemingly continuous and widespread security problems that we hear about will convince people that software is not perfect and shouldn't always be trusted?
Or do you think it's just so different?
I think, one, it's different.
I think, I mean, we haven't even talked about things like hacking hardware.
I mean, that's not even really been big in the news yet, but it'll at some point get there, which is kind of scary as well, because that also means you can hack robots if you can hack hardware.
And I think that's a disconnect.
I think people look at those as two different things. I think they look at the robot as this physical thing that has behaviors and it can do what it's doing and is really good at it.
And therefore, if I want to push it a little bit more, it'll be okay.
That's a disconnect from, you know, is it safe?
Is it effective?
Is it efficient?
Totally different viewpoints.
And because we interact with it in the physical world as opposed to cyberspace, it can hurt us.
I mean, it can kill us.
It can hurt us. Right. It can hurt us.
Yeah.
Okay. On to more cheery topics. The cognitive behavior side of this.
Yes.
You were talking about how to make things appear emotionally responsive.
Are you actually trying to make the robot a little emotion center, or is it all play-acting and pretend?
So, and I would say it's play-acting, but with a caveat.
I would say play acting because there is no like data, there is no emotion chip. Well,
maybe someone will create it. But right now, you know, there's no emotion chip. Maybe I'll
program a little neural network to exhibit emotions, but there's no emotion chip. It does not exist. So because of that, it's me as a programmer, as a roboticist,
programming certain characteristics I think are necessary for exhibiting emotions. So as an
example, if I'm playing a video game and I lose, there's a typical emotional response, and I can model that. So I can have, you know, 10 folks come in and play a video game and, you know, I can make it so that they all lose. And I can then figure out, you know, what is a typical emotional response when you lose. Rage quitting, right? Throwing something. I can take that and then
code it into my robotic system. And so it's still there. The robot still reacts.
It still has that.
So it's real.
It's a program.
It's code.
So it's real, but it's programmed.
It's not, say, a behavior that's learned based on childhood and experiences,
personal experiences.
It's learned from the experiences of others.
And so as long as the
others are true, then I would say that, you know, well, it's learned from others and maybe the robot
doesn't feel that, but it's trying to mimic it. So that's basically the emotional aspect.
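As a rough illustration of this play-acting idea, a robot can map a game outcome to a canned display drawn from observed human reactions. This is only a sketch; the response tables, names, and values below are invented for illustration and are not from Dr. Howard's actual systems.

```python
import random

# Hypothetical reaction libraries, as if collected by watching people
# win and lose games. None of these names come from a real system.
LOSS_RESPONSES = [
    {"face": "frown", "gesture": "slump", "speech": "Aw, not again!"},
    {"face": "grimace", "gesture": "head_shake", "speech": "So close!"},
]
WIN_RESPONSES = [
    {"face": "smile", "gesture": "arms_up", "speech": "Yes! We did it!"},
]

def emotional_response(game_result: str) -> dict:
    """Pick a plausible human-like reaction for the robot to perform."""
    pool = WIN_RESPONSES if game_result == "win" else LOSS_RESPONSES
    return random.choice(pool)

reaction = emotional_response("lose")
print(reaction["speech"])  # one of the modeled loss reactions
```

The robot "has" the emotion only in the sense that it replays behavior modeled from other people, which is exactly the distinction made above.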
Are you mimicking it as a neural network or as a loss function, as an input to a neural network? Or are you
just recording the facial or body expressions and then playing that back when the robot loses a
game? No, no, no. So actually we use it as an input to one of two methodologies,
either an SOM or a case-based reasoning system. So it is, there is some AI in
terms of the learning. So it is learning. But some of that input is looking at people's facial
expressions. We look at that. We look at their body language. We look at their expressions.
There's actually some research that's done in terms of vocalization of when you're angry, you know, what happens to your voice. And so we extract that.
And so that's how we represent what a happy emotion is or a sad emotion is in terms of what the person is doing.
So we can say, OK, I don't really know if the person's happy, but I'm looking at their vocalization.
I'm looking at their facial expressions.
And I have learned that that is designated as frustrated. And then frustrated is part of an input into an SOM or a case-based
reasoning that says, you know, if I'm interacting with a child at this stage and they're frustrated
or they're angry, then I need to provide this type of motivation. I.e., if a child is playing a game
and they've been losing three times in a row, it's probably not appropriate to say,
oh, that was wrong, right? It's more appropriate to have some type of motivation like, oh,
we can do it again. I find that hard too. But you have to kind of learn that we know that because it's years of experience.
And so we look at how clinicians and teachers use feedback based on these characteristics of the child's state.
You've said SOM and case-based.
Oh, um, so, yeah, I'm like,
this is like a whole class, isn't it?
Like those two words alone are a whole semester class.
Yeah, so case-based reasoning is basically you take examples from people doing things.
So that would be a case.
And you take a bunch of examples and you come up with basically a generalization so that if I see this case, i.e. I see the child doing something, I match it to this dictionary of cases.
And I find the closest match and then say, oh, this scenario, and you have a label, this scenario looks like this previous one that happened.
And this is what happened in terms of the clinician and the child.
And so that becomes a case. And so we collect a bunch of observations based on people and people
interacting. So that would be a case. Whereas an SOM, which is called a self-organizing map, is basically
a type of neural network. So it's basically a neural network. And there's a bunch of different
types of neural network. This is just an example of one. So we started with the physics, and now we're talking about psychology,
I mean, cognitive behaviors, and this is a whole subfield.
It is.
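The case-based matching Ayanna describes, find the closest stored case and reuse the response that worked there, can be sketched roughly as follows. All feature names, values, and cases here are invented placeholders; the real systems use much richer observations (facial expression, body language, vocalization).

```python
import math

# Hypothetical cases: (observed child state) -> (response a clinician
# or teacher used in that situation). Invented for illustration only.
CASES = [
    ({"losses_in_a_row": 3, "voice_pitch": 0.9, "smiling": 0.1},
     "encourage: 'I find that hard too. Let's try again!'"),
    ({"losses_in_a_row": 0, "voice_pitch": 0.4, "smiling": 0.9},
     "celebrate: 'Nice job! Want a harder level?'"),
]

def match_case(observation: dict) -> str:
    """Return the stored response from the closest recorded case."""
    def distance(case_features: dict) -> float:
        # Euclidean distance over the shared feature vector.
        return math.sqrt(sum((observation[k] - case_features[k]) ** 2
                             for k in case_features))
    best_features, best_response = min(CASES, key=lambda c: distance(c[0]))
    return best_response

obs = {"losses_in_a_row": 2, "voice_pitch": 0.8, "smiling": 0.2}
print(match_case(obs))  # nearest case is the frustrated one, so encourage
```

The SOM alternative mentioned above would replace the nearest-case lookup with a trained self-organizing map, a type of neural network, but the overall flow, observed state in, learned response out, is the same.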
And physical responses to things.
You've mentioned neural nets, and with camera, of course,
you need some machine learning in there.
Yes.
The physics, the math, the software.
Tell us what you don't need to know.
Yeah.
Or really, how do we get started in this without all of that?
Well, the thing is that you can use exploratory learning techniques. So as an example, very simple, I'll use Legos,
since Legos is so popular. So you bring a Lego system and you say, let's follow a line,
which is actually a fairly straightforward task. I put the robot together. You could talk about sensors and lines and things like that. And so the robot goes on. And then you do something like, well, what happens
if I block your line? And the child says, I don't know. Well, let's see what happens. And
the robot goes and you block the line. And depending on what the program is, the robot goes
into fits, right? So it starts wandering because it can't see a line anymore. You don't know what it does. So it's like, okay, let's figure out obstacle avoidance. And so then you start building up. And then what happens is at some
point you're like, oh, well, there's a bunch of different lines. There's thick lines. There's
thin lines. How are we going to program this all? I don't know. Well, let's think about learning it.
Let's think about learning characteristics of a line.
How would you do that?
And so one of the things about robotics is you put it in the field and it's going to
mess up.
And that's where you go, oh, let's introduce some other aspect.
Let's introduce this.
Let's introduce kinematics.
Let's introduce, you know, X, Y, Z, because you see it.
You're like, okay, this is broken.
How do I fix this?
And then you go to the next subject matter, the next lesson. Okay, let's look at a different sensor. Let's look at a different methodology. Let's look at vision. Let's look at cameras.
So you start building up, and then you look up, like, 10 years later: oh my gosh, I know so much.
The 10 years later thing, yes.
I mean, I have been working on My Little Robot almost exclusively for a month.
And I have been through six different books in five different disciplines.
And every night I go to sleep and my brain is full. And there was the night I hallucinated and demanded that Christopher align my axes.
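The Lego exercise from a moment ago, follow a line, then discover you need obstacle avoidance when the line gets blocked, can be sketched as a simple control loop. The sensor and motor functions below are invented placeholders, not a real Lego Mindstorms API; this is just the shape of the lesson.

```python
# Sketch of the incremental line-follower lesson: start with a bare
# line follower, then bolt on obstacle handling after the field test
# fails. Sensor/motor callables are hypothetical placeholders.

def follow_line(read_line_sensor, read_distance_cm, drive):
    """One control step: steer along the line, unless something blocks it."""
    if read_distance_cm() < 10:          # lesson 2: obstacle avoidance,
        drive(left=-0.3, right=0.3)      # added after the robot "went into
        return "avoiding"                # fits" at a blocked line
    reading = read_line_sensor()         # lesson 1: -1.0 (line far left)
    if reading is None:                  # .. +1.0 (line far right)
        drive(left=0.1, right=-0.1)      # lost the line: spin and search
        return "searching"
    # Proportional steering: turn toward wherever the line is.
    drive(left=0.5 + 0.4 * reading, right=0.5 - 0.4 * reading)
    return "following"

# Simulated single step: robot slightly right of the line, path clear.
state = follow_line(lambda: 0.2, lambda: 50, lambda left, right: None)
print(state)  # "following"
```

Each failure mode (blocked line, lost line, varied line widths) is what motivates the next lesson, which is the exploratory learning pattern described above.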
All right.
Yeah.
So what is the starting point?
If somebody has a small amount of money, a couple hundred bucks,
where can they really get started?
What's their age?
Well, you know, we didn't ask that in the survey.
Okay.
You know, people who are hobbyists, adults.
Hobbyists, adults, engineers, software engineers, hardware engineers,
well, not mechanical engineers because they have such an advantage.
Well, you know, it doesn't have to be adults,
but people who can handle a soldering iron.
Let's go with that.
Okay.
Where would I start?
So if they're adults and they're like makers, so they like to build stuff, they put stuff
together, they're comfortable with, say, an Arduino, you know, so they can buy some
type of motor controller on it.
I would actually go to SparkFun and just start putting parts together.
Honestly.
All right.
That was pretty much what I did.
Yeah.
Well, I started out with a very high-end board, especially for machine learning.
And then I went from there.
But the Raspberry Pi and BeagleBone boards are very capable as far as robotics go.
Yeah, the Raspberry Pi, both of them.
Arduinos are great, but you run out of space pretty quick.
No, you do.
They are pretty basic, but as soon as you get into anything more advanced, yeah, you're kind of like, oh.
Although there are some Arduinos that have more capabilities, but then you're adding in more costs.
And so then you're like, oh, why didn't I just buy a Raspberry Pi in the first place?
Yeah.
And listeners, I did get the note that you want to know how to get out of the Arduino
space and into the more professional spaces.
We'll do a show about that, I promise.
In the meantime, Arduinos are awesome.
Let's see.
Georgia Tech.
It is a pretty famous school. And the robotics part, I mean, I've heard about robotics and the software and hardware parts many times. But what's it like as a school?
It's a really creative place to be as an engineer. What happens is students come there, and they are very concerned about the social impact of their work. So it's not just, oh, I'm coming because I want to
be a great engineer. Yeah, they are. But they are also very interested in how can I change the world?
Whether it's, you know, in the healthcare space, whether it's, you know, in the people space, whether it's in
the, you know, world environment space, there's this concern about, I want to use my engineering
to change the world. So that's kind of the vibe there, which makes it really interesting because
you'll go there, you're like, yeah, this is Georgia Tech. We're all the geeks. It's like,
they're all here, but they're like real people. And because they have interests and they like to do things that are what you would consider more social.
So I think it's a great place.
It's not what people would think. The graduates know and the alums know, but I don't think a lot of people would know that it's a real school.
I mean, it is a real school, even though it's a techie school.
People have lives there. That makes sense. Sounds like fun.
It is. I went to Harvey Mudd, and there was a little bit of that; understanding how your engineering affects the world was a big part of it. And I appreciated that. And actually, it makes sense knowing the Georgia Tech grads.
Yeah.
Cool.
Yeah.
Another random question.
You're on engineergirl.org.
Yes.
What is that?
So that was a project that was started many years ago that was K through 12.
If you're a young lady, young child, young student that wanted to
figure out what this engineering was, and you just wanted a question and you wanted to talk
to a mentor, it's a forum that allows you to do that. So as an example, I'll get a question
probably about once a month from a student asking things as basic as how many hours do you work a week, to, I'm really interested in robotics, you know, which kind of engineer should I be? And so you get general questions. But it's really a forum, and so it's not a lot of required time.
But you do provide some service to help people, to help young women who want to know more?
Correct. Correct. Yes.
That's pretty cool. I should look into it. Do you need a PhD?
No. Basically, the only requirement to be a mentor is to be an engineer. So
basically have not even graduated, but be an engineer. Because, you know, there are
some engineers that weren't classically trained. And so either an engineer graduated with an
engineering degree, or doing engineering in some form or fashion.
What do you think of Robot Wars?
So I think that things like Robot Wars give certain folks the ability to express themselves in ways that they enjoy. But I also think that in some regards, it gives robots a bad name, because it's pretty easy: I can show you five movies, and four of the five have robots being at war, angry, destroying, things like that. So to have that in real life, it's like, oh no, robots are good. It's like, show it to me. It's like, you know, robots are good, just trust me. So it makes my life harder, trying to explain the good of robotics sometimes.
Yeah, I think it is an easy way out for writers sometimes to make an enemy. Oh, it's okay if we kill robots, because they're not alive. And so that happens a lot. And so that is kind of a subconscious
problem possibly going forward is if robots are constantly perceived as
nefarious or enemies or right or about to take over,
then as they get more and more advanced, more suspicion will creep in.
Right, right.
And so I have mixed views because, again, it's robots. So then it's like, oh, yeah, but it's like the only show that has robots in the title.
So, you know, rah, rah.
So that's the reason why I was a little hesitant on my answer, because my philosophy is not pro, but I just, I mean, we just need more robots in general out there anyway.
And it engages folks in terms of the makerspace and things like that.
That's fair.
I don't think they're really robots anyway.
Yeah.
I think it's a misnomer.
Well, they're not autonomous.
Yeah.
But I believe there's a whole podcast devoted to whether it's a robot or not.
I know.
So then the question is, okay, because I always think about this.
I wouldn't necessarily classify them as autonomous robots.
But if you have a robot body with a human brain, is that a robot or not? And then you can think about Robot Wars. You're like, well, that's a robot body with a human brain. Oh, but the brain's remote.
Wow. Yes. So you could have a whole conversation about that. But maybe they're cyborgs at that point, then, right?
All right. Where do you think robotics will be in 10 or 15 years?
So 15 is easy. I will say that everyone will have a robot in some form or fashion, just like everyone pretty much has a smartphone, ish. Now, what type of robot is a question. But I think everyone, like right now,
if you meet anyone over the age of 15 that doesn't have a smartphone, it's like, hmm,
all right, there's something wrong with this. This is just, at least in the developed countries,
I think robotics will be the same. It might be your car, it might be your vacuum cleaner,
it might be, you know, that cashier at the restaurant. It's going to be some type of robots. Ten years, I think we'll be seeing that transition.
I think in 10 years, it'll be like, you know, when the iPhone first came out, it was like there was a very small population.
Everyone wanted one, but there was a very small population that actually got one.
I think in 10 years, it'll be like that. They'll be available. They'll still be a little costly. It'll only be a certain, you know, the early adopters kind of thing,
but people will know about it. Like, oh, they finally have the, what is it, level five driving
car that's out, you know, that kind of thing. That's actually a good point because the iPhone celebrated its 10th anniversary just
a week ago or last week. And the change is remarkable. Having gotten a machine learning board
and started to learn a little bit about that and a little bit about robotics, I'm a little scared. I mean, these things can do a lot.
And as I look at things like robot operating system, which I don't love, but it's interesting
in that it is allowing an integration that wouldn't otherwise be possible. You get so far
into each field. And we've talked about how many different fields
robotics covers. And I can't be an expert in all of them. But if I can get parts from experts from
all of them and put them together, and you end up with a humanoid body that works smoothly and
beautifully, and machine intelligence that looks really kind of scary. And that's all today.
Right.
I don't want to say the word singularity, but I think I'm going to have to.
Well, you know, so this is my philosophy about the singularity. One, if it's going to happen,
it's going to happen. But the fact is, when we create robots and we create these intelligent
creatures and things like that, they are learning from us and they're learning things like
emotions and bonds and things like that. And so I truly
believe that if we are creating this sentient being,
the sentient being is part of our world, it's part of our environment.
So at the end, it's part of our family. And we all know there's evil people,
but most times you're like, yeah, my family member's acting up, but you're not going to go
out and just kill them. So my philosophy is like, yeah, maybe there might be, maybe it will come,
but they're part of us. They're our family, they're our environment. And so for them to even
think about destroying us means that they're thinking about destroying their family, which I don't think because they were growing up in our world. So that's just my
philosophy. So after the robot singularity, the robots are just going to spend all their time
watching cat videos? Exactly. Right. Because 15% of their knowledge is based on watching cat videos.
I keep thinking I want to ask you more details about my robot and what I should do with it and all of that.
But if we do that, this show is going to be another hour.
So I should ask Christopher, do you have anything before we start to close up? What are the kids that you're teaching most excited about?
The college kids.
Yeah, the college kids.
Oh, the college kids.
I'm like, oh, my.
What are the college adults that you're teaching?
I think that they're excited that they can actually have jobs in robotics.
I mean, at the end of the day, to be realistic.
So even 10 years ago, if you were interested in robotics, the job was sort of a robotics job, but not really.
You weren't going to be working on necessarily a robot.
Now, it's like kids are graduating.
It's like, yeah, I'm actually going to be a roboticist.
I'm working on a robot.
I think that's exciting for the kids. They're working on things that they have, you know,
grew up with in the science fiction that's now a reality. So we grew up in science fiction,
and we became adults, and it was still science fiction. They're growing up with the science
fiction, and they're becoming adults, and it's like, oh, I get to work on this stuff that was
science fiction. I think that's exciting. What do those robots look like and they're becoming adults. And it's like, oh, I get to work on this stuff that was science fiction. I think that's exciting.
What do those robots look like that they're working on? I mean, I'm trying to think,
okay, when I graduated from college, what would a robot be that you could get a job working on?
And it would be pretty much an industrial arm.
Exactly.
You know, at a factory. So what do those jobs look like now?
I mean, they're working on places like Waymo and the autonomous car.
They're working on healthcare robots like the one that delivers goods in the hotels and laundry.
They're working on surgical robotics like DaVinci.
I mean, they're working on real robots.
That's the thing. Not just industrial.
They're working on kind of the wealth of things that are going on.
Hi, Svec.
I know you also work on robots.
He works at iRobot.
Oh, see?
Okay, so kids.
Is it college students is what we should have said there?
Sorry.
What about kids?
What about the younger group?
What kind of robots are they working with?
What are they excited about that you see?
So with them, they're still working, I think in terms of educational robotic systems, they're still working on things like with the Lego, maybe Dash and Dot if they're even younger.
They're still working on those kind of platforms. So there's still a little bit of a disconnect between that and like working on like real robots, like vacuum cleaner robots. And so I think when they think about
robots, they are still in the fantasy land. Like they're still thinking about, you know,
the Rosies because they don't have as much touch to it. Like they're not in the schools
working on humanoids, for example, whereas the college kids are, you know, they're more likely
than not to get a robot kit. That's a humanoid robot, more likely than not, not necessarily in
the K through 12 space yet. And that's just a yet. And earlier I asked if somebody wanted to get into robotics, and we said adult, hobbyist, likely engineer.
What about for kids?
What if there's a high school student or a middle school student or even a parent wanting to engage with their high school, middle school students?
What kind of robotics should they look at? So, I mean, as much as I, well,
I still think that the Lego robotics kit,
because of the amount of curriculum that's been developed to support it,
is still a really good package.
And there's some platforms that are coming out that are more,
I say, like I said, the dash and dot robots.
There's the Cozmo robot, which is this little tiny platform that came out maybe a year ago that's starting to develop
curriculum as well. So I'm excited to see some competitors in that space. But right now,
you need the curriculum. You need the lessons. You need basically the guidance, like documentation to get through figuring it all out.
So right now is Legos and FIRST Robotics and the VEX kit that are associated with some of the competitions.
And this is Lego Mindstorms.
Yeah, Lego Mindstorms.
Yeah, Lego Mindstorms as an independent kit. And then, again, there are some leagues like FIRST and
Botball and VEX, which are team-based as well. And the advantage with that, as you were saying,
was the curriculum, which is good for parents because you do reach the point of, okay, I've put
it together, what now? And the curriculum says, okay, now follow a line. Now that you can follow a line, you can have it deliver brownies from the kitchen.
Right, exactly.
That makes a lot of sense. Well, now I really have kept us over time.
And it has been great to talk to you. Do you have any thoughts you would like to leave our
audience with?
So I have this quote that, in fact, is on my web page, which is basically about being an engineer. And I think, and this goes back to the social impact, that as engineers, we do have a responsibility.
We have the talent. We have the skill. We have the responsibility to make this world a much better place than the way we are in it now.
And I think that we have a power to change the world, make a positive impact, and we should use it for that.
I 100% agree, although you stole my final thought.
Oh, sorry!
Our guest has been Professor Ayanna Howard.
Dr. Howard is Professor and Linda J. and Mark C. Smith Endowed Chair in Bioengineering.
We never got back to the bioethics.
Chair in Bioengineering in the School of Electrical and Computer Engineering at Georgia Institute of Technology.
Thank you so much for being with us.
Thank you. This was fun.
I want to thank Christopher for producing and co-hosting.
And thank Hackaday for their 30% off coupon.
I forgot to say that.
For the MeArm at their store.
I'm using the code awesome embedded. And of course, thank you for listening.
I have chosen a new quote to leave you with. And this one is sort of about singularity.
It's from Emily Dickinson.
Hope is the thing with feathers that perches in the soul, and sings the tune without the words, and never stops at all.
Embedded is an independently produced radio show that focuses on the many aspects of engineering.
It is a production of Logical Elegance, an embedded software consulting company in California. If there are advertisements in the show, we did not put them there and do not receive money from them.