ACM ByteCast - Robin Murphy - Episode 6
Episode Date: October 13, 2020

In this episode of ACM ByteCast, host Rashmi Mohan is joined by roboticist Robin Murphy, ACM Fellow and recipient of the ACM Eugene L. Lawler Award for Humanitarian Contributions. Murphy is the Raytheon Professor of Computer Science and Engineering at Texas A&M University and Director of the Humanitarian Robotics and AI Laboratory, formerly the Center for Robot-Assisted Search and Rescue (CRASAR). She helped found the fields of disaster robotics and human-robot interaction and has deployed robots to major disasters, including the 9/11 World Trade Center, Hurricane Katrina, and Fukushima. Murphy details some of the logistical and algorithmic challenges of getting valuable data during disaster response and acting on it at a distance. She also touches on the use of robots during COVID-19, such as providing support for hospital workers. Finally, Murphy shares inspiring advice for kids and younger technologists looking to make a difference with emerging AI methods, and her enthusiasm for the future of robotics.
Transcript
This is ACM ByteCast, a podcast series from the Association for Computing Machinery,
the world's largest educational and scientific computing society.
We talk to researchers, practitioners, and innovators
who are at the intersection of computing research and practice.
They share their experiences, the lessons they've learned,
and their own visions for the future of computing.
At some point in our lives, we've all wished for that robot that does our dishes or pulls weeds from our garden. Nice as those might be, have you thought about the impact that a robot
could have if it were built to help in calamities or in a pandemic like we're in right now?
Our guest today has dedicated her career to studying,
teaching, and building robots for disaster recovery. Robin Murphy is a roboticist and
the Raytheon Professor of Computer Science and Engineering at Texas A&M University.
Robin, welcome to ACM ByteCast. Well, thank you for having me here.
We're thrilled to have you here and have this conversation with you.
I'd love to start with a question that I ask all my guests.
If you could please introduce yourself and talk about what you're doing currently,
and also give us some insight into what drew you into this field of work.
Well, sure. I'm a professor of computer science and engineering at Texas A&M,
and I'm a founding member of the Center
for Robot Assisted Search and Rescue. I was trained in artificial intelligence for robotics,
but I've been doing field robotics and AI for disasters since 1995. And in 2001, I began the
first of 29 disasters, not of my own making,
where we've inserted robots, starting with the World Trade Center.
It includes hurricanes Katrina and Harvey, Fukushima, the Japanese tsunami,
Kilauea volcanic eruption, quite a few.
So is there something special about the application of robots in disaster
situations? How is it different from, you know, any other use of robots?
Oh dear, it is so different. And I think many people underestimate how different it is. Well,
first off, disasters are different from even regular field and service robot applications, which are really hard.
And then disasters are different than humanitarian relief.
So we focus on the response phase.
And that's when the government is doing things to maximize the public good.
They've got everybody.
They've tried to evacuate everybody. They've tried to do all this. They're stretched to their limit.
They're trying to make decisions with an incredibly rapid op tempo. Humanitarian relief is usually by
non-governmental organizations after the response; that's maximizing individual good.
And so the rules and regulations are different.
So when you're in the middle of a response, the environment is physically different. It's
so much harder than a regular robot because buildings are deconstructed, terrain is
deconstructed, you have lack of access. I mean, think about it. If you're trying to fly a drone
during a disaster and it's a flood, guess what?
There's no dry land to fly, you know, to land or take off from.
Then you've got cell coverage.
After a hurricane it's often knocked out, and it takes days to get enough bandwidth to actually be useful.
There's also immediate physical risk to the people responding as well as to the robot. And another
thing people forget is that you can't have all 28 people from your lab just to run one robot. We try
to be as lean and as small as possible because they are trying to minimize the number of people
at risk. The op tempo is different. And this is something that's really been very interesting to
document and work with my colleagues Camille Peres and Ranjana Mehta here at Texas A&M.
The op tempo is really tough.
I mean, you're working long days.
You have a lot of fatigue that comes from disrupted sleep, from physiological stress.
You're in a different place.
You're not getting good sleep.
And there's the psychological stress of being at a disaster.
It's impossible not to be impacted by the disaster, by the suffering. And what they found
is, looking at the last three disasters we've been out at, that by the first day,
and this was with the UAS pilots, the drone pilots, we were running the equivalent of a blood alcohol level of 0.05%.
So that's a pretty heavy impairment.
You wouldn't be allowed to fly a plane with 0.04%.
And that's after the first day, and they showed we never recovered.
Even with good sleep (in Hurricane Michael,
we had a very routine schedule), it never came back to optimum.
So you can't just sleep it off.
So that's another thing.
But the biggest thing I think people don't realize about adding robots, or applying robots, to disasters is, and I hear this a lot, particularly with COVID, the idea that something is better than nothing.
Well, no, that's actually
not true. You can make a disaster worse. There's been a couple of cases, I document them in my
book, Disaster Robotics, where they did, the robot failed, and of course, in the worst possible way.
I mean, because if it's going to fail, right, we all know that as roboticists, if anything's
going to go wrong, it's going to go big. And with my name being Murphy, I always see Murphy's law happening. So think of it this way. I tell
people to think about inserting robots into disasters the same way
as they would think of this: you had a gun and a carry permit, you were trained to shoot it,
you were very good, you even won competitions for shooting.
And you heard on the radio that there was an active shooter case,
and the SWAT team is out dealing with it. You wouldn't think to walk up in
the middle of that event and say, hey, I can help. I'm a really good marksman. I'll just go over here to the left, and you can count on me,
okay? Of course, the police couldn't possibly deal with you. You've never worked with them.
They don't know whether to believe you. They're busy. You've just interrupted their train of
thought. And there are actually laws that say if you do things like that, then you can get arrested.
And again, there's been a case where a group tried to insert robots and they did get arrested because they were
interfering unintentionally. They were too enthusiastic in there. So it's a really different
world than the regular field and service robots. Gosh, just listening to you, just the impact of
the work you do, Robin, is mind-blowing to me, right? Because as software engineers or computer scientists, a lot of the time we are involved in building the technology.
But from what you're describing, you're not only building this, you're also operationally involved in actually conducting the rescue operations.
So the training that you need is not just in computer science, but actually deploying these and working alongside
the rescue teams out there? You know, for several years, I was a technical search specialist. I was
certified, but that's about 300 hours of training a year. And I just couldn't keep it up with the
academic job and the research. But I think that's something that we're beginning to see more and
more that now our field is saying, hey, you know, there are a lot of times where you
actually have to understand the work domain. And over the years, you can see how my research
has gone from, oh, let's build a really interesting robot body and software to, oh,
let's come up with new ways of documenting the work domain, particularly things like disasters, where they're very
few and far between, thankfully, right? So you don't have much knowledge and everyone's different.
So how do you categorize them? How do you say what's important and what's not? How do you make
the most learning you can, even though you're taking tried and true robots? Very rarely would
you put a new robot in there that you've never tried.
I mean, that's kind of crazy because you can make things worse,
right? And we always, roboticists, we're always wildly optimistic about what our robots can do.
And then, you know, the real world has a wonderful way of kicking us in the teeth.
So we spend a lot of time now working more on the cognitive work analysis in the human-robot
interaction and the overall human factor sort of side of it is where a lot of our work has been.
And then use that to guide and say, okay, people are stressed. They can't sleep. You know,
they're tired. They've got that huge fatigue load, right? All of these things. So the robot, who needs to be smarter in this
situation? Okay, the robot is going to have to be the smart one, because the person won't be able
to see this on this screen in the bright sunlight. So it's going to have to be better about alerting,
or the person won't be able, with gloves on,
to physically do something, so it's going to have to be much more reliable. And then we use that work analysis
to prioritize what we see as the important research questions, which go back to things,
really important things like software, like computer vision, how to apply those really
fantastic breakthroughs in deep learning. Wow. You know, as a follow up to that, you know,
in a situation like that, when it's high stress, and the risk is so high, and the cost of making
a mistake is so high, typically people work as a team. So how do you work with, you know,
law enforcement or rescue operations team without ever having, I mean, do you train with these
people on a regular basis prior to an incident like this?
Or when you go in there when a disaster has happened, how do you build that camaraderie and that trust between the various organizations that are working on this?
Yeah, that's a really good question.
And we train them.
We train with them.
We try to work with as many agencies.
CRASAR, the Center for Robot-Assisted Search and Rescue, has developed a positive reputation
over the years.
We've worked hard to maintain that.
We don't show up at a disaster.
You have to be invited.
Legally, you have to be invited.
Otherwise, they can arrest you because you've just shown up and wasted their time or distracted
them, even if it was the best robot in the world.
Even if, in retrospect, it would have been great, they can't deal with that. Yeah, I would say the first 10 years that I
did this, I stayed in a constant state of about ready to throw up. Because fire rescue usually
are the people who run an incident. Normally, incident command authority defaults to them.
And I knew from
being a tech search specialist, if I screwed up, if my team screwed up, or the robot screwed up and
did something unpredictable, or that its capabilities had been oversold, then it would be
seven years (look at diffusion of innovation) before they would consider using a robot again. So my goal was to,
you know, not only help for this disaster, but I also had to be very careful to make sure I didn't
screw up in a way that would kill the field for a few years and retard any momentum we were getting,
and as well as engaging the researchers in it. So fortunately, there's now,
robots are so much more common now. After Fukushima, that was a big change in the ground
robots, the aerial robots, we started seeing after Harvey. And now I don't feel like every
time I go to the field, it's like, you know... And then, of course, I'm a woman too. One of us doesn't
look like the others. So I kind of have that double
whammy. But now it's much easier. I will definitely tap into that thought.
But I wanted to touch on one more thing. So now we're in this COVID-19 pandemic situation, right?
I know you're doing work in this situation and very crucial work. What would you say is the
most surprising thing about using robots in this pandemic situation?
Well, I don't know that we're doing crucial work because we're not really inserting robots so much as we're documenting it.
So in a way, you can say we're sort of saving the world one PowerPoint slide at a time.
But no, we've looked at over, we're up to about 250 reports of how robots are being used in 21
countries that we've seen so far.
And what the surprising thing is, is that everybody's thinking of robots for clinical
care, right?
For replacing doctors and healthcare workers.
Well, that turns out to be just one of the top four categories of uses that we're seeing. The number
one most frequently cited, most countries, 16 countries are using drones for public safety,
public work, and non-clinical public health things like quarantining. So enforcing quarantines.
And they're taking agricultural drones and taking the sprayers
and spraying disinfectant. Now, in clinical care, the number one application is disinfecting
points of care. So you'll see a lot of robots that are built for this,
they're not very sexy robots. They are nothing like Westworld. They're not even like C-3PO or R2-D2.
They're just a robot cart with a big bank of UVC lights on them.
And they go into the patient's room and blast them out.
And these were developed right around the time of the Ebola outbreak.
So we didn't have robots during Ebola, but we first started talking about them. The White
House OSTP, the Office of Science and Technology Policy, ran a series of workshops, and we were one
of the co-hosts here at A&M, about the possibilities of robots and what to look at them for with infectious
diseases. And so that was one thing, decontaminating these areas where you don't have to, you know,
risk your life to go in and
clean off and take out the trash and those sorts of things. And so those robots took off after that,
and we now see them being used. They're now commonly used in hospitals because, you know,
sure, there's Ebola, but there's lots of hospital-acquired infections that you want to
disinfect. So now it's just like, okay, we need more of them, and we need to start using them 24-7.
It's a very interesting change from a rescue operation.
I'm imagining in a rescue operation, I mean, time is super critical, and you're also trying to gain as much data as possible.
Your robots are out there evaluating what the situation looks like and
getting that data back to you. And you're making real-time decisions on that data versus now you're
conducting, you know, you're performing actions that are helping, you know, sanitize or make sure
that the infection levels are lower. What has changed for you? I mean, did you have to adapt
very quickly to be able to sort of deal with this new situation?
Well, again, I'm commander, you, the boots on the ground responder, are trying to
see at a distance places that you can't get to or shouldn't get to, like at Fukushima.
So it's seeing at a distance. Now we're beginning to act at a distance, that we're beginning to
interact like disinfection or with the healthcare worker telepresence, which is
the fourth biggest application we're seeing, where you may not have much of an arm, but you can go up
and press a person lightly to get their temperature. You can apply a pulse ox sensor or
ultrasound remotely. So now we're seeing this interaction with the public space. Same thing
with some of the other applications that we're seeing for COVID, prescription
and meal dispensing in the hospitals.
But we're seeing a lot of delivery applications.
We're seeing a lot of hailing infectious materials with the laboratory and supply chain.
All of these things are being done that have so much more physical manipulation involved.
And so that's a really big change, I think.
You know, I'm wondering, I mean, as you're talking about this, in my head, I'm envisioning
this being done by humans in the past, right?
And to imagine that robots are doing this, and obviously it seems like a fantastic time
for robots to be involved because then you're reducing the risk of infection and spreading.
But you can't talk about AI and robotics and not mention jobs, right? The fear across the world,
I mean, at least the armchair philosophy is robots are going to take away all our jobs.
But here we're seeing robots as inventions that are augmenting what humans do and protecting us
in some ways. Could you talk a little bit more about that and your thoughts around the jobs question? Yeah, that's the one that we get all the time. And I think COVID's
really representative of robots in general. So first off, we're seeing that the majority of
the robots that we're documenting are teleoperated. The exceptions are those UV light robots
and the delivery bots. But we're talking about public safety.
You're talking about there's a wonderful set of reports that are emerging of how people are using
robots to continue to work. For instance, realtors have spontaneously said, okay, we're going to get
one of those iPads on a stick, and now we can start showing homes remotely. All sorts of things like that.
So that's not replacing people.
Now you start saying, well, yeah, but what about the janitorial robots?
Well, it turns out delivery, you know, going down and grabbing the food and the medication, cleaning stuff, they've never had.
Hospitals have never had enough staff.
They can't hire enough people. It's
not that they're trying to get rid of those jobs; they physically can't
fill those jobs. They're always shorthanded for that. And then you add a surge where you now need
to get that room turned over fast, fast, fast; you've got to clean this stuff up fast, fast, fast.
It's helping with the surge, and it's helping to maintain what's going on. Now, sure, they'll continue to be used, and robots that have value will get inserted into healthcare. This may change
quite a bit of how realtors do some parts of the market. And, you know, so it will probably change how we do some
jobs, but it's not a replacement or a displacement. From what I personally can see going
through the literature (it's hard to call press reports and social media "literature," sorry about
that), all of these reports, everything we're seeing in healthcare, is letting
the healthcare workers do higher value, direct patient care. Letting them do that safely,
minimizing, not replacing them at all, and not eliminating them, not being cold hearted,
and you're just going to be dealing with a creepy robot. No, no, no. It's
like when we go in that room, we suit up and we're protected, but we're not super tired because we're
having to do this for 15 people and back and forth. And it's, you know, I took my PPE off,
but now they want a meal and now I got to go do this. And now we got to go stop and go get this.
You know, it's helping and supporting.
That's possibly the best answer I've heard to that question, Robin.
So thank you for that.
And it sounds incredible.
And it makes me wonder, why don't we see a much wider adoption?
Do you see that to be the future?
What's stopping us from getting these robots in every hospital or getting these UAVs in every district and city to be able to maybe predict what is going
to come rather than be reactive to it? Well, the robots that get used at a disaster,
not just this one, have always been the ones that were already there in use. So the hospital-acquired
infection disinfection robots already existed. People already knew about them. Hospitals
had them. Now they're like, we want more. And if you're a hospital that didn't have one, apparently you're going, we want them. Please
put us on your list. So everybody's ramping up for those. The drones that are being used for public
safety. Law enforcement departments, fire rescue departments have had those. The last few years,
they've become very inexpensive. A $1,000 platform is good. So
these are all commercially available, all out there, ready to go. Now, adding manipulation
abilities, being able to act at a distance, doing certain things, that requires more work. It's not
a solved problem. But what we see is that they were already being adopted because you could be
using them for every day.
And think about it this way too. You want the robots to be used every day because let's imagine something bad happens, a car accident. You want to call 911 and they say, hey, no, no, you need to
stop and download a new app. What? Oh, and you need to do this and you need to do, no, I want to use my app.
I'm going to call 911 the way I always do. I want to use the apps I have. You're too stressed.
You've got too many immediate demands on your attention to learn or deal with something new. So taking a new robot
that you don't know how it works, you've heard maybe somebody else has used it successfully,
maybe. No, no, you don't have time, you don't have mental bandwidth to deal with it. So you're going to use what you're familiar with. And so getting it into the routine use
means that you're willing to use it for the non-routine use and willing to add, you know,
kind of Velcro something to it, you know, a fogger that does disinfectant.
Got it. So would you say that in terms of adoption, we see good numbers in the US,
but would you say that's the case internationally as well?
I would say it's actually better in the healthcare system. It appears, could be wrong,
but it appears from what we're seeing that the adoption has
actually been higher in Europe. And the use of drones by public safety in Asia and Europe is much higher
than it is here in the United States.
I see. Any thoughts on why that might be?
Well, we're very conservative, right? And whenever you're
talking about public safety, public health, you want to err on the side of caution. I think in
the United States, we did kind of, and this is a fear that we have watching some of the public
safety agencies: they kind of aggressively used their robots, used them too
much for everything, and hadn't, like, told anybody. So initially, about five
years ago, we saw several cities say, we have drones, you know, they're painted black and menacing, they're
very military-looking, and we're going to fly them. And the city council goes, what? What are we doing? Like a Blue Thunder scenario? We're going to, like, start firing weaponized Predators on our,
and it's like, no, they're not that. So making sure you've talked with people and say, yeah,
nobody, you know... During Harvey, Fort Bend County,
the western suburb of Houston, had already been using drones quite a bit. And they had already gone
through a couple of floods, federally declared floods. And they already had a policy that says, hey,
here's what we're flying. It's flown by us. Any data we get, we're going to
post to YouTube, we're gonna make sure there's no privacy violations, you'll see it as fast as we
see it. And you can see if your home's going to be flooded. You can see if, you know,
And their 911 operators were actually getting calls about people. There was
one particular part of Fort Bend, right by the river, with a really pretty assisted living facility. And
so everybody's worried about their grandparents, you know, is grandma in danger? Do we need to
come down? They said, no, no, the 911 operator said, well, go to the website. You can see the water's nowhere near there.
It'd take a lot to flood that area.
And so it was very positive.
There was consensus.
So nobody had any issues with the use of the drones because they had done it the right way.
A group coming in, never used drones before, and then all of a sudden they're using them, like in Italy, to yell, literally yell insults, drop an F-bomb.
Apparently, one of the Italian mayors was saying, I got on the robot and I used its loudspeaker and I used the F-word to tell them they were crazy.
And why were they not social distancing?
That probably wasn't the best form of public health communication we've ever seen.
And we suspect that those
provinces are going to say, no, we're going to just kill the drone program for a few years,
time out, you know, and we'll get back to you in a few years when you calm down.
Got it. So do you and your team, or do you spend time when people are evaluating,
like say a district is evaluating the use of drones, right? Are you called in to maybe do that,
you know, preliminary investigation for them saying, hey, is this a good fit for you? Or
even in terms of training, etc? Is that something that you do?
Yes, we've done that a couple of times. Austin called us over and said, you know,
please talk to our city council. And I gave a presentation. I said, so here, you know, I bet you're thinking scary Predators. Here it is. Let me pull it literally out of my pocket,
this drone. This is what they're talking about flying. Let me show you how everybody else is
using them. And sure, there's problems, but you know, these are things that you want to be on
the forefront because the cities and agencies that are early adopters get to tell the companies what they need to develop more.
They get a little more clout.
And so Austin was like two thumbs up and have a lovely program.
Super.
Changing my line of questioning a little bit, Robin. We have a bunch of practitioners who will be listening to this podcast. From a computing perspective, what are the
unique challenges that you've encountered as you've worked in this field? Like, what are the
things that really excite you about the problems that you're trying to solve? Well,
you know, from a computing perspective, it's just the real world, all the new situations. It's what we call formative domains:
there haven't been enough of them to have things laid out.
You know, what does a flood look like?
Well, it depends on what part of the country you're in.
And are you urban or rural?
Are you suburban?
You know, there's no model.
And I think in particular, this is something we've run into
with deep learning. I really enjoyed our work with University of Maryland and with University
of California, Berkeley. So deep learning, you know, you're starting to recognize things. What
we find is algorithms that work fantastically for a missing person search after a flood in one area of Texas
fell apart in a very similar area of Texas at a different time of year, right?
Just the difference in foliage.
And so what's frustrating to the responders and to me,
because I kind of live in halfway between both worlds,
is that the computer vision
algorithm can't take a few sample images and tell us, yeah, no, this isn't going to work.
This is going to have too high a false positive, false negative rate. Don't bother. And that's fair,
right? Sometimes the answer is no. You would rather know that it's not going to work than
waste eight hours to discover that it's not working. Because the responders knew how to do the job without all this technology.
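The pre-deployment check Murphy wishes computer vision could offer, scoring a detector on a handful of labeled sample images from the current scene before committing field hours, might look something like this hypothetical sketch. The function name, the counts-based scoring, and the thresholds are all illustrative assumptions, not anything from CRASAR or a real deployed system:

```python
# Hypothetical "go/no-go" check: before committing hours of flight
# time, score a detector on a few labeled sample frames from *this*
# scene and decide whether it is worth running at all.
# All names and thresholds here are illustrative assumptions.

def go_no_go(samples, min_recall=0.6, max_false_alarm=0.3):
    """samples: list of (true_positives, false_positives, false_negatives)
    counts from running the detector on a few labeled sample frames."""
    tp = sum(s[0] for s in samples)
    fp = sum(s[1] for s in samples)
    fn = sum(s[2] for s in samples)
    recall = tp / (tp + fn) if tp + fn else 0.0
    false_alarm = fp / (tp + fp) if tp + fp else 1.0
    # "Sometimes the answer is no": refuse rather than waste field hours.
    return recall >= min_recall and false_alarm <= max_false_alarm

# Imagined detector tuned on spring foliage, tested on two scenes:
spring = [(4, 1, 1), (5, 0, 1), (3, 1, 0)]
autumn = [(1, 4, 3), (0, 5, 2), (1, 3, 2)]
print(go_no_go(spring))  # True  -> worth flying
print(go_no_go(autumn))  # False -> don't bother
```

The point of the sketch is the shape of the decision, not the numbers: a few minutes of labeling sample frames buys an honest "don't bother" before anyone diverts manpower.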
Nobody's going to yell at them for doing it the same way.
But if they divert manpower, if they make a mistake because you were trying something new and crazy,
all of a sudden you look pretty darn stupid to your city council and to your agency and to, you know, the public
that you serve. And, you know, that's not good. So does your group also build the hardware for
the robots, Robin? Are you mostly looking at the software challenges associated or you do both?
Well, I'm an AI person. We primarily try to do the software and work with the hardware manufacturers, the companies that make them.
My basic assessment is that many of us should not be allowed to touch hardware.
And it's really hard to make things that are reliable.
I mean, you know, you can 3D print, but then, after 10 fatigue cycles, it breaks. And I think that's one of the downsides to graduate
school, the way we do it: we focus on getting one little thing that's really new and innovative
to work. And we never teach about how, well, that's great and that's wonderful, but at some
point, you have to have the whole robot work. I mean, think about the iPhone,
which really just totally destroyed the Palm Pilot.
You know, who cared if the processor was better or faster than a Palm Pilot?
The user interface was so much better.
Oh my gosh.
When you added something, it didn't break.
And so, you know, it didn't matter how much time was spent by anybody developing it.
It either works or it doesn't.
It's kind of like Yoda, you know: do or do not.
There is no try.
Those are your options here, you know.
And that's hard.
That is really hard to make workable systems.
Right.
And it also sounds like there's a really tight collaboration between your academic world and industry. Yes, and one thing, and I have done this myself, but I watch this happen
quite a bit, particularly with COVID, is we're all trained to be the smartest people on the planet,
right? And then we have this tendency to take our solutions to an agency or hospital without
ever asking what their problems are. And that's kind of insulting.
And we probably don't know what their problems are. And one of my colleagues who leads the engineering
medicine program at Houston Methodist Hospital calls it the pet project syndrome, is that we
need to be engineers, not researchers. We need to take off that researcher thing and say, hey, I'm here to help.
If I can, what are your problems?
And let me see if my group's expertise can help.
And if the answer is, hey, I've got this great robot and it will solve all of your problems,
it's probably not going to be a great collaboration.
So asking questions and learning what their actual work domain is, what their constraints, what their goals,
those sorts of things are important. That's right on. Yeah. I mean, I think as engineers,
we're also sort of, you know, we think about building something before understanding what
is needed to be built. We like to throw technology as a solution to most things because we're so
excited by it. But I think what you bring up is spot on, which is really try and understand the problem
and the domain and see if there is a solution for it that you can provide. So thank you. Thank you
for that. I want to switch back to another question, which is from a CS perspective. What
does it take to be a roboticist? If I have a young engineer who's studying computer science,
what expertise do they need to develop to sort of break into this field?
Well, you know, there's a lot of good programs in robotics. Most of them focus on hardware and on control theory. And what I find is they don't understand artificial intelligence,
or they think of artificial intelligence as only more cognitive
intelligence, you know, these decision making. Well, since the late 1980s, there's this whole
body of work of, hey, wait, you can make robots like animals, you can organize their intelligence
in layers that sort of reflect how we, you know, our bodies are organized with a
central nervous system and the different parts of the brain. Turns out there's a lot of advantages,
the modularity, the way it goes together. And there's all these hard problems. Instead,
I get a lot of students who come in and it's like, well, I'm an engineering student. I want
to do robotics. I built a robot. And now I want to do machine learning. And it's like, well, why do you want to do machine learning for that particular problem?
I mean, animals come out knowing how to walk. Or, you know, within 15 minutes, animals that are
heavily predated are using mechanisms like central pattern generators to learn
to walk very quickly, because they've got to be on the move or else they're going to be eaten. So, yeah, I mean, why do we need to have 3,000 days' worth of data to learn something
that you can just program? Don't hit things, you know, or try to keep balanced, those things.
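The layered organization Murphy describes, where simple programmed reflexes like "don't hit things" sit beneath more deliberative behaviors, can be sketched in a few lines. This is a minimal, hypothetical illustration of a subsumption-style arbitration scheme; the behavior names, sensor fields, and thresholds are invented for the example, not taken from any real robot stack.

```python
# A toy sketch of layered, behavior-based control: reflexive behaviors
# (avoid obstacles) are simply programmed and take priority; higher-level
# behaviors (seek a goal) only drive when the reflexes have no opinion.
# All names and numbers here are illustrative assumptions.

def avoid_obstacle(sensors):
    """Reflex layer: if something is too close, stop and turn. No learning needed."""
    if sensors["front_distance_m"] < 0.5:
        return {"turn": 1.0, "speed": 0.0}  # stop and turn in place
    return None  # no opinion; defer to lower-priority layers

def seek_goal(sensors):
    """Deliberative layer: steer proportionally toward the goal bearing."""
    return {"turn": 0.2 * sensors["goal_bearing_rad"], "speed": 0.8}

# Higher-priority (more reflexive) behaviors come first and subsume the rest.
LAYERS = [avoid_obstacle, seek_goal]

def arbitrate(sensors):
    """Return the command from the highest-priority behavior that responds."""
    for behavior in LAYERS:
        command = behavior(sensors)
        if command is not None:
            return command
    return {"turn": 0.0, "speed": 0.0}  # default: stand still

# Clear path: the goal-seeking layer drives.
print(arbitrate({"front_distance_m": 3.0, "goal_bearing_rad": 0.5}))
# Obstacle close: the reflex layer subsumes goal seeking.
print(arbitrate({"front_distance_m": 0.2, "goal_bearing_rad": 0.5}))
```

The point of the modularity Murphy mentions is visible here: each layer is self-contained, and new capabilities can be added without rewriting the reflexes underneath.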
So I think that's, uh, having a global perspective. One of the things that I love about
artificial intelligence for robotics, and I've written a textbook on it, it's in its second edition, is it's so big and broad,
and it brings in more than just engineering concepts. There's cognitive concepts,
there's biological concepts, there's all of that together. And it's just so fascinating.
So, if you want to get into
robotics, don't let yourself be trapped in a box where you can only do it by learning how
to do the kinematics and dynamics, or only by doing reinforcement learning. You're
going to enjoy all of it. Explore all of it, and then find the niche where you can make a difference,
where your ideas resonate and can help move the field along.
That's a great answer, because you also hear, I mean, I have kids who are in high school now.
You see, I mean, the number of robotics clubs in schools, you know, from middle school onwards is
prolific, right? There's just so many competitions that people participate in. There's a lot of
interest in starting a robotics club and, you know, participating in it. Every school seems to have one of those very robust clubs.
So from that very strong beginning where you see, you know, great participation from young students,
how do you see those numbers sort of pan out as, you know, people move into college and beyond?
And sort of going back to your previous comment, what does that diversity look like as well in terms of gender? Like how many women do you see pursuing this field?
Pretty much none. I mean, you know, we all know there's a drop-off of women in middle school, and in the robotics clubs you're seeing mostly guys, right? I mean, you get an all-women's group here and there, but the numbers are really low. So let's talk about drop-off.
Go look at the pictures of all of the DARPA robotics challenges. See any women on the team?
Maybe one or two. See any women leading the team, versus just being on
the team? See any women being the PI or the faculty advisor for the team? We're talking at best a 1-to-30 ratio.
We're not even talking about the 20% women we see in computing. So something is going wrong.
We are not engaging everybody that should be engaged. We don't have a lot of role models.
We're not diverse. All of these things, we're not seeing
the diversity. Right. And I think you hit upon the absolutely right thing, not enough role models. I
mean, I can, you know, a lot of the times I feel like even, you know, when I was in school and
college, I would always look to see, okay, where do I see my career going? And usually having somebody
who's been in a position who's inspiring to me is always a great way to sort of, you know, for me to
say, okay, that's the kind of life I want to lead. And that person seems to be able to do it. And so it would
be, you know, it's possible even for me, I just that possibility is such an empowering feeling.
But yeah, that's, that's a much sort of larger discussion around, you know, how do we address
this issue of not enough role models of encouraging women to sort of participate more?
I don't know if you have any more thoughts around it.
I mean, this can be a separate podcast in and of itself.
Well, I also hate competitions.
I mean, it's like, oh, we're just getting this.
Tufts did some beautiful work several years ago,
and they found that, you know, in middle school,
the interest in robots was there.
They didn't like the smash-up competitions. They didn't like the mano-a-mano stuff. The girls,
the young women, liked collaborative work. They said that one of the best-received
projects in the schools they were working in was: everybody is going to make a robot that is part of a circus that we're going
to have everybody go through.
And so, out of the Lego Mindstorms kits,
one girl, two girls, did a Ferris wheel.
Another one did a Tilt-A-Whirl. Everything they were doing,
it all had to move and all that.
And they loved it because they weren't competing.
Everybody could be creative in their own way.
And, you know, you see this in competitions,
the DARPA challenges: oh, look how many sleepless nights they've spent,
look at all of this camaraderie and guy stuff.
You know, you kind of get a couple of degrees away from bro culture.
And that just doesn't always fit for women.
And I don't think it always fits for guys either.
But it kind of can, you know, can kind of give the wrong impression of what our field's
like.
I mean, you know, you think about NASA, you think about the slow and steady, you know,
being good engineers, being realistic, not the heroic "we're going to spend all night, slam it together, and the next day it'll all work perfectly."
And I think we have inadvertently created that culture.
Absolutely right.
I mean, collaboration, listening, I can't imagine that those skills are unique to any one gender.
I think we all need it to be happy and successful in our careers and lives. This has been an amazing conversation,
Robin. We're running out of time. So for our final bite, I'd love to hear from you. What is it that
you're most excited about in the field of robotics or computer science over the next five years?
You know, the most exciting thing is that finally, finally, finally,
disaster robotics is becoming commonplace.
I mean, look how many different ways robots are being used all over the place.
Finally, it's not that big of a deal anymore.
So I am so excited to see that, and I think that will grow,
and it will also encourage researchers to come up with new ways, new capabilities, new software, striking the right balance: synergistic intelligence, that joint cognitive system.
And it will really make a difference.
And so one day we're going to get to a point when an emergency isn't an emergency anymore. It's unfortunate,
it requires attention, but it's not an emergency because we've got this. Yeah, this has been an
eye-opening and tremendously engaging conversation. Thank you, Robin, for taking the time to speak
with us at ACM ByteCast. Thank you. ACM ByteCast is a production of the Association for Computing Machinery's Practitioners Board.
To learn more about ACM and its activities, visit acm.org.
For more information about this and other episodes, please visit our website at acm.org slash bytecast. That's acm.org slash b-y-t-e-c-a-s-t.