ACM ByteCast - Ayanna Howard - Episode 19

Episode Date: August 24, 2021

In this episode of ACM ByteCast, Rashmi Mohan hosts 2021-2022 ACM Athena Lecturer Ayanna Howard, Dean of the College of Engineering at The Ohio State University and founder and President of the Board of Directors of Zyrobotics. Previously she was chair of the Georgia Institute of Technology School of Interactive Computing in the College of Computing, where she founded and led the Human-Automation Systems Lab (HumAnS). Before that, she worked at NASA’s Jet Propulsion Laboratory (JPL). She is a Fellow of AAAI and IEEE. Among her many honors, Howard received the Computer Research Association’s A. Nico Habermann Award and the Richard A. Tapia Achievement Award. Forbes named her to its America's Top 50 Women in Tech list. In the interview Ayanna looks back on her early love of robotics, inspired by science fiction, teaching herself how to program, and working a high school job at the California Institute of Technology. She shares some of her favorite research projects at JPL, where she designed expert systems, and describes the transition from government/industrial work to academia. She also talks about AI challenges relating to training models and large-scale deployment of lab-tested algorithms—offering warnings for technologists—as well as some potential solutions from her research. Rashmi and Ayanna also touch on her company, Zyrobotics, which develops mobile therapy and educational products for children with special needs, and her book, Sex, Race, and Robots: How to Be Human in the Age of AI.

Transcript
This is ACM ByteCast, a podcast series from the Association for Computing Machinery, the world's largest educational and scientific computing society. We talk to researchers, practitioners, and innovators who are at the intersection of computing research and practice. They share their experiences, the lessons they've learned, and their own visions for the future of computing. It's not often that one gets to talk about their job being rocket science and truly mean it. Yet our next guest did exactly that when she first started her career. She has since moved on to studying and building robots with emotional intelligence that better interact with humans and greatly improve our quality of life.
Dr. Ayanna Howard is a roboticist, educator, author, and entrepreneur. She is the Dean of the College of Engineering at The Ohio State University and the first woman to hold that position. She has a slew of awards and accolades, from being featured in the MIT Tech Review Top 100 Innovators to being named in Forbes America's Top 50 Women in Technology, and has over 250 peer-reviewed publications. Ayanna, welcome to ACM ByteCast. Thank you. I'm excited about this conversation. As excited as we are, I'm sure. I'd love to lead with a simple question that I ask all my guests, Ayanna. If you could please introduce yourself and talk about what you currently do, as well as give us some insight into what drew you into the field of computing. So at the end of the day, I consider myself a roboticist. I think irrespective of my quote unquote position or title,
Starting point is 00:01:45 I will always be a roboticist. So right now as the Dean of the College of Engineering at Ohio State University, and in that regard, looking at how do we engage students for the next generation of engineering and computer science and do it so we do it responsibly. In my own personal work, I focus on primarily looking at robotics, both in healthcare, but also looking at the bias that we have embedded in our AI robotic systems and how do we mitigate that, which has been exciting, especially nowadays, even though when I started, it was because of the healthcare robotics field, and now it becomes much more important around that. So a little bit about how I got into, evolved in this. As a roboticist, I wanted to be a roboticist since the sixth grade, which has been a long, long, long time ago, like maybe two years ago.
Starting point is 00:02:37 But it was because I was fascinated by science fiction, science fantasy, and I wanted to build the things that I saw in the movies and the television shows. Specifically, I wanted to build a bionic woman, which was my dream goal, you know, as a 12-year-old child. And so that was sort of my path was how do we design robots to really interact with people? And of course, that's evolved of what that really means. But fundamentally, it's about designing robots for really enhancing our quality of life and pushing that forward so that we are better because of the technology versus despite the technology. It's amazing that you say that. I don't remember watching Bionic Woman, but I do remember watching a show called Small Wonder, where a little girl was a robot, I think, and integrated into that family. And I was fascinated by it. Of course,
Starting point is 00:03:29 I didn't end up going down the path of pursuing robotics as you did. But still, as a young child, it felt like it was a different world altogether. But I'd love to understand, Iyana, who were sort of your influences to even get into computing? You know, was that a common thing in the, you know, in the school that you were in? You know, how did you get inspired? No, it was not a common thing. I was a product of, honestly, a public school system. So computing was not like a thing. But my dad was, is an engineer. And so very early on, I remember in the third grade, he brought home a computer that they were given away from his job. I think it was an old something. And he brought it home and he was like, hey, I want to teach you this kind of stuff. And he
Starting point is 00:04:16 brought me a book. It was basic, basic as the language, not basic as a computer. And I remember just sitting there kind of going through teaching myself how to program. And I didn't think it was something novel. It was just something, you know, my dad brought home. I was again into these gadgets, even at that young age. And I just basically taught myself how to program basic from the book, from the manual. And so I never really thought of myself as a computer scientist until, you know, I was much older. I programmed. It was like one of my tool sets. I was good at it, but I didn't realize that I was computational and that that was a label. It was just what I did. It was like reading. You just did it. It's like, oh yeah, here's another language. Okay. What can I do with this?
Starting point is 00:04:59 Oh, that was fascinating. Okay. What's the next language? So it was more like a language, like reading than anything that I would have defined as like, oh, you're a computer scientist. And how did that translate into getting into sort of robotics? I know your first job was literally being a rocket scientist at NASA JPL. What was that experience like? Oh, that was fascinating. But, you know, I will tell you, my real first job was as a database administrator back in the day at Caltech. Because I was really good at computers, when I was in high school, this was after my senior year, you know, I was, you know, I was looking for a summer job. There was a, the county basically had just brought in these mainframes computers. And they were like, oh, this is interesting. Is there anyone out there that can like help us do this translation of all of our spreadsheets into this computing platform? And one of my teachers recommended me.
Starting point is 00:05:55 And that was actually my very first job was like a basically a back end database administrator for an accounting unit at Caltech. And the way I got to JPL and NASA was my supervisor at Caltech, when I was like, I want a real robotics job. And when I went to college, introduced me to a manager at JPL. And that's how I got to get involved in, you know, the robotics world at NASA from very early on. My first summer position was, you know, after my freshman year in college, where I was working on, believe it or not, neural networks to do some planning algorithms around, you know, things, putting it on areas and terrain and things like that. So it was fascinating being around really, really, really smart people that just got it.
Starting point is 00:06:45 You know, you would call us nerds or geeks, but when you're a nerd and geek and when you're around nerd and geeks, it's just like feeling like family. You sound like an absolute trailblazer, Ayana. I mean, going into a place like NASA JPL, was it intimidating at all? It wasn't. And, you know, I think a lot of this is hindsight. But back then, you were valued for your intelligence. And because I was pretty well versed in computer science when, you know, it wasn't a thing. Like computer science was really not a thing when I was going through, not as an undergrad. And I was good at it. And, you know, by the time I started at NASA, not as an undergrad. And I was good at it. And by the time
Starting point is 00:07:25 I started at NASA, I had Pascal under my belt. I had BASIC under my belt. I had Fortran. These are languages that like, oh, you know, all three languages. And again, I learned them like you would learn a new language, like a written and physical and verbal language. And so people just were like, oh, you're a smart kid. So I never really felt intimidated until I was older and in my classes, but in the work environment, which is why I always encourage students, like get some job experience like early on, because when you're working, you are valued and you are evaluated by your output of how well you do your task and your job.
Starting point is 00:08:06 You're not necessarily evaluated on how well you do on a test, which, you know, people can game test. And so I didn't feel intimidated when I was at NASA. I felt intimidated in my classrooms and in my engineering classrooms, for sure, but not in the work, not at NASA. I think that's wonderful to hear that. And like you said, I think irrespective of our years of experience, we each bring something into the workplace, and we are peers. And so that helps empower us in some ways. I mean, there may be things that we don't know, but we certainly add value with the things that we do. What was maybe, you know, wondering if you could talk a little bit about, you know, a memorable project from those times that still stays with you and you still look back on with pride? So I will tell you about,
Starting point is 00:08:49 actually, I'm going to tell you about two. So one was the first research robotics project that I worked on. It was actually as an undergrad student. I had found a faculty member that did robotics and was like, hey, I know how to do stuff because I've been at NASA. And he was like, sure. And my task at the time was to figure out how do I get a manipulator, a robot arm to basically identify an object. And so what I thought about was through touch. And so I remember programming the robot
Starting point is 00:09:26 so that it would approach, and it had these sensors, the robot arm would approach an object and gently touch it. And then it would trace the outline using the sensor input. So it was like this whole feedback loop. And why I remember that is because one, the robot arm was huge, right? And I remember it would take so long to get this robot to move because it had to be so gentle and just so compliant. And why I remember that is when I went and started my PhD and my thesis,
Starting point is 00:09:56 it was dealing with robot manipulators and deformable objects. And I think the reason why I was attracted to that as a project was because of that undergrad experience and working with the faculty member and the grad students in a research lab where I was like, oh, this. And that was looking at, so at the time, we had already launched the first rover, Sojourner. So Sojourner had happened, which was in early 90s. And the whole thought was, how do we do longer range traversal on Mars? And so I was leading a research team to think about, you know, the possible. And the way that I approached it was to try to get the humans, so the human scientists, to try to figure out how do we take their knowledge and take their experience in the field and translate that to the rovers, right? So how do you design an expert system from the experts? And it was my first foray, at least with respect to rovers and human robot interaction, although that wasn't a word back then, on this blending of the human power and the human strength, human expertise with the robotic expertise. And I remember that because it allowed me to, one, explore the power that you have when you blend in the human and the robot and capitalize
Starting point is 00:11:35 on it as a system that's together versus a, okay, the robot's going to do something separate and the human's going to do something separate and the human's just the user, but really thinking about it as it's a system and this symbiotic system of humans and robots working together. And that was, I think, the first time I realized that this could be done in the real world. It almost sounds like that was a pivotal moment for you to sort of move into the next part of your career, which is really around adding human cognition into systems. But the other thing I was interested in finding out also was you also made the switch from being in an industry setting into academia. What motivated that? Yes. So that was because of the world and what happened. I thought I was going to be a lifer. So, you know, one of the things traditionally, when you go into government work, most people are lifers, you know, they start and they retire. And I really did think I was that almost immediately funding for research was dried
Starting point is 00:12:48 up. It was basically halted. It was stopped. And because of that, so back then you didn't go on furloughs and things like that and, you know, stop, but you would come to work and like the projects were basically like, okay, yeah, you can, you come to work, but yeah, you're really not doing anything because, you know, these missions may not happen. And you may not actually get this to be done and financed and funded because everything was halted. And so at the time I was like, okay, do I weather this through? Do I just wait? Do I sit? Or do I see what's out there?
Starting point is 00:13:20 And I decided that it was time to see what else was out there. And I decided that it was time to see what else was out there. And unfortunately, there wasn't a lot going on in terms of industry research. So I was like, well, can't go to industry. Like AT&T wasn't a thing anymore. The only place you could still do really good robotics research was in academia. So I decided, oh, well, let me see what happens. Let me go out, put out my CV, talk to a couple people, and maybe that's something I could do. And I thought I was going to do it temporarily, like, oh, you know, I'll do it for a few years until, you know, NASA comes back in terms of doing robotics research. And that was 16 years ago. That's great. Certainly NASA's lesson. I believe that was your switch and you moved to
Starting point is 00:14:06 Georgia Tech at that point? I did. I moved to Georgia Tech and Georgia Tech because one, they were starting to think about growing robotics as really as a discipline across multiple departments. They had been talking about, you know, we think we want to do a PhD program. We think we want to do a robotics institute. And the conversations were happening. And I was like, oh, I want to be on the ground floor of that. And the other thing is, is when I came interviewed at Georgia Tech, I just felt comfortable. I felt at home. The faculty were like, oh, you are awesome. You are great. You could be part of the team. And I felt it was so much more about collaboration, even though this was an academic environment. And the things that I'd heard was, yeah, as an academic, you basically work by yourself
Starting point is 00:14:54 and you're siloed, was what I'd heard. But when I visited Georgia Tech, it didn't feel like that. And so that was the reason why Georgia Tech and why I said yes. That's great. I mean, I think that I feel at least all that you hear, as well as my own experiences, innovation really comes when you collaborate, when you have those diverse opinions that are clashing together, not always in agreement. But that's when you sort of really, you know, think about what can be done that might not have been done before. One of the things that I also read around that, Ayana, was the fact that at one point, we were always looking at robots for very mechanical or repetitive tasks, right? Working in a factory when you actually had a very defined set of movements that the robots would sort of do. And that started to change. And you started to work more in terms of saying,
Starting point is 00:15:41 how do we actually build more intelligence and add more human sort of thought process into robots? Could you talk to us a little bit about that? Yeah. So, you know, one of the things is when, even when I was working at NASA, when I would go out talking to, you know, the public about robotics, I mean, every time there would be at least one person to ask the question about, well, you work in robots, you're taking away jobs. I mean, this was 20 something years ago, because you had a lot of these industrial manufacturing robotic systems coming into car factories and other types of manufacturing environments and taking people's jobs. And I remember at the time that my mindset
Starting point is 00:16:23 was, well, those aren't the robots that I work with. I don't want to be in a position where I'm designing technology that makes the human condition worse. And I decided that a long time ago, a lot of it because of my background and my parents and things like that. My role and my responsibility is to enhance as much as possible with my mind, the human condition. And so when I think about robots and I think about the role that robots could play and should play, the reason why we have technology is to enhance our quality of life. And if we're designing robots that aren't doing that as a positive, like sometimes there might be some negatives, but if the overall positive
Starting point is 00:17:02 is not for enhancing our quality of life, enhancing the human condition, then we really should not be doing it, period. Even if we can't, we shouldn't be doing it. And so that is my philosophy, and therefore it's why I work on robotics projects that really focus on this human and this robot through a symbiotic relationship, because I believe that that is the way that robots enhance our quality of life and aren't as detrimental as they could be. Got it. But one question I do have, Ayanna, is, is it mostly so folks who are working in this space, I'm sure, you know, many of your colleagues or even across the world, is it mostly our own sort of moral compass that keeps us responsible towards that goal of making sure that we're enhancing human capabilities, but not taking away from the opportunities that we may have otherwise had. How is that in some ways regulated? So it is a moral compass right now.
Starting point is 00:17:59 But I think like with anything, morality is learned, right? It's just the fact if you notice when you have, you know, little kids, you know, I want something, I take it. And parents and teachers are like, no, no, no, you don't just take something, you ask, or you pay, right? We learn the right and wrongs and morality from the very beginning. And so I think this aspect of how should we use technology? What do we do that's good with computer science and robotics and AI, we as academics, as instructors, as teachers, really do have to embed that quality of values in our students. Because it's just like with anything else, it has to be taught. And I do think it is a responsibility. It's a responsibility of technologists. It's also a responsibility of society to push back, which in some cases they're not because they're reliant, but it's also the
Starting point is 00:18:49 responsibility of technologists to ask these questions, to ask about the pros and the cons whenever we're designing a new robotic application. And, you know, I will tell you in my own research, we've designed technology where I'm like, yeah, maybe we shouldn't do this. Maybe we shouldn't create a system, a robotic system that gives advice about how you look. You know, maybe that's not a good idea because this aspect of how you look is going to be based on, you know, my group's perception of, you know, what is a good way to look and what's a bad way to look. So maybe that's not a good thing to do, but healthcare robotics, which my group works on, that's a positive, whichever way you look at it, because it means that people can improve their health. Like, there's no argument that that's very helpful,
Starting point is 00:19:35 even if you're telling someone how to do this. And so this morality thing, it is a responsibility and currently is not regulated very well. There's some rules and regulations that are being considered in the EU, for example, right now. There are some in the United States around, I've seen a couple of bills that are flying through that have some aspects of regulations around AI, around ethics, around robotics, but there's nothing that is confirmed and final around this aspect of what is the morality or what are the values that we should have in terms of designing the next generation of robotics AI systems that are used by people. Understood and very well put. The point that you make about both teaching and learning, as humans, we're obviously capable of that.
Starting point is 00:20:25 And having somebody to guide us while we make these decisions is super crucial. How does that translate to robots as they are being built to make a lot of decisions for us? How do you see that you're able to teach robots to see between right and wrong? Yeah, so one is there are some grounded, I would say, outcomes that we can measure. So an easy one is if I make a decision, one, you need to make sure that that decision is equal, irrespective of gender, irrespective of race, ethnicity, religion, right? Like that's an easy one, right? So if I'm a healthcare robotic
Starting point is 00:21:05 system and I'm making a medical diagnosis, is it different or is it a more negative outcome if it's a female patient versus a male patient? Like that's an easy one that a robotic system can like look through the data, you know, do some changes of parameters, see are the same outcomes coming out. I think that robotic systems, AI systems should be doing this. They don't traditionally do this, especially if they're being deployed in the field. But that's a simple one. Now, other things in terms of what is right and wrong absolute, that's harder because it is tightly linked to the environment from which the robotic system or the AI system is deployed. So my whole philosophy is you need to give the users the choice to decide. So if I have a robotic
Starting point is 00:21:54 system, for example, that's deployed in the United States in a rural community versus, say, somewhere in North America or South America in a predominantly, say, urban environment, you should be able to give the community the ability to change the parameters based on their culture and value system. It shouldn't be up to me as a technologist to infer or to force a default condition based on my own personal value system because it's going to be different. And so how do you decide between right and wrong? I think we need to provide the ability to give people the autonomy to incorporate that in terms of their robotic platforms when they adopt them. Right. And that also means building these systems with that level of flexibility to be able to take in those inputs and adapt to the
Starting point is 00:22:43 environment that they are deployed you know, deployed in. What are some of the technology challenges that you're seeing, Ayanna, with this field? I mean, when we're going back to what we were talking about earlier, robots are no longer in sort of a constrained environment, right? So what are some of the challenges that you're facing as you're building these new systems? Well, there's a bunch of challenges. So one challenge is this aspect of adaptation. We don't do that so well. Just in general, I can give you an example. If I'm creating a healthcare diagnosis system on my robotic system
Starting point is 00:23:16 to do something interesting, I will collect some data from different target demographics to try to teach the robot what to do, what's the proper right way to do. And what happens then is if the system gets deployed in society, there's no way that it could have been trained on all the possible conditions. But giving the robot the ability to recognize when it's wrong or when it's in a novel environment is still an open-ended problem because every environment that you consider novel, the parameters are going to change and shift, right? So that is an unanswered problem right now, which is interesting as a researcher because it means that you can still have researchers still publish papers. That's one difficulty.
Starting point is 00:23:59 The other difficulty is when you're deploying in the real world environment, how do you translate from the lab into these, you know, deployed, very dynamic, sometimes uncertain environments? This process is, as a researcher at an academic institution, I don't assume that my, you know, nice lab, proof of concept, trained algorithm is going to be deployed in society. But we're seeing this accelerating. So you're seeing things that are done in the lab in terms of algorithm development. They're tested, they're validated, but then they're quickly deployed into the billions of people that are on Earth. And we're using basically the humans as our test conditions and then saying, oh, that didn't work.
Starting point is 00:24:46 Oh, this group was a disenfranchised. Okay, let's train on it. For toy problems, it's fine. But when you have things like facial recognition algorithms that are used in surveillance, that's an issue. When you have facial recognition used in your phone to unlock it, okay, maybe a little bit of a problem, but not you know, not so much. But that's a problem is that we're deploying these systems in real world scenarios that could be
Starting point is 00:25:10 detrimental. And we as researchers haven't provided the scaffolds for these systems to adapt in real time. Yeah, no, that's an excellent point. But I wonder, Ayana, how would we do this better, right? I mean, actually, deploying these systems in the real world is a get enough material to learn and improve, and yet there are some significant risks. So what is it that we can do to actually do these trials, but do it in a way where we're not sort of making it very, very risky? So I actually have some ideas. Now, whether they will work in the real world is uncertain, but they have definitely worked in my own research in my lab. So one is giving the ability for people to tell the system when it's wrong and using that as input. And so if you think about learning, it's how do you design systems that have a core learning algorithm,
Starting point is 00:26:20 and then as you're collecting a very small in samples, it can retrain and relearn, right? And identify that. So it works in the research lab. Does it work in the real world? I'm certain. So basically what that means is, and I'll use my phone that has sometimes problems unlocking because of various, you know, my tilting of my face and things like that. So wouldn't it be nice if every time it messes up and I know when it messes up, right? Like I can put in a little like, you know, click and that means, okay, we need to retrain on this angle of my face and I can click it and click it and click it. And therefore I have the ability that after a while it's like, oh my gosh, like 10 out of the 20 times this phone did not recognize
Starting point is 00:27:07 my face. Let's look at the data and see why. And half the time it's because it's the angle that I'm holding it because, you know, I like to sit back in my chair and things like that. So that's one way is putting this as part of the system where it can personalize the learning based on me and my behaviors and my attributes and designing it. I know it works in the research lab. We've tried it in my own group in the algorithms we deploy. Whether it works on a commercially available platform, not sure. The other is, and this deals with this aspect of setting your parameters from the beginning. A lot of times, if you bring out
Starting point is 00:27:46 a new robot or bring out an AI algorithm, there's these default settings. I would argue, let's not have default settings. Let's create a system such that the default is based on the human input. And so we do this with respect to setting up a new computer. It comes up, it says, you know, what time zone are you in? Right? And then it asks you, you know, these parameters, what kind of background do you want to have? And it doesn't assume anything. We should do the same thing in our robotic AI systems, like, you know, come up and saying, like, parameters used in this algorithm, say healthcare, is gender. You know, what gender do you identify with? What ethnicity do you identify with? From the beginning versus just assuming that I've learned everything about every human that's out
Starting point is 00:28:32 there, which is of course, a hundred percent incorrect and wrong. And so don't have default settings, have settings that by default are selected by the human. We've tried it again in my lab environment, and it works fairly well in terms of Ben is very customized at the very beginning, because healthcare should be a personalized application. Whether it works on commercially available platforms that are deployed in the wild, who knows? That's great. Thank you for sharing that. That actually makes a lot of sense. One of the other things sort of, you know, moving in another direction, Iana, I also read that the pandemic and the situation that we're in today has changed our way of life and actually has been somewhat of a boon for robotics. How true is that statement? It is true. It's one of these mixed blessings because as a roboticist, I've become cooler, right it's like, oh my gosh, I thought it was cool before, but now everyone knows this some disparate impacts in terms of different groups.
Starting point is 00:29:46 And so the negative is that the systems don't necessarily work the same for everyone. And we don't yet have the algorithms to fix them real time in the real world. So they're being used, they're deployed, and we can clearly see that there's differences sometimes in the decisions, depending on who's using the robot, who's using the AI system. Got it. And so what are the sort of the most common applications you're seeing that is that's being adopted more? I mean, other than my robotic vacuum cleaner, which has become an absolute treasure in our home, what else are people using, especially during the pandemic?
Starting point is 00:30:26 Yeah, so in terms of, I would say there's the physical platform, like robotics with hardware, and then there's a virtual platform, so robotics in the virtual world. So in terms of the physical, it's mostly around things related to cleaning, in the hospital, safety. So a lot of the kind of linkages and adoptions have been, you know, how do you design robots or how do you bring in robots so that we can do the cleaning of the floors either in the hospital or in a retail environment faster and safer? I mean,
Starting point is 00:30:58 those aspects, how can you do some of the handling? How can you do those things that have traditionally been based on human people doing labor have been transferred to the robots? The warehousing has increased as well because everyone's been shopping online. And so they've increased the number of robots that are deployed in the warehouse and logistics and things like that. So that's definitely in the physical space. In the virtual space, it's everything from chat
Starting point is 00:31:26 bots that are used in customer service, that are used for moderation online for things related to, say, hate speech and targeting aspects in terms of filtering, advertisement, marketing. So all the things that are in the virtual space that deals with our human interaction around language, primarily around language. And then the other is around healthcare. There's been an acceleration of the use of AI specifically on the algorithm side in terms of healthcare. In fact, there are some stories that have started to come out, which as an AI person we were aware of, but it's now coming out in the public around the use of AI for the COVID and for accelerating the vaccine and how AI was used in that. And even how AI has been used in terms of figuring out how to do the logistics of disbursements and those aspects across the different states and countries and the use of AI. And so in the healthcare, there's been an acceleration as well.
Starting point is 00:32:31 I have to start off by saying thank you for the incredible work that you do, because what you're saying really tells us how important it is, how valuable this has been to help protect the folks who would have otherwise had to do these roles in person, right, especially when you talk about cleaning crew at hospitals, etc, who would have been at such high risk, if not for, you know, having these innovations that are able to sort of handle these things without putting, you know, the overall population at greater risk. It's amazing that that technology was already available for us to actually start using in this way. The other thing I read, Ayanna, was also around robots providing support for human and emotional needs.
Starting point is 00:33:12 How true is that and how effective is it? That is true as well. So this has both been in the physical and, there's been an increase in the number of physical robots that have been used primarily in nursing homes, in places with older adults. Because, as you know, at the very first lockdown, those were the most highly vulnerable. And so there was an increase in these pet-like, animal-like robots that was done. In terms of the virtual, there has also been an increase in companionship types of apps in terms of interacting with virtual agents, also in mental health. There's been an increase in the use of automated AI systems that help with not only diagnosis, but also conversation, early detection of depression, and things like that in the healthcare space, primarily, again, to address
Starting point is 00:34:15 the loneliness, to address the depression that has increased during this time because of isolation of different individuals. So yes, it has helped. It's been a positive. And one of the reasons why I say it's been a positive is because the alternative is nothing, right? It's not like the alternative is, well, we're replacing humans. There is no alternative and that's what we've seen.
Starting point is 00:34:38 And so in this case, the adoption has enabled people to connect or feel connected, even though they may not be connected to necessarily another human in the physical space, but they are being connected through conversation, through emotional connection. What happens as we get to the other side of what stays and what changes, that is still uncertain. But I think a lot of the things that we've become used to and accustomed to will stay. I mean, other things might adapt, but I think we're going to have a hybrid in terms of what's been adopted, what stays and what's like, okay, I'm done with that. No more. I have to emphasize what you just said, which the alternative is nothing. And I absolutely agree with you.
Starting point is 00:35:26 There's always a fear that, you know, if robots are going to take over the human emotional connections that we have, the social interactions, then is that really the right thing for us? But as it stands today, we aren't able to have those connections. And I know many of us who have family, you know, further away or family that's older and living alone would love for something like this, for them to not feel lonely. Because there's a part of this is the physical aspect of dealing with COVID. But certainly there's a lot of mental strain from just the isolation. So, yeah, you know, your point is well noted.
Starting point is 00:35:59 I'd love to also talk about your company, Mzai Robotics. I know you do a lot of work, products that include therapy and educational products for children. Could you talk about that and what was the motivation? What drew you into that field? Yeah, so Zyrobotics focuses on educational products primarily, but they also can be used for therapy, primarily for children with diverse learning needs. So the software, it adapts in terms of personalization. So it has parameters that can be personalized to the needs of the child, whether it's a sensory processing disorder, whether it's things about speed and all these
Starting point is 00:36:37 variations where there's a norm traditionally thought of in terms of even games, right? There's a default setting that applies, but with the software that's created through Zyrobotics, there is no assumption of default. It's based on the child that's using the application. And the reason Zyrobotics came about and why it developed software and technology in this space was because of the research I was doing at Georgia Tech, working in healthcare robotics. Because I was working with children with special needs, primarily children with motor disabilities, what I saw very early on was that when you
Starting point is 00:37:15 saw what was out there, when you saw what was available, it was very much in this, I would say, medical space. And so when kids were interacting with tablets or games, if they had a special need, the kinds of things they were working on were, I would say, not the newest, not the greatest, not the bing-wang, like, oh my gosh, this is so cool. And it was much more of an old school, like, oh, yeah, this is functional, but it's not fun and engaging. And I just thought that this was a disservice. And, you know, as a technologist, I was like, wait, we can still make these very functional applications that are designed for children with special needs. We
Starting point is 00:37:55 can make them fun and engaging so that all kids will want to interact with it. As an academic, of course, though, that's not a research problem necessarily. And so Zyrobotics took it's not a, you have a disability, it's a, everyone has a different need, everyone learns differently. And therefore, these applications and this software and technology basically works with any child's needs. That sounds like wonderful work. Are you seeing a good amount of adoption of the products that you're putting out? Yeah. So currently, well, as the last note that I was given in terms of report, we had about 800,000 users of either the software or the hardware, mostly in the United States, though. I would say 90% of that is in the United States. Oh, that's great. And, you know, it's certainly an area that needs a lot of innovation.
Starting point is 00:39:05 I'm so grateful that you're actually working in that space. The other thing I wanted to talk about, Ayanna, was also that you recently authored a book called Sex, Race and Robots. What inspired you to write that? Are you seeing biases in the field that are specifically concerning to you? Yeah. So the reason I wrote this was because, well, a couple of things. One is that I was starting to get frustrated at all of the, I would say, negative things that were coming out in the media, but also the things that were coming out of companies in terms of language. There was language models that were spewing out racist statements when they were deployed. There was facial recognition
Starting point is 00:39:45 algorithms that weren't recognizing people in terms of their passport photos. And you would have these things come out from these companies and apps that would basically undress women. And you can download anyone's woman's app and you can have them, you know, naked on your iPhone. And it was just frustrating me. And then I would see people out in the media saying, oh, but we give kind of some hope for how do we change this? How do we fix this both as a technologist, but also as a consumer of these technologies? And so I just wanted to basically lay it out as one identifying what it was, what are the problems without all of the media hype, but also what power we have to really ensure that we change this and we have a better future with it. And so that was the motivation of it. And I didn't expect to see how well or how well people navigated or gravitated toward the message in that. And Weave Throughout is also the story of
Starting point is 00:41:00 being a Black female, you know, growing up in this technology world of robotics and AI and just some of the challenges personally I faced, but also linking it to the challenges we also have with the technology. So it also tried to put a human face to technologists as well. That's great. We'll definitely link it in our show notes for our listeners. But do you have any advice, Ayana, around the same topic for professionals who are in the field, you know, work that you'd like for us to pursue, as well as anybody new who is entering the field of robotics or students who are aspiring to become roboticists around this specific topic? Yeah, so, you know, the one thing to technologists, and I say this over and over again, is it is our responsibility because, frankly, it's also our fault, right?
Starting point is 00:41:50 A lot of times I think as technologists, we're like, well, the ethicist will take care of it. You know, the ethics board will, you know, they'll check it off if it's good or if it's bad, they'll send it back. Or, well, people are going to use it. It's their fault if they believe this stuff and use it. And I'm of the philosophy that, no, we're designing it, we're building it. We are also responsible to mitigate the evils. And if we don't know how, it's our responsibility to learn how. Because at the end of the day, if we don't, the technology will come after us. It's just like, I think about some of the old tales where we create, if you think about Frankenstein, we create these things.
Starting point is 00:42:30 And if we're not careful, our own technology will destroy the creator. And so one, it's a selfish reason, i.e. I don't want the technology that I love and just really lean into to come back and be detrimental to me as a creator. But also, it's a responsibility. If we're creating these things, we're creating it. The public doesn't know how to fix it. We are the only ones that know how to fix it because we are so embedded in the algorithm.
Starting point is 00:42:59 And therefore, we have to do what's right for not only our field, but also for the world. Very powerful message, Ayanna. Thank you so much. We have to own what we create and we have to be accountable for it. I think that's definitely something we could all take a leaf out of your book. For our final bite, I'd love to know, what are you most excited about in the field of robotics, maybe over the next five years? I'm most excited. I mean, maybe this is my little ego selfishness, but I'm most excited about the acceleration of the adoption of robotics and AI, honestly, in terms of real-world applications. Again, even though there are some issues, I think that the enhanced quality, the enhanced benefit to humanity is a positive. And so I'm really excited about
Starting point is 00:43:47 that. It becomes pervasive. So we always talk about internet is pervasive and the cell phone is now pervasive. I feel like robotics and AI will be in the next five years, maybe 10, will be pervasive. It'll just be part of our environment. It'll be part of our DNA. It'll just be an accepted norm. Like, of course, I have my personal AI agent. Like, what did you do back in the, like, 2000s, right? Like, that's what I'm hopeful for and excited about. Oh, I think the vision that you paint is definitely exciting to all of us as well.
Starting point is 00:44:20 This has been such a riveting conversation. Thank you so much for taking the time to speak with us at ACM ByteCast. Thank you. ACM ByteCast is a production of the Association for Computing Machinery's Practitioners Board. To learn more about ACM and its activities, visit acm.org. For more information about this and other episodes, please visit our website at acm.org.
