ACM ByteCast - Maja Matarić - Episode 73

Episode Date: August 20, 2025

In this episode of ACM ByteCast, Bruke Kifle hosts 2024 ACM Athena Lecturer and ACM Eugene L. Lawler Award recipient Maja Matarić, the Chan Soon-Shiong Chaired and Distinguished Professor of Computer Science, Neuroscience, and Pediatrics at the University of Southern California (USC), and a Principal Scientist at Google DeepMind. Maja is a roboticist and AI researcher known for her work in human-robot interaction for socially assistive robotics, a field she pioneered. She is the founding director of the USC Robotics and Autonomous Systems Center and co-director of the USC Robotics Research Lab. Maja is a member of the American Academy of Arts and Sciences (AMACAD), Fellow of the American Association for the Advancement of Science (AAAS), IEEE, AAAI, and ACM. She received the US Presidential Award for Excellence in Science, Mathematics, and Engineering Mentoring (PAESMEM) from President Obama in 2011. She also received the Okawa Foundation, NSF Career, the MIT TR35 Innovation, the IEEE Robotics and Automation Society Early Career, and the Anita Borg Institute Women of Vision Innovation Awards, among others, and is an ACM Distinguished Lecturer. She is featured in the documentary movie Me & Isaac Newton. In the interview, Maja talks about moving to the U.S. from Belgrade, Serbia and how her early interest in both computer and behavioral sciences led her to socially assistive robotics, a field she saw as measurably helpful. She discusses the challenges of social assistance as compared to physical assistance and why progress in the field is slow. Maja explains why Generative AI is conducive to creating socially engaging robots, and touches on the issues of privacy, bias, ethics, and personalization in the context of assistive robotics. She also shares some concerns about the future, such as the dehumanization of AI interactions, and also what she’s looking forward to in the field. We want to hear from you!

Transcript
Starting point is 00:00:00 This is ACM ByteCast, a podcast series from the Association for Computing Machinery, the world's largest educational and scientific computing society. We talk to researchers, practitioners, and innovators who are at the intersection of computing research and practice. They share their experiences, the lessons they've learned, and their own visions for the future of computing. I am your host, Bruke Kifle. Today we're diving into the transformative world of socially assistive robotics, where cutting-edge AI meets human-centered design to support mental health, learning, and development. In a time when emotional intelligence, personalized learning, and digital well-being matter more than ever, socially assistive robotics is redefining how we design machines to support, motivate, and empower people, helping to build a more resilient, inclusive, and emotionally intelligent society. Our guest is Professor Maja Matarić, a trailblazer in robotics and human-machine interaction.
Starting point is 00:01:01 She's a distinguished professor of computer science, neuroscience, and pediatrics at USC, founding director of the USC Robotics and Autonomous Systems Center, and currently serves as a principal scientist at Google DeepMind. Maja earned her PhD and master's from MIT and bachelor's from the University of Kansas and has held key leadership roles at USC, including Vice Dean for Research and President of the Academic Senate. She's a fellow of ACM, IEEE, AAAS, and AAAI, and a member of the National Academy of Engineering, and the 2024-2025 ACM Athena Lecturer. She is also a recipient of the Presidential Award for Excellence in Mentoring, presented by President Obama. Her groundbreaking work in socially
Starting point is 00:01:46 assistive robotics spanning stroke recovery, autism support, and education continues to shape both research and real-world impact. Professor Maja, welcome to ACM ByteCast. Thank you so much. It's great to be here. When I hear such a long intro, I feel old. Well, we're super, super excited to have you here. You know, one of the things that I wanted to start off with is you have such a remarkable professional and academic career, but you also have a very interesting personal journey starting in Belgrade and making your way to Kansas and eventually to MIT and now USC. And so as you think about your life's journey, what are some of the key, you know, inflection points or early experiences that ultimately inspired your journey into the field of computing and now
Starting point is 00:02:33 your work in socially assistive robotics? That's a really great question. I think people love to tell these stories of their path being usually deterministic, like, you know, oh, when I was young, I used to dream of doing X and then now I'm doing it. And I would say it was nothing like that. In fact, I spent my first decade and a half of life in Belgrade in what was then Yugoslavia. And computing was nowhere on the horizon for me. In fact, I was interested in going into languages because that's what my mom did. She was a PhD in English literature. It actually came quite easily to me.
Starting point is 00:03:07 So foreign languages came easily to me. I sometimes wonder, what would have happened if I'd done that? That would have been perhaps easier for me. But ultimately, we ended up moving to the U.S. And I got some really good advice for my uncle, who was an aerospace engineer, and he basically said, Maya, you should go into computers. Computers will be big. And you know what? He was so right. And I will take only the little bit of credit to say that I was a good kid and I listened. And I took his advice. And so in college, I pursued computer science. I was one of very, very few women in the class. And this was at the University of Kansas. We back then we called it CompSye. Who does that anymore? But I was really always interested in psychology. Actually, I think what I was interested in is behavioral science, but it wasn't quite labeled that back then.
Starting point is 00:03:56 And so I also pursued psychology. And I bring that up because it took me many decades to ultimately realize the intersection of the two. So I didn't know back then that that's really what I'm interested in, but somehow I kept trying it and it ended up there many decades later. So probably the key decision that got me to where I am now is that I'm always been, fundamentally fascinated by what makes people tick, why we do the things that we do and the things that we don't want to be doing, but we're doing them, and sort of not just the neuroscience of it, but really the behavior. And so when I was looking at computing, the part of computing that had to do with behavior was robotics, which is why I was drawn to robotics and real
Starting point is 00:04:40 physically embodied systems that had to behave in this real world and not in any kind of a simplified world, anything from the blocks world to simulations, which were, of course, not as complex back then as they are now. But they're still not the real world. And so I think that early understanding, implicit though it was, that I care about behavior got me into AI and robotics, and then eventually it looped back. So that's a long story, but the key point is we don't know when we're young exactly what we want to do. Following things and continually learning gets us somewhere interesting. And certainly taking the advice of your aunts and uncles is quite helpful as well. Yes. It's listen to family and to smart people. You know, we really don't want to when we're young,
Starting point is 00:05:25 but I'm really glad I did. Now, I also got advice from him about all these high-level calculus classes that I should take. And I ignore that advice to my peril. Yeah, I wasn't perfect. I'll say that right up front. One thing that I found quite interesting is a lot of your work in pioneering this field of socially assistive robotics. As we think about robotics, we often think of these autonomous systems that are capable of just operating on their own. And so beyond just focusing on this notion of autonomy, but on assistance, and as you think about, you know, co-founding this field of socially assisted robotics, what drove that field and how did it shift from just physical assistance to now things that are beyond that, whether it be emotional or cognitive support? I would say that in
Starting point is 00:06:12 general, assistive as an area, whether it's physically assistive, socially assistive, or cognitively assistive, it's an area of AI computing and robotics overall that is underdeveloped. So I think there's a sort of a lack of interest, I think, overall in the field to pursue it. And there are various reasons for it. They can range from people not knowing that it's an option to do this work. Often when I talk about what I do, I hear young people say, wow, I had no idea. that was part of AI. I didn't know that existed in robotics. So we need to open minds. Also, it's really hard, right? Because you have to work with real people and you have to get real data and often it's messy. And if you work with people whose behavior is atypical or perception
Starting point is 00:06:57 or cognition, you know, you'd have to learn even more and you have to have a lot of empathy and it's harder. And so that's another problem. So I think that's an unfortunate thing that needs to change. And I think it's starting to change now as, you know, we're getting better and better tools from the AI side. But there's also a danger from the AI side that we're going to automate all the humanity out of it. We don't want to do that. But for me, the particular decision was, you know, I had been doing multi-robot systems and teams of robots and nothing assistive at all. And then when I started looking at human robot interaction as a field, and I started looking at that because one of my graduate students at the time was interested in it,
Starting point is 00:07:38 so let's always be honest about the fact that the freshest ideas always come from students. And we advisors, we're the beneficiaries of it. This was Monica Nicholas, who said, well, you know, I'm interested in human robot interaction. Now, she was very interested in physical interaction in the real world. But once I started thinking about it, I thought, well, you know, what are some interesting interactions? And at the same time, for me, the biggest driver was, you know, I had just had kids, and I really wanted to have a great answer to them about what does mama do. And I wanted the answer to be mama makes robots that help people. And that's a huge driver. I think, you know, if you can tell kids what you do and it makes sense to kids,
Starting point is 00:08:16 then you're doing something meaningful. If you can't, you know, people argue about like, oh, these high-level concepts, I don't know. If you can't explain it to a kid to a level where they think it's cool, for me, it wasn't enough. So for me, that was a big driver. I really just went around and I looked at where robotics could help people within my lifetime. That's still a really big open question of where can robotics help people. And I started this 25 years ago. And I started this 25 years ago. And so physical assistance and physically assistive systems are still a very large challenge because making it possible for machines to understand their environment, to do real-time perception of not just the world, but people and human behavior and human intentions is still
Starting point is 00:09:00 really, really hard. And so if you want to help someone physically, you have to solve all of the physical robotics, which we haven't solved yet. So then I thought, well, what else is needed? And one of the things that really pushed me into what socially assistive robotics eventually became was to notice that looking in assistive domains, whether it's stroke rehab, whether it's social learning for individuals on the autism spectrum, more recently, whether it's self-regulation around anxiety. We find that it's not that people don't know what to do, it's that they need help and support in order to do the thing that's hard and then to push through. failure. So to have that motivation to keep going when things are really hard. And everyone needs
Starting point is 00:09:45 that. And so I thought, wow, this is a huge niche. So how can robots do that? And that created this niche of companion robots, but really robots that really do something measurably helpful. So to me, that's a really key distinguishing property of socially assistive robotics is that it's assistive. It measurably helps someone. And socially so, which means it does that. through social interaction. Now, physically assistive robotics achieves that through physical interaction. And right now, these are separate fields. Ultimately, they will have to come together so that if you're helping a stroke patient in helping them do something physical, you must take all the psychology and the social aspects into account. Similarly, if you're helping a child
Starting point is 00:10:32 on the autism spectrum with social skills, well, it turns out our social behavior is physically grounded. So our physicality is a big part about how we gesture, where we look. So these things are really inextricable. But right now, it's still so early in our fields that they're being siloed. I think that's a very helpful overview. And I think calling out some of the progress that has yet to be made both on the physical robotic side, but now this new sort of field of the social robotics as well, I think is an important point to bring up. I think undoubtedly, you know, of course, we've seen a lot of advancements with the generative AI wave over the course of the past years. I'm curious how that translates, if at all, or helps advance some of the progress that's
Starting point is 00:11:16 being made in this socially assistive robotics domain. This has been a huge breakthrough, and it actually has been an incredibly enabler. So it's interesting to see how many different promises, promising arenas and pathways AI may help with, and then all the others that people didn't anticipate, and then all the others that people are expecting now, but most likely won't pan out. So, you know, people think it'll solve all things. I don't believe that. But I will say for socially assistive robotics in particular, if you think about what do these robots do? So these machines need to understand the human in real time, behavior intent, driving, drivers of behavior, motivation. So there's a large perception component. And we now know that through these very
Starting point is 00:12:02 large-scale deep models, perception is becoming much better than it used to be. So there's that. It's not solved, but it's better. But what isn't solved yet is social perception, which is really understanding social cues and dynamics of, you know, eye gaze and things like that. Even large models are still not great at that yet. But they will get there. The other thing that has been really, really enabling for socially assisted robotics from the perspective of AI developments is dialogue. So, right, you know, five years ago, people had dialogue trees and, you know, it was all very painful. The robot had to be pre-programmed with all the things it would say, and, you know, you had to drop in, like maybe the name, but, you know, everything else had to be.
Starting point is 00:12:45 You had to think about it ahead of time and you were limited in how generative you could be. Now, the robot can talk in a completely interesting, generative, non-repetitive way, which could also be layered with personality and intent and tone and sentiment. And it's just, you know, marvelous where we're going. And that's really important because the fundamental underpinnings of what any kind of social interactive agent must have have to do with the ability to engage the user socially. So if I go back to helping someone in rehab, it doesn't matter how much I tell you what you should do and how you should do it. If I'm not motivating you to do it, you're not going to do it. And if you don't fundamentally want to do it, you're not going to do it. And this is fundamental to how
Starting point is 00:13:35 people are wired. Like we can literally tell at the neuroscience level that if you don't have the desire, if you're not trying, then you cannot really learn it. Just like you cannot, for example, take a passive limp for someone and just move it around and then expect them to know what to do. They have to be trying. They have to be paying attention. They have to want. So that is the part. that social part of motivation and grit to keep going. That's the part we're trying to address. And for that, you need to have a lot of subtlety in understanding the human in real time,
Starting point is 00:14:07 understand the interaction dynamic back and forth, and then also be able to verbally be engaging. So those have been all major leaps forward with current AI that was so much harder before. Having said that, there's still a lot left. So we're not done, it's not done, but it's much, much better. And I think it's very ready. Of course, I thought this even five years ago, but I think it's very ready to deploy and test and collect more data and do even better.
Starting point is 00:14:36 And that's the necessary step. And maybe as you think about some of the challenges that still exist in deploying these socially assistive robots in real world settings, what are some of the big problems that remain? So it's a really good question because, you know, you can say, well, we have all these. wonderful AI capabilities now, so why isn't it done? And the reason it's not done is because the models are still driven by data that are not interactive with real people in the real world. Most of the training data are still come from videos and like YouTube and, you know, movies and things like that. They're not real human interactions in contexts where people actually need help. Those data tend to be sensitive. They tend to be hip-hop protected. They tend to be
Starting point is 00:15:21 private, for good reason, but then the problem is the systems really don't have enough relevant data to work with. And to fix that, we have to deploy systems in the real world in the scenario in which we're actually interested in ultimately using them and then collect the data. And that's hard. So this is why progress is slow because, one, you can't simulate humans. I mean, you can simulate human body to some degree, but you cannot simulate social interactions with real embodied humans who have challenges, right? Show me a good simulation of a child with autism trying to interact with another child or a parent or a set of friends at school, right?
Starting point is 00:16:04 That cannot be done. It has to be done in the real world. And that means it's hard. And so people tend to want things that are easier because of whether it's due to lack of resources or lack of patience or lack of empathy. But this is hard stuff. and we have to do hard things. So I think that's one of the issues
Starting point is 00:16:24 is really getting out of the lab and into the real world. And if you're driven by getting more papers published faster, then this is not helping you. If you're driven by making impact in the real world for real people who need help, then this is where the really interesting problems are. Very insightful.
Starting point is 00:16:42 And as you think about maybe, to your point, I love this emphasis on real world impact, and I think clearly your work has supported a wide range of use cases, some of what you've described earlier with supporting children maybe who might be on the spectrum or stroke patients. Is there any particular story that has stayed with you that has sort of served as a driver motivator and some of the work that you're continuing to push forward? There are so many stories. So now I have to think. How do I pick the favorite ones across the different domains that we worked with? And I should say
Starting point is 00:17:16 people sometimes ask, why do you work in so many different domains? And it is because people are ultimately human. And so if we can gain fundamental insights about how people behave and what motivates them in stroke, rehab, or autism, or anxiety, often these things generalize. So that's important because when I think of anecdotes with people, they're really about them being people. They're not about them suffering from a particular condition or set of circumstances. People are people. And There's a generalization to be found there. So we certainly have had, let's see, let me think in different domains. So in autism, when we deployed robots in homes with one or more children on the spectrum
Starting point is 00:18:00 for a month at a time, this was something that had not really been done before. We did it with our colleagues at Yale, Brianz Casillade's lab. And he was just, it was the wild frontier. I mean, the things that kids did with these robots were, you know, in our, in particular, in our experience, there was a family where the child really insisted on having a towel over the robot because there's a certain set of like, if I can't see you, you can't see me. So there was like, we put a towel on the robot and then the child would want to do a reveal to pull the towel off. Well, when you pull the towel off to like free the robot,
Starting point is 00:18:33 you're also topple the robot on the floor, which is not very freeing. And this kept happening, just kept happening. The child had this particular desire to do this reveal. And we really had to think about how do you empower the user to do something that's really important to them rather than telling them, no, you can't do that, makes the robot fall down, you can't do it. You got to do the thing that works for the robot. It's just the tiniest little example of the things that people find delightful are really not typically predictable. You know, if you talk to companies, they're like, oh, we'll do market research and we will know. You will not. The only way you'll find out is by sending it out into the world and seeing what happens
Starting point is 00:19:12 and not sitting over people's shoulders and watching because that's not normal either. So it's always interesting to me what people find delightful and counterintuitively what people find frustrating because that's where the interesting research is, that's where the interesting insights are. Like, for example, we deployed some robots with the elderly,
Starting point is 00:19:32 and people said, oh, you know, elderly people, they're not going to know, they're just going to be so amazed or they won't know what to do with the robots. Well, first of all, many of the users said, why doesn't it do everything my iPad does? So, you know, no, these people, you know, they had standards like all people have standards. And then you kind of had to realize that,
Starting point is 00:19:51 wow, the behavior of this robot needs to be engaging enough to where you're not thinking about, oh, my iPad does this. Because it's nothing like the iPad. It's your companion. And you shouldn't even think about that comparison. And if you're thinking about it, then, you know, something didn't go right. So how do you create these tools? truly meaningful, delightful interactions.
Starting point is 00:20:10 That's to me always been at the heart of it. Because if it's delightful, then you'll want to keep doing it, and that will be good for you. So I'm not interested in delightful products for the sake of people getting hooked so that they could be doing whatever it is. I want them to be empowered and, in a sense, hooked, but only because what we're trying to do is help them help themselves, right? Like, we had a robot that we left in the homes of elderly people who lived alone. The only goal was to get the robot to encourage the elderly to stand up more.
Starting point is 00:20:45 And that was it. And so the robot was seated, if you will, on a little stand next to the person's favorite chair. And it would just occasionally tell them, oh, you know, it's time to stand up. You've been sitting too long. But if you stand up and you walk around, you know, I'll do a little dance for you. And maybe I'll tell you a joke. And it was remarkable. Within a small number of days, every single participant in the study would anticipate, before the robot nudge them, they would stand up. And then the robot would say, wow, you're doing a great job. You're standing up and doing things on your own. I'll give you an extra joke. And it was just this wonderful dynamic where the person was showing off to the robot, like, hey, look, I'm doing great. And that was fantastic. And the other part that was really compelling was once we had to take the robots away, because these are not products, they should be, but they're not, because we're researchers.
Starting point is 00:21:37 People all just went back to sitting because we couldn't make them want to stand. We could only make them want to stand because they had this interesting dynamic with the robot, and they had made, in a sense, a buddy, and now they had this game. But when the buddy was gone, the behavior subsided. And that's exactly what happens between friends as well. Like, you can friends have this dynamic that can be positively encouraging, but it when it goes away, then things go away as well. That's why it's good to have a buddy for healthy behaviors. And so in our quest to create these delightful, motivating interactions,
Starting point is 00:22:16 we're always surprised, and I think it's really important to expect to be surprised. If you, as a researcher, think that your hypotheses are so tight that you're going to predict everything, well, I think you need to get out more. Or maybe it's a wrong hypothesis. It's a narrow hypothesis. Human behavior is such a mess, but in a really interesting way, isn't that what drives us, is to understand messy, hard things? At least that drives me. ACM Bycast is available on Apple Podcasts, Google Podcasts, Podbean, Spotify, Stitcher, and Tune.
Starting point is 00:22:54 If you're enjoying this episode, please subscribe and leave us a review on your favorite platform. Very interesting. And I think those are super, you know, inspiring use cases, but I think I really love this idea of delightful, motivating interactions. You know, I think in obviously in the AI space, as we're seeing the proliferation or application of these technologies, there's a lot of growing concern around things pertaining to privacy, to bias in these systems, which I think becomes even more of a concern when you think about the role of these technologies in physical form or, assistive formats. And so as you're building out these systems that are interacting deeply with children or vulnerable populations, how do you think about privacy, personalization, or concerns around, you know, bias and ethics as core principles that drive how we design these systems?
Starting point is 00:23:50 These are great questions, and we should be asking them of all the work we do. I find it really concerning that ethics are either not taught as regular computer science curricula or an AI curricula. If they are, they're like, oh, and then there's the ethics class. And you really, you need to have it as a part of all the work you do. So I think that's, you know, the first thing to say up front. There are many components to this question. So let me talk about privacy first, because I don't think it's, while it's not an easy question or assault question, it's sort of something that is actually, we've kind of given it away ages ago already. So I hate to say it, but this ship has sailed. When people tell me, oh, I'm so worried about the robot listening. And I
Starting point is 00:24:26 think, well, you know, your phone has a microphone that's always on. Your laptop has a that's always on unless you're covering it. There's so much privacy that we give away readily and for free and don't think about. But then there's certain social contexts like, oh, now I have a robot buddy. That feels much more sensitive. And that's true. It is sensitive in some sense, but also if your dialogue is being captured elsewhere, you should really think about that. You should think about where your line is. So with the work that we do, because we have the luxury of being in a research setting, we keep the data locally on the device, if that's That's what the participants prefer, right?
Starting point is 00:25:01 So it may never go into the cloud. That can be done with products as well. So we did that, you know, I did that both with all of our research studies and also with the startup. But of course, there are benefits to maybe putting your data in the cloud, right? So there are models where companies can say, okay, well, you know, if you do this, if you share your data, you could share it in an anonymized way that will still help the community. For example, I know for the autism community, it's very difficult to get data, but many
Starting point is 00:25:27 members of the community will be willing to share the data from their family for research purposes, precisely so that it would help others in the community. So I think that's a really important thing to understand is under what circumstances will people share what data. But having said that, again, I just think people should be sensitive about the fact that they're sharing data all the time. Like cameras are, like they're on camera, they're on speaker all the time, and they're not worried about that. And I think, you know, maybe, maybe just consider the bigger picture. So that's one thing, is the privacy. The ethics I worry about a lot more, as do a lot of people in my field, which is that we're now so, so, so dependent increasingly on these very large
Starting point is 00:26:10 models. And the very large models are just as biased at scale as the deep models were, and we already knew this for the earlier deep models, we knew how biased they were. And that's just completely natural, right? The model is only as good as the data that, you know, it's trained on. So, you know, that's completely a mathematical predictive thing. But then now what do you do about it? And that's back to my point about collecting data in the real world in a much richer sense. But of course, now you're trading off privacy, right?
Starting point is 00:26:43 So that's the issue, right? Why don't we have these wonderful models that are able to understand when you and I are interacting and I look away and I'm, you know, having an emotional response. Well, that's because it wasn't trained. The models were not trained on those data because those data are sensitive. And don't tell me about movies. They can be trained on movies. That's fake behavior. That's the duality. But, you know, we're going there. People are collecting a bunch of training data. My big concern about training data, again, is that those data are not necessarily diverse enough. And they're not real. They're not collected in real settings. You know, people sit in rooms and, you know,
Starting point is 00:27:20 get video recorded, and that's not real. That's a big problem. There's also a cultural problem, right? Not all places in the world have the resources to collect data, and therefore, you know, the models are biased. So I'm not saying anything new here. I do think that there is something interesting about human robot interaction in particular, though, because it is a place where you can collect data.
Starting point is 00:27:41 So in order to train these robots, we need them to interact with people. But when they're actually interacting with people on something real and interesting, now you're getting these dynamics so you can scale up your data collection. I think that's really interesting. And I would love to encourage people to get robots out in the real world and get people interacting with them
Starting point is 00:28:00 rather than constantly having people teleopping robots, which is just not a real... It's not an interaction. It's not the real world. Please don't do only that. Because there's the whole lot of that. So I know it's hard again, but it's so telling. For example, a few years back
Starting point is 00:28:16 when people put robots in, like humanoid robots, into stores, for example, stores that were selling cell phones, and people would kick the robots and punch them. So this is interesting. Why were people doing that? Well, there are many possible reasons, but likely the robots were failing to deliver what they were promising on with their complex humanoid look and yet lacking ability to perceive and respond and be intelligent.
Starting point is 00:28:42 That's very useful. I would actually prefer that people put robots out there in the real world and get some failures and learn from that rather than having what in many ways might be naive right now, which is like, we're going to build these perfect robots and we're going to send them in the world and they will be great and people will love them. And, you know, I question several parts of that belief. Very interesting. I find it so reassuring but also quite encouraging that maybe it's a byproduct of the field that you're in, but also like a common through line. And most of the things that you've shared is this idea of real-world translation and getting, you know, your research and your testing out to the masses and actually validating and getting some data points.
Starting point is 00:29:26 And so maybe part of it, you know, having looked at your professional journey, you've held leadership roles, you know, at USC, at Google, you've worked in the startup ecosystem. So it seems like that's been a big driver of your work, which is really translating the research into the practice. And so how have you, over the course of your career, as we kind of pivot towards your professional journey, how have you balanced the deep research with that practical impact and what's kind of been the driving force behind that thesis for that approach? I would say that in my early years, I really was not use inspired, right? I wasn't thinking about the real world. I was biologically inspired, right? So when I did teams of robots, I looked at how, you know, social animals and social insects behaved, right? So that's different, though.
Starting point is 00:30:11 There's inspiration from biology, and then there's the actual, like, you know, writing something and creating something to give back into the real world. And it was only when I started to focus on socially assisted robotics and understanding people that it became clear right away that you cannot do this in the lab. You know, you can be naive and try, and then the minute you try to evaluate it with real people, it fails. Because our intuitions about what people will do are very limited based on. you know, maybe how we are. That's not general. And also we just, you know, just the machine's
Starting point is 00:30:46 ability to perceive and predict is very limited in a complex setting like a human. And so I think by necessity, it was absolutely necessary if you're trying to create something for people that you have to get out there and deal with people. And then once you deal with people, you'll learn what's not a real problem. I always thought that was super interesting that when you talk to people from various communities and you as an engineer have a conception of what they will want or And then you're just completely disabused of that very quickly. It's like, no, no, that's not at all what people want. And I'm not representative of that community.
Starting point is 00:31:19 And so, I don't know, maybe it's because I've always been interested in psychology and motivation and what drives human behavior. But I always find it so fascinating at how bad we are at actually at some level understanding and predicting each other unless it's someone who is very similar. You know, there's a cognitive bias where we expect people to behave like we do. And then when they don't, we don't like that. So you can see where this goes. But it doesn't work at all.
Starting point is 00:31:45 Let's just assume that everybody will be very different. And let's go and learn about that and see how to make that work. But, you know, that's not our cognitive bias. And so that's why I find it fascinating. If you want to do machine learning, to me, it's much more interesting to work in a data regime where the data are messy and noisy and sparse because that really pushes your methodology. It's so much more interesting than, you know, working in a regime where like I can get everything I want from Reddit or YouTube and then
Starting point is 00:32:14 let's just do that. I just feel like that's not going to really push it to the limit of what it will need to be able to do in the real world. So I just, you know, in some ways I'm kind of back to what my advisor, Rod Brooks at MIT used to say, and that is the world is its own best model. You know, it was a thing that he used to say when world modeling was popular and people were very angry when he said it. But, you know, it's true. In the end, you can't learn about people, you can't learn about the world without interacting with people in the real world. You just can't. Very interesting. And I think that's kind of a great nugget. As you maybe engage with the new generation of researchers, roboticists, scientists, I'm sure a lot of motivation or what
Starting point is 00:32:57 drives a lot of folks these days is thinking about translating their work beyond the lab and into the actual real world impact. And so similar to the advice that you got from your advisor as a graduate student, what advice would you offer to researchers and students looking to take the impact of their work from the lab or the classroom into the real world? Well, let me first say that we didn't, when I was a grad student in MIT, we did not get the advice to translate in the real world. My advisor was very good about saying, like, build real systems and test them. And I really respect that because that was something that, again, people were not really doing. They were working in simulation and not very good simulations. But we were not challenged to think about.
Starting point is 00:33:39 But how does this impact the world and what can you do to impact the world? And I think that's so important. I think that's what's missing right now. A lot of young people feel a lack of agency, right? They feel like, well, these things are happening to the world and they're happening to me. And what can I do? I'm just one person and I don't have enough ability to move the needle for any needle. And I think instead the approach is what can I learn and what skills can I gain so that I can be doing something that I believe is meaningful.
Starting point is 00:34:08 and then get out and do it. So I think, you know, don't compromise the path that you're interested in, right? If you're interested in creating assistive systems, okay, learn about AI, but then don't forget that you started because you wanted to build that thing that would have made your grandma's life easier. Don't forget. I think this is what people do is they forget. They get derailed.
Starting point is 00:34:29 They, you know, they take a job or they do something and it's just for now, but then it ends up not being just for now. And life is complicated, but in the end, If you don't keep that North Star waking you up every day and going, this is the thing I'm aiming for, and don't swerve. And I think the best way to find that direction, and this is what I say to young people, because they say, well, I don't know what I should be working on. And my advice always is get out of your lab, get out of your house, going to a different
Starting point is 00:34:59 neighborhood, one that is more in need than yours, wherever yours is, and just look and see if there is something that you think you can do a little better that you can help with. And then consider, what would it take to do that? And so, you know, you talk to a lot of high school students who need to do community service. And it doesn't occur to them. They're like, well, I don't know how to do anything hard. I'm like, you don't need to do anything really impressive. Like, go to a place that doesn't have resources and maybe start by setting up a website.
Starting point is 00:35:29 That's easy. Even AI can do it. But then while you're doing that, you're going to learn a lot about that place and what they need. and then you can go from there. That's what we did when the field was new in socially assistive robotics when we started with, for example, stroke patients, we started thinking that we knew
Starting point is 00:35:47 what we were going to do and we learned so much about how we were wrong and where to go next. But those were such precious lessons. I'm so grateful to, like, for example, one of the first stroke patients we worked with and we gave her a robot and we thought, oh, how will the robot keep her exercising?
Starting point is 00:36:04 It's really not a very interesting robot. Well, she actually thought the robot was great, and she kept interacting with it, and she cheated on the robot. She would move, like, and trick the robot, and she kept talking to the robot, and we realized, wait a minute, we haven't thought about this whole other thing, which is people want to be entertained and engaged and motivated. And by the way, they are going to cheat. This was a teacher, so this is not a person who cheats for a living. Her job was to catch other kids cheating, and yet she cheated because it was a hard behavior. and also she wanted the robot to be smart enough to detect it because then it's a game. And so then we build on that.
Starting point is 00:36:41 We would do something like not say anything while she was cheating. But then at the end of the interaction, we had a pre-recorded thing because this was ages ago before AI. We had a little pre-recorded thing that said, oh, by the way, I noticed when you were cheating. Don't think I didn't. Oh, that was fantastic. Then the person was like, oh, my goodness, wait, how does it know? And then they would be all motivated. So these little tricks, there's no way.
Starting point is 00:37:04 that we could have thought of all of that, and no human can, without getting out there and trying it. And I feel that the same lesson applies to everything. You have agency. You just have to take the smallest little thing that you can do and go and start to do it, and then it just opens up. Even if that thing that you thought you were going to do is completely wrongheaded. But I think that's great. It doesn't matter. You're going to learn and you'll find your path and you'll go down it for a while, and then if you want to change it, it doesn't matter. Just keep looking and exploring. The world is just full of amazing challenges and amazing people. And, you know, if you're not working on something that's an amazing challenge, change. And if you're not working with amazing
Starting point is 00:37:47 people, change. That's extremely, extremely valuable advice, one that I'll take for myself. You know, you started off earlier in the session, and you mentioned it's quite motivating to be able to build stuff and do things that kids can understand, which I've found to be quite interesting. And so I'm curious outside of your research, how other parts of your identity, whether it be as a mother, as a mentor, as a creative, as an immigrant, how that has all shaped the way you think about robotics around empathy, around socially assistive sort of technology and systems. Thank you. That's a lovely question. I love to answer that question because I have a great story about it. So a few years ago, during the pandemic, well, we were all, you know, still
Starting point is 00:38:32 in quarantine. For reasons, I don't know, the BBC reached out to me and said, we'd like you to talk to this young girl in Africa. You know, she was like in her early teens, if that. And, you know, we're going to put you together and you'll have a conversation. And I thought, oh, wow, how lovely, you know. And of course, you know, I didn't really know. I thought, well, I don't know how much access she has this stuff. I mean, there's the web, but whatever. So I just thought it would be just kind of like a basic conversation. Well, it turns out, I mean, she blew me away. She basically first kind of knew what I was working on, which was, you know, very nice of her.
Starting point is 00:39:03 But then she said, within minutes, she said to me, you want to be everyone's mom. But since you can't be everyone's mom, then you want to build robots that can fill that role. And I thought, oh, my God, she got me completely. It was absolutely right. It was a most insightful moment. And, you know, she was like, I don't know, 12. Wow. I'm just so touched. And so I think that's right. I think she figured me out in a way that I never figured myself out. I kept discovering things along the way, but it makes me feel really, really good to believe that some of the things that my students and I are developing will actually teach people and help people. And, you know, I love working with young people. I'm over parenting my students, I am sure, but they don't seem to mind, you know, so I guess it's been good. And have these lifelong relationships with students, it just means so much to me. Like I was talking to one of
Starting point is 00:39:58 my students who graduated, you know, let's not say how many decades ago, and I have the opportunity to put him up for certain awards. And there's nothing more satisfying because I'm just so proud of what he's achieved or I'm so proud of what my other students have achieved. I just had the joy to like ride along for a little bit of that, but he was at a critical period where I could help them in grad school to harness that amazing potential. And so this is like parenting, right? It's the best thing in the world, which is why my gig is the best gig in the world. And I try to do the same thing at Google. You wouldn't think that's what it's about. But really, I work with younger researchers and, again, get to mentor and hopefully help people along a little bit. And I just, that's my favorite
Starting point is 00:40:39 thing. That is, I finally know that's my thing. And I need to remember to do mostly that. So I think everyone should figure out their thing. And if it's at all possible to structure your life, so you can do enough of it, that will bring you joy. Of course, we all have to do other things as well. You know, I have to write grant proposals and, you know, I have to write reports and all these things. But, you know, it's okay. It's in service of that thing that I find to give me a sense of purpose, you know?
Starting point is 00:41:09 That's very beautiful. As we wrap up, I'd love to get your thoughts on some future directions for the field of robotics, of human machine collaboration. And so as you look forward, what excites you most about these domains? maybe specifically what's next for socially assistive robotics, and what are some of the biggest challenges that keep you up at night? I think what keeps me up in night, I'll start with that so we can end on a positive note. So what keeps me up at night is I am concerned about, in some sense, the dehumanization,
Starting point is 00:41:40 the fact that we're creating incredibly powerful AI-driven agents or, in any case, interactions with machines that people might find, in some cases, easier to deal with than dealing with people, but that's not good, right? We need for our health. We need to interact with one another. And in fact, we need to interact. People need to interact with people in the physical world. Just all health data show that. And so to the extent that is diminishing, that worries me. And it worries me that AI will create opportunities where that might diminish, including, for example, in education, right, the idea that you're going to completely outsource education to AI. No, we should harness AI and use AI, but if you don't have human
Starting point is 00:42:18 teachers, you will lose a lot in terms of socio-emotional development that kids need. And they also need it from one another. So these are the kinds of things where I think we need people who have a deep understanding of the domains, whether it's education or medicine or law. They need to understand the domain and understand the subtleties and not just think that, oh, well, we can do this thing faster or better or cheaper. So that does worry me, but I'm hoping that enough people will pay attention and care. But on the other side, so much will ideally be possible and, you know, hopefully a new whole, you know, sets of fields and areas and jobs will be created. And I have to believe that because I do know for sure that some areas will just go away. And so we already
Starting point is 00:43:07 know that AI is going to be amazing about science and it's going to help us do major science discoveries. So there's no doubt about that. That's fantastic what's happening in science for health and science for discovery. That's fantastic. I just hope that we don't try to do as much replacement of human purpose. So, you know, there's a difference between replacing human work and replacing human purpose. There's certain kinds of work that nobody wants to do, so let's replace it. But it turns out that we sometimes don't understand which work is important. So let me give a specific example, nursing homes. So nursing homes, especially memory care units where people with dementia, including Alzheimer's, need to be taken care of every day.
Starting point is 00:43:47 So you can introduce AI and AI can talk to people and show them pictures, but somebody still has to change their diapers and has to talk to them nicely and has to paint their nails and robots can't do that. And when they can, what is lost? And so I think we need to understand what matters and then use the tremendous power of AI to go there as opposed to, you know, just saying, oh, well, Well, because we can do this thing, we should. Just because you can doesn't mean you should.
Starting point is 00:44:16 You should think about the consequences and then insert it where it's going to make a positive difference. I know this sounds very kumbaya, but we do not tend to think ahead about the consequences. And, you know, we've got big brains. We've got that frontal lobe. We should maybe do a little bit more prediction about consequences and maybe do things a little more wisely because we can. there's so much that we can and we'll do. But it would be a little bit better if we just planned a little bit and avoided some of the pitfalls and we've had some major pitfalls in like the last 20 years of technology.
Starting point is 00:44:53 So maybe we can learn from that. I will say, why am I optimistic in the end? I'm optimistic because of the young people in the field. When they come in with this hope and promise and ultimately motivation, and if we can help them have the grit, then we'll be okay. I think that's a beautiful note to end on, and I think you've highlighted the importance of the human touch and the value of the human connection, even as we continue to build out technologies that think, but also understand support and uplift the communities and people it serves. And so I am also quite optimistic and excited about the continued work in the space of socially assistive robotics as in industries around health and education. And so provide a huge thank you to you, Professor Maya Mataric, for sharing your
Starting point is 00:45:39 pioneering work and vision for the field and for joining us on Bycass. Thank you so much for this opportunity. It was lovely. Thank you. ACM Bycast is a production of the Association for Computing Machinery's Practitioner Board. To learn more about ACM and its activities, visit acm.org. For more information about this and other episodes, please visit our website at learning.acm.org slash B-Y-T-E-C-A-C-A-S-T. That's learning.acm.org slash bikecast.
