Lex Fridman Podcast - #329 – Kate Darling: Social Robots, Ethics, Privacy and the Future of MIT

Episode Date: October 15, 2022

Kate Darling is a researcher at MIT Media Lab interested in human-robot interaction and robot ethics. Please support this podcast by checking out our sponsors:
- True Classic Tees: https://trueclassic...tees.com/lex and use code LEX to get 25% off
- Shopify: https://shopify.com/lex to get 14-day free trial
- Linode: https://linode.com/lex to get $100 free credit
- InsideTracker: https://insidetracker.com/lex to get 20% off
- ExpressVPN: https://expressvpn.com/lexpod to get 3 months free

EPISODE LINKS:
Kate's Twitter: http://twitter.com/grok_
Kate's Website: http://katedarling.org
Kate's Instagram: http://www.instagram.com/grok_
The New Breed (book): https://amzn.to/3ExhBuf
Creativity without Law (book): https://amzn.to/3MqV5F3
LuLaRobot (paper): http://drive.google.com/file/d/1PtYpkDQaQVPbhQIc6wcCC50JKWVsDo3k/view

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above; it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(07:57) - What is a robot?
(23:57) - Metaverse
(33:19) - Bias in robots
(47:02) - Modern robotics
(49:34) - Automation
(53:57) - Autonomous driving
(1:02:22) - Privacy
(1:05:48) - Google's LaMDA
(1:10:35) - Robot animal analogy
(1:23:38) - Data concerns
(1:41:40) - Humanoid robots
(2:00:42) - LuLaRobot
(2:09:36) - Ethics in robotics
(2:24:57) - Jeffrey Epstein
(2:58:31) - Love and relationships

Transcript
Starting point is 00:00:00 The following is a conversation with Kate Darling, her second time on the podcast. She's a research scientist at MIT Media Lab interested in human-robot interaction and robot ethics, which she writes about in her recent book called The New Breed: What Our History with Animals Reveals About Our Future with Robots. Kate is one of my favorite people at MIT. She was a courageous voice of reason and compassion during the time of the Jeffrey Epstein scandal at MIT three years ago. We reflect on this time in this very conversation, including the lessons it reveals about both human nature and our optimistic
Starting point is 00:00:39 vision for the future of MIT, a university we both love and believe in. And now a quick few second mention of each sponsor. Check them out in the description. It's the best way to support this podcast. We've got true classic teas for high quality t-shirts, Shopify for e-commerce, Lenovo for Linux, inside tracker for biomoderingoring and express VPN for privacy. Choose what is in my friends.
Starting point is 00:01:08 And now onto the full ad reads. As always, no ads in the middle. I hate those. And since I do this podcast, I'm able to control whether we do them or not. I try to make these interesting, but if you skip them, please still check out our sponsors. I enjoy their stuff. Maybe you will too.
Starting point is 00:01:23 This show is brought to you by, I believe in you, sponsor. I've been wearing their t-shirts for a while now, so I don't remember. I do remember that I've been loving it for a while now. Anyway, the sponsors true classic tees, their high quality, soft, slim fitted t-shirts for men. They also make all the other men's wear staples like polos and workout shirts and they're all built with the same flattering fit as their t-shirts. It's kind of fascinating how something like a t-shirt can feel good and look good and
Starting point is 00:02:01 the way to achieve those two goals are very subtle design decisions. So it's fascinating because I'm a t-shirt person. I usually just wear a black t-shirt, a bunch of black t-shirts. True classic tees is an example of a company that delivers. Now that I've tried it, that's all I've been wearing. I feel amazing. Get comfortable and upgrade your wardrobe with True Classic. Get 25% off at TrueClassic.com with code Lex. Free shipping included. I'm purchasing this over $100.
Starting point is 00:02:33 100% risk-free guarantee with 30-day return policy. This show is also brought to you by Shopify, a platform designed for anyone to sell anywhere. With a great looking online store that brings your ideas to life and gives you tools to manage day to day operations. I've been using Shopify for a while to sell stuff, but I, you know, my use cases are pretty simple. And I think that's probably true for many people, for many merchants, like you just want to sell a couple of things that you care about. And to a small set of people they're interested in that kind of thing. But I think there's a lot of entrepreneurs that really use Shopify to run a business,
Starting point is 00:03:12 small business, medium-sized business, all that kind of stuff. I think it's 1.7 million entrepreneurs that use it. Yeah, it's my favorite. As far as I'm concerned, if you look on Reddit and all those other kinds of places, Shopify is the recommended place to sell stuff online. Super easy to use, super easy to set up, all that. Get a free trial and full access to Shopify's entire suite of features when you sign up at Shopify.com slash Lex.
Starting point is 00:03:44 That's all lowercase.ify.com slash Lex. That's all lowercase. Shopify.com slash Lex. This episode is also brought to you by Linode, Linux, virtual machines. It's an awesome computer infrastructure that I just love, everything about it, unless you develop, deploy, and scale what applications you build faster and easier.
Starting point is 00:04:04 I use it for small personal projects. I hope to one day have huge projects that I can run on it. I think the big competitor is AWS. That's probably a bunch of others, but AWS is lower cost than AWS, better customer service, the simplicity of everything. I just love it. Obviously, computer infrastructure, the computer has to be really good, right? The actual systems have to be really good. The distributed computer has to be good, but
Starting point is 00:04:33 the interface from a user perspective of how you set everything up, how you scale, all that kind of stuff, also should be good. I think that's actually more important. The ability to set stuff up, to monitor all that kind of stuff. And that's really why I love Leno'd. And of course, the number one reason, or should I say the number zero reason, is that it's Linux. I love Linux. All things Linux. Visit lino.com slash Lex to get a hundred dollars in free credit. Visit lino.com slash Lex to get a hundred dollars in free credit. This show is also brought to you by Inside Tracker.
Starting point is 00:05:08 Service I use to track biological data. They have a bunch of plans that collect a bunch of information from your body and the use machine learning, algorithms to analyze your blood data, DNA data, fitness tracker data, all of that to give you a picture of what's going on inside you and give you recommendations for diet lifestyle changes. I wish they gave like dating advice or career advice or just like food advice, what I should eat today based on my body. That's probably the future. My body is giving me very noisy signals about when it's hungry and what it wants to eat. I wish I had higher resolution signals or like the signals that it's sending needs to be interpreted. There's probably a lot of signal there. It's just my brain is too dumped to interpret it. So I
Starting point is 00:06:00 would love to understand what my body's telling me. That's why I love inside tracker is that it's listening to your body to give you advice about what you should do with that body. Get special savings for a limited time when you go to inside tracker.com slash lex. This show is also brought to you by ExpressVPN. I use them to protect my privacy and the internet. I also use them to feel good about my life. But that's because I have a strange relationship with software that's really well designed. Anyway, it's like a good VPN should be, it's fast, works on any device, including Linux, Android,
Starting point is 00:06:39 and all of that good stuff. And it's a base level of protection that everybody should be using. I'm probably having a bunch of protection that everybody should be using. I'm probably having a bunch of conversation with cybersecurity folks on both sides. I think I have a person coming on that used to be an FBI agent doing cybersecurity. And also really want to get a bunch of hackers on. Former current hackers would be epic. Of course, it's very difficult because if they're current hackers, there's this gray area about what they can and can't talk about. And I don't like gray areas.
Starting point is 00:07:16 I like people to be raw and transparent and real, all that kind of stuff. But no matter what, it's a super fascinating topic. Go to expressvpm.com slash likes pod for an extra three months free. This is the Lex Freedom Podcast. The supported. Please check out our sponsors in the description. And now, dear friends, here's Kate Darling. Last time we talked a few years back, you were just in Beaver shirt for the podcast.
Starting point is 00:08:01 So, now looking back, you, your respected researcher, all the amazing accomplishments in robotics, you're an author. Was this one of the proudest moments of your life, how proudest decisions you've ever made? Definitely. You handled it really well though. It was cool because I walked in. I didn't know you were going to be filming. I walked in and you're in a fucking suit. Yeah. And I'm like, why are you all dressed up? Yeah. And then you were so nice about it. You made some excuse. You were like, oh, well, I'm interviewing some art. Didn't you say you were interviewing some military general afterwards to like make me feel better? CTO of Lockheed Martin. Oh, that's what it was. Yeah. You didn't tell me,
Starting point is 00:08:40 oh, I was dressed like this. Are you an actual Bieber fan? I was at like one of those t-shirts that's in the back of the closet that you used for painting. I think I bought it for my husband as a joke. Yeah, I was. We were gut renovating a house at the time and I had worn it to the site. God is joking. Now you were. Okay. Have you worn it since? Is this a one time? No, like how could I touch it again?
Starting point is 00:09:07 It was on your podcast, that's frames. It's like a wedding dress or something like that. You don't, you only wear it once. You are the author of the new breed, what our history with animals reveals about our future with robots. You opened the book with the surprisingly tricky question. What is a robot? So let me ask you, let's try to sneak up to this question. What's a robot? That's not really
Starting point is 00:09:31 sneaking up. It's just asking it. Yeah. All right, well. What do you think a robot is? What I think a robot is something that has some level of intelligence and some level of magic. That little shine in the eye that allows you to navigate the uncertainty of life. So that means like autonomous vehicles to me in that sense are robots because they navigate The uncertainty the complexity of life Obviously social robots are that I love that I like that you mentioned magic because that also Well, so first of all I don't define robot definitively in the book because there is no definition that everyone agrees on. And if you look back through time, people have called things robots until they lose the magic because they're
Starting point is 00:10:32 more ubiquitous, like a vending machine used to be called a robot, and now it's not, right? So I do agree with you that there's this magic aspect that, which is how people understand robots. If you ask a roboticist, they have the definition of something that is, well, it has to be physical. Usually it's not an AI agent. It has to be embodied. They'll say it has to be able to sense its environment in some way. It has to be able to make a decision autonomously and then act on its environment again. I think that's a pretty good technical definition, even though it really breaks down when you come to things like the smartphone because the smartphone can do all of those things. And most robotists would not call it a robot. So there's really no, no one good definition. But part of why I wrote the book is because people have a definition of robot in their minds
Starting point is 00:11:28 that is usually very focused on a comparison of robots to humans. So if you google image search robot you get a bunch of humanoid robots. Robots with a torso and head and two arms and two legs and that's the definition of robot that I'm trying to get us away from, because I think that it trips us up a lot. Why does the humanoid form trip us up a lot? Well, because this constant comparison of robots to people, artificial intelligence to human intelligence, first of all, it doesn't make sense from a technical perspective, because you know, the early AI researchers, some of them were trying to recreate human intelligence.
Starting point is 00:12:07 Some people still are and there's a lot to be learned from that academically, et cetera, but that's not where we've ended up. AI doesn't think like people. We wind up in this fallacy where we're comparing these two. And when we talk about what intelligence even is, we're often comparing to our own intelligence. And then the second reason this bothers me is because it doesn't make sense. I just think it's boring to recreate intelligence that we already have. I see the scientific value of understanding our own
Starting point is 00:12:39 intelligence, but from a practical, what could we use these technologies for perspective? It's much more interesting to create something new, to create a skill set that we don't have that we can partner with and what we're trying to achieve. And it should be in some deep ways similar to us, but in most ways different, because you still want to have a connection which is why the similarity might be necessary. That's what people argue, yes.
Starting point is 00:13:09 And I think that's true. So the two arguments for humanoid robots are people need to be able to communicate and relate to robots and we relate most to things that are like ourselves. And we have a world that's built for humans. So we have stairs and narrow passageways and door handles. And so we need humanoid robots to be able to navigate that. And so you're speaking to the first one, which is absolutely true,
Starting point is 00:13:33 but what we know from social robotics and a lot of human robot interaction research is that you all you need is something that's enough like a person to give off cues that someone relates to, but that doesn't have to look human or even act human, you can take a robot like RTD2 and it just like beeps and boops and people love RTD2, right? Even that's just like a trash can on wheels. And they like RTD do more than C3PO who's a humanoid. So there's lots of ways to make robots even better than humans in some ways and make us relate more to them.
Starting point is 00:14:10 Yeah, it's kind of amazing the variety of cues that can be used to anthropomorphize the thing, like a glowing orb or something like that. Just a voice, just subtle-based interaction. I think people sometimes over-engineer these things, like simplicity can go a really long way. Totally. I mean, ask any animator and they'll know that. Yeah, those are actually, so the people behind Cosmo, the robot, the right people to design those as animators, like Disney type of people. Versus psychrobatics.
Starting point is 00:14:48 Roboticists, quote unquote, are mostly clueless. They just have their own discipline that they're very good at and they didn't have. But that don't, you know, I feel like robotics of the early 21st century is not going to be the robotics of the later 21st century. Like if you call yourself a roboticist, it'll be something very different. Because I think more and more, you'll be like a control engineer or something, controls engineer. Like you separate because ultimately all the unsolved, all the big problems of robotics will be in the social aspect, in the interacting with humans aspect, in the perception interpreting the world aspect, in the brain part, not the basic control level part.
Starting point is 00:15:47 You call it basic, it's actually really complex. It's very, very complicated. And that's why I think you're so right and what a time to be alive. Because for me, we've had robots for so long and they've just been behind the scenes. And now finally, robots are just been behind the scenes. And now finally robots are getting deployed into the world. They're coming out of the closet. Yeah.
Starting point is 00:16:11 And we're seeing all these mistakes that companies are making because they focus so much on the engineering and getting that right and getting the robot to be even be able to function in a space that it shares with a human. See, what I feel like people don't understand is to solve the perception and the control problem. You shouldn't try to just solve the perception control problem. You should teach the robot how to say, oh shit, I'm sorry, I fucked up. Yeah, we're asked for help.
Starting point is 00:16:39 We're asked for help or be able to communicate the uncertainty. Yeah, exactly. all of those things because you can't solve the perception and control. We humans have solved it. We were really damn good at it. But the magic is in the self-deprecating humor and the self-awareness about where our flaws are, all that kind of stuff.
Starting point is 00:17:02 Yeah, and there's a whole body of research and human robot interaction showing like ways to do this. But a lot of these companies haven't, they don't do HRI. They, like the, have you seen the grocery store robot in the stop and shop? Yes. Yeah, the Marty, it looks like a giant penis.
Starting point is 00:17:19 It's like six feet tall at Rome's the Isles. I will never see Marty the same way again. Thank you for that. You're welcome. But like, these poor people were so hard on getting a functional robot together. And then people hate Marty because they didn't at all consider how people would react to Marty in their space. Does everybody, I mean, you talk about this, do people mostly hate Marty?
Starting point is 00:17:44 Because I like Marty. I feel like let's be. Yeah, I actually like this. There's a parallel between the two. I believe there is. So we were actually going to do a study on this right before the pandemic hit. And then we canceled it because we didn't want to go to the grocery store and neither did anyone else. But our theory, so this was with a student at MIT, Daniela DiPoella. She noticed that everyone on Facebook, in her circles, was complaining about Marty. They're like, what is this creepy robot?
Starting point is 00:18:15 It's watching me. It's it was in the way. And she did this quick and dirty sentiment analysis on Twitter, where she was looking at positive and negative mentions of the robot. And she found that the biggest spike of negative mentions happened when stop and chop through a birthday party for the Marty robots, like with free cake and balloons, like who complains about free cake?
Starting point is 00:18:36 Well, people who hate Marty apparently. So, and so we were like, that's interesting. And then we did this like online poll, we used mechanical Turk, and we tried to get at what people don't like about Marty. And a lot of it wasn't, oh, Marty's taking jobs. It was, Marty is the surveillance robot, which it's not, it looks for spills on the floor. It doesn't actually look at any people.
Starting point is 00:19:03 It's watching, it's creepy, it's getting in the way. Those were the things that people complained about and So our hypothesis became Is Marty a real life clippy because I know Lex you love clippy, but many people hated clippy Well, there's a complex thing there. It could be like marriage a lot of people seem to want like to complain about marriage But they secretly love it. So it could be marriage. A lot of people seem to like to complain about marriage, but they secretly love it. So it could be the relationship you might have with Marty is like, oh, there he goes again doing a stupid surveillance thing, but you grow to love the, I mean, bitching about the thing that kind of releases a kind of tension. And there's, I mean, bitching about the thing that kind of releases a kind of tension and there's, I mean, some people, a lot of people show love by sort of, uh, busting each other's
Starting point is 00:19:53 jobs, you know, like making fun of each other. And then if I think, I think people would really love it if Marty talked back. And like, well, these are so many possible options for humor there. One, you can lean in. You can be like, yes, I'm an agent of the CIA, monitoring your every move, like mocking people that are concerned, you know, saying like, yes, I'm watching you because you're so important with your shopping patterns. I'm collecting all this data or just, you know, any kind of making fun of people.
Starting point is 00:20:27 I don't know. But I think you hit on what exactly it is because when it comes to robots or artificial agents, I think people hate them more than they would some other machine or device or object. And it might be that thing, it might be combined with love or like whatever it is, it's a more extreme response because they view these things as social agents and not objects. And that was, so Clifford Nass was a big human computer interaction person and he, his theory about Clippy was that because people viewed Clippy as a social agent, when Clippy was annoying and would like bother them and interrupt them and like not remember what
Starting point is 00:21:10 they told him, that's when people got upset because it wasn't fulfilling their social expectations. And so they complained about Clippy more than they would have if it had been a different, like not a, you know, virtual character. So it's complaining to you a sign that we're in the wrong path with a particular robot, or is it possible, like, again, like marriage, like family, that there still is a path towards that direction where we can find deep meaningful relationship. I think we absolutely can find deep meaningful relationship
Starting point is 00:21:46 with robots. And well, maybe with Marty. I mean, I just would, I would have designed Marty a little differently. Like how? Is there a charm to the clumsiness, the slowness? I got some kind of. There is, if you're not trying to get through
Starting point is 00:21:59 the shopping cart and screaming child, you know, there's, I think, I think you could make it charming. I think there are lots of design tricks that they could have used. And one of the things they did, I think without thinking about it at all is they slapped too big, googly eyes on Marty. Oh, yeah. And I wonder if that contributed maybe to people feeling watched, because it's looking at them. And so, like, is there a way to design the robot to do the function that it's doing in a way that people are actually attracted to rather than annoyed by? And there are many ways to do that, but companies aren't thinking about it. Now they're realizing that they should have thought about it.
Starting point is 00:22:41 Yeah, I wonder if there's a way to, if it would help to make Marty seem like an entity of its own, versus the arm of a large corporation. So there's some sense where this is just the camera that's monitoring people versus this is an entity that's a standalone entity. It has its own task and it has its own personality. The more personality you give it, the more it feels like it's not sharing data with anybody else.
Starting point is 00:23:18 When we see other human beings, our basic assumption is whatever I say to this human being, it's not like being immediately sent to this CIA. Yeah, what I say to you, no one's going to hear that, right? That's true. That's true. Well, you forget it. I mean, you do forget it.
Starting point is 00:23:34 I mean, I don't know if that even with microphones here, you forget that that's happening. But then for some reason, I think probably with Marty, I think what is done really crudely and crapily, you start to realize, oh, this is like PR people trying to make a friendly version of a surveillance machine. But I mean, that reminds me of the slight clumsiness or significant clumsiness on the initial releases of the avatars for the metaverse. I don't know, what do you actually thoughts about that?
Starting point is 00:24:08 The way the avatars, the way like Mark Zuckerberg looks in that world, you know, the metaverse, the virtual reality world where you can have virtual meetings and stuff like that. Like how do we get get that right? You have thoughts about that because it's a kind of a It's a It feels like a similar problem to social robotics, which is how you design a digital virtual world that is compelling
Starting point is 00:24:43 When you connect others there, in the same way that physical connection is. Right, I haven't looked into, I mean, I've seen people joking about it on Twitter and like posting, whatever. Yeah, but I mean, have you seen it? Cause there's something you can't quite put into words that doesn't feel genuine.
Starting point is 00:25:02 Yeah. About the way it looks. And so the question is, if you're an hour to meet virtually, what should the avatars look like for us to have similar kind of connection? Should it be really simplified? Should it be a little bit more realistic? Should it be cartoonish?
Starting point is 00:25:21 Should it be moreish? Should it be more better capturing of expressions in interesting complex ways versus like cartoonish oversimplified ways? But having video games figured this out, I'm not a gamer so I don't have any examples, but I feel like there's this whole world in video games where they've thought about all of this and depending on the game, they have different like avatars and a lot of the games are about connecting with others. I just the thing that I don't know is and again I haven't looked into this at all. I've been like shockingly not very interested in the metaverse but they must have poured so much investment into this meta. And why are people...
Starting point is 00:26:10 Why is it so bad? It's got to be a reason. There's got to be some thinking behind it, right? Well, I talked to Carmack about this, John Carmack, who's a part-time, Oculus, CTO. I think there's several things to say. One is, as you probably know, there's bureaucracy, there's large corporations, and they often, large corporations have a way of killing the indie kind of artistic flame that's required to create something really compelling. Somehow they make everything boring because they run through this whole process through the PR department, through all that kind of stuff and it somehow becomes generic through that process.
Starting point is 00:26:58 They strip out anything interesting because it could be controversial. Yeah, right. Exactly. Like, what, I mean, we're living through this now, like, with a lot of people with cancellations, almost kinds of stuff. People are nervous, and nervousness results in, like, like, usual, the assholes are ruining everything. But, you know, the magic of human connections taking risks of making a risky joke of like with people you like, we're not assholes, good people, like some of
Starting point is 00:27:31 the fun, some of the fun in the metaverse or in video games is, you know, being edgier, being interesting, revealing your personality in interesting ways, in the sexual tension or in, they're definitely paranoid about that. Oh yeah. Like in metaverse, the possibility of sexual assault and sexual harassment and all that kind of stuff, it's obviously very high, but they're, so you should be paranoid to some degree, but not too much because then you remove completely the personality of the whole thing. Then everybody's just like a vanilla bot, but like you have to have ability to be a little bit political, to be a little bit edgy, all that kind of stuff. Large companies tend to suffocate that. But in general, if you get all that, just the ability to come up
Starting point is 00:28:25 So, but in general, if you get all that, just the ability to come up were really cool, beautiful ideas. If you look at, I think Grimes tweeted about this, which is very critical about the metaverse, is that, you know, the independent game designers have solved this problem of how to create something beautiful and interesting and compelling. They do a really good job. So you have to let those kinds of minds, the small groups of people design things and let them run with it, let them run wild and do edgy stuff.
Starting point is 00:28:59 But otherwise you get a clippy type of situation, which is a very generic looking thing. But even clippy has some, like that's kind of wild that you would take a paper clip and put eyes on it. And suddenly people are like, oh, you're annoying, but you're definitely a social agent. And I just feel like that wouldn't even, that
Starting point is 00:29:26 clippy thing wouldn't even survive Microsoft or Facebook of today, matter of today, because it would be like, what, there would be these meetings about why is it a people? Like, why don't we, it's not sufficiently friendly. Let's make it, you know, and then all of a sudden, the artist that with whom it originated is killed. And it's all PR marketing people and all of that kind of stuff. No, they do important work to some degree, but they kill the creativity.
Starting point is 00:29:57 I think the killing of the creativity is in the whole. Like, okay, so what I know from social robotics is like, obviously, if you create agents that, okay, so what I know from social robotics is like, obviously, if you create agents that, okay, so take for an example, you create a robot that looks like a humanoid, and it's, you know, Sophia or whatever. Now, suddenly, you do have all of these issues where are you reinforcing an unrealistic beauty standard? Are you objectifying women? forcing an unrealistic beauty standard. Are you objectifying women? Why is the robot white? So you have, but the thing is, I think that with creativity,
Starting point is 00:30:32 you can find a solution that's even better where you're not even harming anyone and you're creating a robot that looks like not humanoid, but like something that people relate to even more. And now you don't even have any of these bias issues that you're creating. And so how do we create that within companies? Because I don't think it's really about like I, because I, you know, maybe we disagree on that.
Starting point is 00:30:59 I don't think that edginess or humor or interesting things need to be things that harm or hurt people or that people are against. There are ways to find things that everyone is fine with. Why aren't we doing that? The problem is there's departments that look for harm and things. And so they will find harm and things that have no harm. That's the big problem because their whole job is to find harm in things. So what you said is completely correct, which is, edginess should not hurt, doesn't necessarily, doesn't need to be a thing that hurts people. Obviously, great humor, great personality, doesn't have to, like, clippy. But, yeah, I mean, it, but it's tricky to get right. Now, I'm not exactly sure. I don't know. I don't know why a large corporation with a lot of funding can't get this right.
Starting point is 00:31:52 I do think you're right that there's a lot of a version to risk. And so if you get lawyers involved or people whose job it is, like you say, to mitigate risk, they're just going to say no to most things that could even be in some way. Yeah. Yeah, you get the problem in all organizations. So I think that you're right that that is a problem. I think what's the way to solve that in large organizations, the stuff, Steve Jobs, that's the characters. Unfortunately, you do need to have, I think, from a designer perspective, or maybe like
Starting point is 00:32:23 a Johnny Ive that is almost like a dictator Ive, that is almost like a dictator. Yeah, you want a benevolent dictator. Yeah, who rolls in and says, like, cuts through the lawyers to PR, but has a benevolent aspect, like, yeah, that has a good heart and make sure, like, I think all great artists and designers create stuff that doesn't hurt people.
Starting point is 00:32:45 If you have a good heart, you're going to create something that's going to actually make a lot of people feel good. That's what people like Johnny Ive, what they love doing, is creating a thing that brings a lot of love to the world. They imagine millions of people using the thing and instills them with joy. That's that you could say that about social robotics, you could say that about the metaverse. It shouldn't be done by the PR people. Should be done by the science.
Starting point is 00:33:14 I creep, PR people ruin everything. Yeah, all the fun. In the book you have a picture, I just have a lot of ridiculous questions. You have a picture of two hospital delivery robots with a caption that reads by the way see your book I appreciate that it keeps the humor in you didn't run it by the PR department. No no one edited the book You got rushed through The the caption reads two hospital delivery robots
Starting point is 00:33:42 The caption reads, two hospital delivery robots, who's sexy nurse names, Roxy and Lola made me roll my eyes so hard, they almost fell out. What aspect of it made you roll your eyes? Is it the naming? It was the naming. The form factor is fine.
Starting point is 00:33:57 It's like a little box on wheels. The fact that they named them also great, that'll let people enjoy interacting with them. We know that even just giving a robot a name people will, it facilitates technology adoption. People will be like, oh, you know, bedsy made a mistake, let's help her out instead of the stupid robot doesn't work. But why lowly and lowland rocksy? Like those are to you too sexy. I mean, there's a research showing that Those are to you too sexy. I mean, there's research showing that a lot of robots are named according to gender biases
Starting point is 00:34:32 about the function that they're fulfilling. So, you know, robots that are helpful in assistance and are like nurses are usually female gendered, robots that are, you know, powerful, all wise computers, like Watson, usually you have like a booming male coded voice and name. So like, like, that's one of those things, right? You're opening a can of worms for no reason, for no reason. You can avoid this whole can of worms. Yeah. Just give it a different name. Like why Roxy? It's because people aren't even thinking. So to some extent, I don't like PR departments, but getting some feedback on your work
Starting point is 00:35:12 from a diverse set of participants, listening and taking in things that help you identify your own blind spots. And then you can always make your good leadership choices and good, like you can still ignore things that you don't believe are an issue, but having the openness to take in feedback and making sure that you're getting the right feedback from the right people, I think that's really important. So don't unnecessarily propagate the biases of society. Yeah, why? In the design.
Starting point is 00:35:45 But if you're not careful, when you do the research of, like, you might, if you ran a poll with a lot of people of all the possible names these robots have, they might come up with Roxy and Lola as names they would enjoy most. Like like that could come up as the highest. As then, you do marketing research. And then, well, that's what they did with Alexa. They did marketing research.
Starting point is 00:36:15 And nobody wanted the male voice. Everyone wanted it to be female. What do you think about that? If I were to say, I think the role of a great designer, again, to go back to Johnny Yves, is to throw out the marketing research. Like, take it in, do it, learn from it. But like, if everyone wants to, like,
Starting point is 00:36:39 say to be a female voice, the role of the designers to think deeply about the future of social agents in the home and think, like, what does that future look like and try to reverse engineer that future? So like, in some sense, there's this weird tension, like, you want to listen to a lot of people, but at the same time, you want to, you're creating a thing that defines the future of the world and the people that you're listening to are part of the past. So that we are attention.
Starting point is 00:37:12 Yeah, I think that's true. And I think some companies like Apple have historically done very well at understanding a market and saying, you know what our role is? It's not to listen to what the current market says. It's to actually shape the market and shape consumer preferences and companies have the power to do that. They can, before we're thinking, and they can actually shift what the future of technology looks like. And I agree with you that I would like to see more of that, especially when it comes to
Starting point is 00:37:45 especially when it comes to existing biases that we know, or that I think there's the low hanging fruit of companies that don't even think about it at all and aren't talking to the right people and aren't getting the full information. And then there's companies that are just doing the safe thing and giving consumers what they want now, but to be really forward looking and be really successful, I think you have to make
Starting point is 00:38:04 some judgment calls about what the future is going to be. But do you think it's still useful to gender and to name the robots? Yes, I mean, gender is a minefield, but people... It's really hard to get people to not gender a robot in some way. So if you don't give it a name or you give it a ambiguous voice, people will just choose something. And maybe that's better than just entrenching something that you've decided is best. But I do think it can be helpful on the anthropomorphism engagement level to give it attributes
Starting point is 00:38:48 that people identify with. Yeah, I think a lot of robotists, I know, they don't gender the robot. They don't even try to avoid naming the robot. Or naming it something that can be used as a name in conversation kind of thing. And I think that actually, that's irresponsible, because people are going to anthropomorphize the thing anyway. So you're just removing from yourself the responsibility of how they are going to anthropomorphize it. That's a good point.
Starting point is 00:39:21 And so like, you want to be able to, like they're going to do it, you have to start to think about how they're going to do it. Even if the robot is like a boss and dynamics robot, that's not supposed to have any kind of social component, they're obviously going to project a social component to it. Like that arm, I worked a lot with quadrupe as now with robot dogs. You know, that arm people think is ahead immediately. Yeah.
Starting point is 00:39:50 It's supposed to be an arm, but they start to think it's ahead. And you have to like acknowledge that. You can't, I mean, they do now. They do now? Well, they've deployed the robots and people are like, oh my God, the cops are using a robot dog. And so they have this PR nightmare. And so they're like, oh my God, the cops are using a robot dog. And so they have this PR nightmare.
Starting point is 00:40:05 And so they're like, oh, yeah. Okay, maybe we should hire some major eye people. Well, Boston Dynamics is an interesting come. Or any of the others that are doing similar thing because their main source of money is in industrial applications. So like surveillance, the factories, and doing dangerous jobs.
Starting point is 00:40:29 So to them, it's almost good PR for people to be scared of these things because it's for some reason, as you talk about people are naturally for some reasons scared, we could talk about that of robots. And so it becomes more viral, like playing with that little fear. And so it's almost like a good PR because ultimately, they're not trying to put them in the home and have a good social connection. They're trying to
Starting point is 00:40:55 put them in factories. And so they have fun with it. If you watch Boston Dynamics videos, they're aware of it. Oh, yeah. I mean, mean, the video is for sure that they put out. It's almost like an unspoken tongue in cheek thing. They're aware of how people are going to feel when you have a robot that does like a flip. Now most of the people are just like excited about the control problem of it, like how to make the whole thing happen. But they're aware when people see. Well, I think they became aware. I think that in the beginning,
Starting point is 00:41:32 they were really, really focused on just the engineering. I mean, they're at the forefront of robotics, like locomotion and stuff. And then when they started doing the videos, I think that was kind of a labor of love. I know that the former CEO, Mark, he oversaw a lot of the videos and made a lot of them himself. And he's even really really detail oriented. There can't be some sort of incline that would give the robot an advantage. He was very, I love integrity about the authenticity of them.
Starting point is 00:42:01 But then when they started to go viral, I think that's when they started to realize, oh, there's something interesting here that, you know, I don't know how much they took it seriously in the beginning other than realizing that they could play within the videos. I know that they take it very seriously now. What I like about Boston Dynamics and similar companies, it's still mostly run by engineers. But, you know, I've had my criticisms. There's a bit more PR leaking in. But those videos are made by engineers because that's what they find fun. It's like testing their robustness of the system.
Starting point is 00:42:46 I mean, they're having a lot of fun there with the robots. Totally. Have you been to visit? Yeah, yeah, it's one of the most incredible. I mean, because I have eight robot dogs now. Wait, you have eight robot dogs. What? So they're just walking around your place?
Starting point is 00:43:11 Like, where are you? Yeah, I'm walking around them. That's actually one of my goals is to have at any one time always a robot moving. Oh. I'm far away. That's the Navvicious goal. Well, I have like more room buzz than I know what to do with the room their program
Starting point is 00:43:27 So the the programmable room buzz nice and I have a bunch of little like I built the My I'm not finished with the butter robot from Rick and Morty. I still have a bunch of robots everywhere, but the thing is What happens is you're working on one robot at a time and That becomes like a little project. It's actually very difficult to have just a passively functioning robot always moving. Yeah. And that's a dream for me, because I'd love to create that kind of little world. So, the impressive thing about Boston Dynamics to me was to see like hundreds of spots.
Starting point is 00:44:09 And like, the most impressive thing that still sticks with me is there was a spot robot walking down the hall, seemingly with no supervision whatsoever. And he was wearing he or she, I don't know, was wearing a cowboy hat. It was just walking down the hall and nobody paying attention. And it's just like walking down this long hall.
Starting point is 00:44:32 And I'm like looking around, is anyone like what's happening here? So presumably some kind of automation was doing the map. I mean, the whole environment is probably really well mapped. But it was just, it gave me a picture of a world where robot is doing this thing, wearing a cowboy hat, just going down the hall, like getting some coffee or whatever.
Starting point is 00:44:55 Like I don't know what it's doing, what's the mission, but I don't know, for some reason, it really stuck with me. You don't often see robots that aren't part of a demo or that aren't, uh, you know, like a, with a semi-autonomous or autonomous vehicle, like directly doing a task. This was just chilling. Yeah.
Starting point is 00:45:13 Walking around. I don't know. Well, yeah, you know, I mean, we're at MIT, like, when I first got to MIT, I was like, okay, where's all the, where's all the robots? And they were all like broken broken or not demoing. So yeah. And what really excites me is that we're about to have, that we're about to have so many moving rope about to. Well, it's coming.
Starting point is 00:45:35 It's coming in our lifetime that we will just have robots moving around. We're already seeing the beginnings of it. There's delivery robots and some cities on the sidewalks. And I just love seeing the TikToks of. There's delivery robots and some cities on the sidewalks. And I just love seeing like the TikToks of people reacting to that. Because yeah, you see a robot walking on the hall with a cowboy hat.
Starting point is 00:45:52 You're like, what the fuck? What is this? This is awesome, and scary, and kind of awesome. And people either love or hate it. That's one of the things that I think companies are underestimating that people will either love a robot or hate a robot and nothing in between. So it's just again an exciting time to be alive.
Starting point is 00:46:10 Yeah, I think kids almost universally, at least in my experience, love them. Love legged robots. If they're not loud, my son hates the rumor because ours is loud. Oh, that, yeah. No, the legs, the legs, the difference. Did they understand the rumor to be a robot? Oh, yeah, my kids, that's one of the first words they learned. They know how to say beep, boop. And yes, they think the rumor is a robot.
Starting point is 00:46:40 Does they project intelligence out of the thing? Well, we don't really use it around them anymore for the reason that my son is scared of it. Yeah, that's right. I think they would. Like even in Roomba, because it's moving around on its own, I think kids and animals view it as an agent. So what do you think if we just look at the state of the art of robotics, what do you think robots are actually good at today?
Starting point is 00:47:09 So if we look at today, you mean physical robots? Yeah, physical robots. Wow. Like what are you impressed by? So I think a lot of people, I mean, that's what your book is about is, have maybe a, a, not a perfectly calibrated understanding of where we are in terms of robotics. Was difficult to robotics, was easy in robotics. Yeah. We're way behind where people think we are. So what's impressive to me, so let's see. Oh, one thing that came out recently was Amazon has this new warehouse robot,
Starting point is 00:47:47 and it's the first autonomous warehouse robot that can be safe for people to be around. And so, most people, I think, envision that our warehouses are already fully automated and that there's just like robots doing things. It's actually still really difficult to have robots and people in the same space because it's dangerous for the most part. Robots, you know, because especially robots that have to be strong enough to move something heavy, for example, they can really hurt somebody. And so until now, a lot of the warehouse robots had to just move along like pre-existing lines, which really restricts what you can do.
Starting point is 00:48:29 And so having, I think that's one of the big challenges and one of the big, like exciting things that's happening is that we're starting to see more co-bottics in industrial spaces like that where people and robots can work side by side and not get harmed. Yeah, that's what people don't realize sort of the physical manipulation task with humans. It's not that the robots want to hurt you. I think that's what people are worried about. Like this malevolent robot, it's a lot of its own and wants to destroy all humans. Now, it's actually very difficult to know where the
Starting point is 00:49:05 human is. Yeah. And to respond to the human and dynamically and collaborate with them on a task, especially if you're something like an industrial robotic arm, which is extremely powerful. See, some of those arms are pretty impressive. No, that you can just, you can grab it, you can move it. So the collaboration between human robot and the factor setting is really fascinating. Yeah. Do you think they'll take our jobs? I don't think it's that simple. I think that there is a ton of disruption that's happening and will continue to happen.
Starting point is 00:49:47 I think speaking specifically of the Amazon warehouses, that might be an area where it would be good for robots to take some of the jobs that are, where people are put in a position where it's unsafe and they're treated horribly. And probably it would be better if a robot did that and Amazon is clearly trying to automate that job away. So I think there's gonna be a lot of disruption.
Starting point is 00:50:11 I do think that robots and humans have very different skill sets. So while a robot might take over a task, it's not gonna take over most jobs. I think just things will change a lot. I don't know, one of the examples I have in the book is mining. So there, you have this job that is very unsafe, and that requires a bunch of workers and puts them in unsafe conditions.
Starting point is 00:50:40 And now, you have all these different robotic machines that can help make the job safer. And as a result, now people can sit in these like air conditions, remote control stations, and like control these autonomous mining trucks. And so that's a much better job, but also they're employing less people now. So it's just a lot of,
Starting point is 00:51:04 I think from a bird's eye perspective, you're not going to see job loss. You're going to see more jobs created because the future is not robots just becoming like people and taking their jobs. The future is really a combination of our skills and then the supplemental skills that that robots have to increase productivity to help people have better safer jobs to Give people work that they actually enjoy doing and are good at But it's really easy to say that from a bird's-eye perspective and Ignore kind of the the rubble on the ground as we go through these transitions because of course specific jobs are going to get lost.
Starting point is 00:51:49 If you look at the history of the 20th century, it seems like automation constantly increases productivity and improves the average quality of life. So it's been always good. So like thinking about this time being different is that it would need to go against the lessons of history. It's true. And the other thing is I think people think that automation as a physical task is easy. I was just in Ukraine and the interesting thing is I mean, there's a lot of difficult and Dark lessons just about a war zone
Starting point is 00:52:30 But one of the things that happens in war is there's a lot of mines that are placed This one of the big problems for years after a war is even over is is the entire landscape is covered in minds. And so there's a demining effort. And you would think robots would be good at this kind of thing, or like you're intuition would be like, well, see you have unlimited money, and you wanna do a good job of it, unlimited money.
Starting point is 00:53:01 You would get a lot of really nice robots, but no, humans are still far superior. Or animals. Or animals, but right, but humans with animals together. Yeah. You can't just have dog with a hat. But yes, and but figuring out also how to disable the mind Obviously the easy thing the thing a robot can help with is to find the mind and blow it up But that kind of destroy the landscape that that really does a lot of damage to the land you want to Disable the mind and to do that because of all the different, all
Starting point is 00:53:45 the different edge cases of the problem requires a huge amount of human-like experience. It seems like, so it's mostly done by humans. They have no useful robots. They don't want robots. Yeah. I think we overestimate what we can automate. In especially in the physical realm. Yeah.
Starting point is 00:54:03 It's weird. I mean, it's continues that the story of humans, we think we're shity at everything in the physical world, including driving. We think everybody makes fun of themselves and others for being shity drivers, but we're actually kind of incredible. No, incredible. And that's why, like, this way Tesla still says that if you're in the driver's seat, like, you's why Tesla still says that if you're in the driver's seat, like you, you are ultimately responsible.
Starting point is 00:54:27 Because the ideal for, I mean, I mean, you know more about this than I do, but he like robot cars are great at predictable things and can react faster and more precisely than a person and can do a lot of the driving. And then the reason that we still don't have autonomous vehicles on all the roads yet is because of this long tail of just unexpected occurrences where a human immediately understands that's a sunset and not a traffic light. That's a forcing carriage ahead of me on the highway, but the car is never encountered that before. So like in theory, combining those skill sets is what's going to really be
Starting point is 00:55:05 powerful. The only problem is figuring out the human robot interaction and the hand off. So like in cars, that's a huge problem right now figuring out the handoffs. But in other areas, it might be easier. And that's really the future is human robot interaction. What's really hard to improve, it's terrible that people die in car accidents, but I mean, it's like 70, 80, 100 million miles, one death per 80 million miles. That's like really hard to beat for a robot. It's like incredible, like think about it. Like the how many people,
Starting point is 00:55:47 like just the number of people throughout the world that are driving every single day, all of this, you know, Steve deprived drunk, uh, distracted all of that and still very few die relatives of what I would imagine. If I were to guess back in the horse, see, when I was like in the beginning of the 20th century riding my horse, I would talk so much shit about these cars. I'd be like, this is going to, this is extremely dangerous. These machines traveling at 30 miles an hour or whatever the hell they're going at. This is irresponsible.
Starting point is 00:56:22 It's unnatural and it's going to be destructive to all of human society. But then it's extremely surprising how human's adapt to the thing. And they know how to not kill each other. I mean, that at ability to adapt is incredible. And to mimic that in the machine is really tricky. Now that said, what Tesla is doing, I mean, I wouldn't have guessed how far a machine learning can go on the internet alone. It's really, really incredible. And people that are, at least from my perspective, people that are kind of, you know, critical of Elon and those efforts, I think they're not given enough credit at how much progress we made, how much incredible progress has been made in that direction.
Starting point is 00:57:09 I think most of their bodies community wouldn't have guessed how much you can do on vision alone. It's kind of incredible. Because we would be, I think it's that approach which is relatively unique has challenged the other competitors to step up their game. So if you're using LiDAR, if you're using mapping, that challenges them to do better, to scale faster, and to use machine learning and computer vision as well, to integrate both LiDAR and vision. So it's kind of incredible. And I'm not, I don't know if I even have a good intuition of how hard driving is anymore.
Starting point is 00:57:53 Maybe it is possible to solve. So all the sunset, all the interesting mention. Yeah, the question is one. Yeah, I think it's not happening as quickly as people thought it would because it is more complicated. But I wouldn't have, I agree with you. My current intuition is that we're going to get there. I think we're going to get there too. But I didn't before. I wasn't sure we're going to get there without, like, with current technology. So, you know, I was kind of,
Starting point is 00:58:26 this is, like, with vision alone, my intuition was you're gonna have to solve common-sense reasoning. You're gonna have to solve some of the big problems in artificial intelligence, not just perception. Yeah.
Starting point is 00:58:46 Like, you have to have a deep understanding of the world — at least that was my sense — but now I'm continuously surprised how well the thing works. Yeah. Obviously others have stopped, but Elon continues saying we're gonna solve it in a year.
Starting point is 00:59:01 Well, yeah, that's the thing — he has bold predictions. Yeah, and everyone else used to be doing that too, but then they kind of went, all right, maybe let's not promise we're going to solve level four driving by 2020. Let's chill on that. But people are still quietly trying. I mean, the UK just committed 100 million pounds to research and development to speed up the process of getting autonomous vehicles on the road. Everyone can see that it is solvable, and it's going to happen, and it's going to change everything, and they're still investing in it. And, like, Waymo low-key has driverless cars in Arizona.
Starting point is 00:59:47 Like, you can get — you know, there are just robots out there. It's weird. Have you ever been in one? No. It's so weird. It's so awesome. Because the most awesome experience is the wheel turning while you're sitting in the back. It's like, I don't know.
Starting point is 01:00:04 It feels like you're a passenger with that friend who's a bit of a crazy driver. It feels like, shit, I don't know, are you ready to drive, bro? You know, that kind of feeling. But then you kind of experience that nervousness and the excitement of trusting another being — and in this case it's a machine — and that's really interesting, just even introspecting your own feelings about the thing. They're not doing anything in terms of making you feel better. Like, at least Waymo — I think they went with the approach
Starting point is 01:00:49 of, like, let's not try to put eyes on the thing. It's a wheel; we know what that looks like. It's just a car. It's a car — get in the back. Let's not discuss this at all. Let's not discuss the fact that this is a robot driving you, and you're in the back, and if the robot wants to start driving 80 miles an hour and drive off a bridge, you have no
Starting point is 01:01:09 recourse. Let's not discuss this — you just get in the back. There's no discussion about how shit can go wrong. There's no eyes. There's nothing. There's a map showing what the car can see. Like, you know, what happens if it's a HAL 9000 situation? Like, I'm sorry, I can't do that. You have a button; you can call customer service. Oh God, and then you get put on hold for two hours? Yeah, probably.
Starting point is 01:01:38 But, you know, currently what they're doing, which I think is understandable, is the car can just pull over and stop and wait for help to arrive. And then a driver will come and they'll actually drive the car for you. But then, you know, what if you're late for a meeting, or all that kind of stuff? For the more dystopian version — isn't it The Fifth Element where... is it Will Smith in that movie? Who's in that movie? No, Bruce Willis? Bruce Willis. Oh yeah. And he gets into a robotic cab or car or something, and then because
Starting point is 01:02:11 he's violated a traffic rule, it locks him in. Yeah. And he has to wait for the cops to come, and he can't get out. So, yeah, we're gonna see stuff like that, maybe. Well, I believe that of the companies that have robots, the only ones that will succeed are the ones that don't do that — meaning they respect privacy. You think so? Yeah, because they're going to have to earn people's trust. Yeah, but Amazon works with law enforcement and gives them the data from the Ring cameras. So why should this be any different? Yeah, yeah. Do you have a Ring camera?
Starting point is 01:02:52 Uh, no. Okay. No, but basically any security camera, right? I've got Google's, whatever they have. We have one, but we store the data on a local server, because we don't want it to go to law enforcement — because all the companies are doing it. They're doing it.
Starting point is 01:03:11 I bet Apple wouldn't. Yeah. Apple's the only company I trust, and I don't know for how much longer. I don't know. Maybe that's true for cameras, but with robots, people are just not gonna let a robot inside their home. Like, one time somebody gets arrested because of something a robot sees,
Starting point is 01:03:35 and that's gonna destroy a company. You don't think people are gonna be like, well, that wouldn't happen to me, that happened to a bad person? I think they would. Yeah. Because in the modern world, people — I mean, have you seen Twitter? They get extremely paranoid about any kind of surveillance.
Starting point is 01:03:54 But the thing that I've had to learn is that Twitter is not the modern world. Like, when I go, you know, inland to visit my relatives, that's a different discourse that's happening. I think the whole tech criticism world — yeah — it's loud in our ears because we're in those circles. You think you can be a company that does social robotics and not win over Twitter? That's a good question. I feel like the early adopters are all on Twitter, and it feels like you have to win them over.
Starting point is 01:04:26 Feels like nowadays you have to win over TikTok, honestly. I don't know. TikTok — is that a website? I need to check it out. Yeah, and that's an interesting one, because China is behind that one. Exactly.
Starting point is 01:04:44 So it's compelling enough that maybe people would be willing to give up privacy and that kind of stuff. That's really scary. I mean, I'm worried about it. And there have been some developments recently that are super exciting, like the large language models. Wow, I did not anticipate those improving so quickly. And those are going to change everything. And one of the things that I'm trying to be cynical about is that I think they're going to have a big impact
Starting point is 01:05:19 on privacy and data security, and manipulating consumers, manipulating people. Because suddenly you'll have these agents that people will talk to, and they won't care, or won't know, at least on a conscious level, that it's recording the conversations. So kind of like we were talking about before. And at the same time, the technology is so freaking exciting that it's going to get adopted. It's not even just the collection of data, but the ability to manipulate at scale.
Starting point is 01:05:48 So what do you think about the AI — the engineer from Google that thought LaMDA is sentient? You actually had a really good post from somebody else — I forgot her name. It's brilliant. I can't believe I didn't know about her. Thanks to you. Janelle Shane. Yeah, from AI Weirdness.
Starting point is 01:06:07 Oh, yeah, I love her book. Oh, she's great. I made a note for myself to reach out to her. She's amazing. She's hilarious and brilliant and just a great summarizer of the state of AI. I think that was from her — where she had an AI explaining that it's a squirrel. Oh yeah, because in the transcripts that the engineer released, LaMDA kind of talks about the experience of human-like feelings, and I think even consciousness. And so she was like, oh cool, that's impressive. I wonder if an AI can also describe the experience of being a squirrel.
Starting point is 01:06:47 And so she interviewed — I think it was GPT-3 — about the experience of being a squirrel. And then she did a bunch of other ones too, like, what's it like being a flock of crows? What's it like being an algorithm that powers a Roomba? And you can have a conversation about any of these things, and they're very committed. It's pretty convincing, yeah. And that's even GPT-3, which is not state of the art. Right. It's convincing as a squirrel.
Starting point is 01:07:11 I mean, you should check it out, because it really is. It's like, yeah, that probably is what a squirrel would talk about. It's like: are you excited? What's it like being a squirrel? It's fun — I get to run around all day. Like, how do you think people will feel
Starting point is 01:07:31 when you tell them that you're a squirrel? You know, or — I forget what it was — like, a lot of people might be scared to find out that you're a squirrel, or something like this. And then the system answers pretty well. Like: what do you think when they find out you're a squirrel? I hope they'll see how fun it is to be a squirrel. What do you say to people who don't believe you're a squirrel?
Starting point is 01:07:58 I say, come see for yourselves. I am a squirrel. That's great. Well, I think it's really great, because the two things to know about it are: first of all, just because the machine is describing an experience doesn't mean it actually has that experience. But then, secondly, these things are getting so advanced and so convincing at describing these things and talking to people. I mean, just the implications for health, education, communication, entertainment, gaming — all of the applications — it's mind-boggling what we're going to be able to do with
Starting point is 01:08:35 this, and that my kids are not going to remember a time before they could have conversations with artificial agents. Do you think they would... because to me, the focus in the media has been: well, this engineer surely is hallucinating; the thing is not sentient. But to me — first of all, it doesn't matter if it is or not — this is coming, where a large number of people
Starting point is 01:09:04 would believe a system is sentient, including engineers within companies. So, in that sense, you start to think about a world where your kids aren't just used to having a conversation with a bot, but used to believing — kind of having an implied belief — that the thing is sentient. Yeah, I think that's true. And one of the things that bothered me about all of the coverage in the tech press about this incident — obviously, I don't believe the system is sentient.
Starting point is 01:09:35 I think that it can convincingly describe that it is. I don't think it's doing what he thought it was doing and actually experiencing feelings. But a lot of the tech press was about how he was wrong, depicting him as kind of naive. And it's not naive. There's so much research in my field showing that people do this — even experts. They might be very clinical when they're doing human
Starting point is 01:10:01 robot interaction experiments with a robot that they've built. And then you bring in a different robot and they're like, oh, look at it, it's having fun, it's doing this. That happens in our lab all the time. We are all this guy. And it's going to be huge. So I think that the goal is not to discourage this kind of belief, or to design systems that people won't think are sentient. I don't think that's possible. I think you're right, this is coming. It's something that we have to acknowledge
Starting point is 01:10:31 and even embrace and be very aware of. So one of the really interesting perspectives that your book takes on a system like this is to not compare it to humans, but to compare it to animals and how we see animals. Can you, again, kind of sneak up on it and try to explain why this analogy is better than the human analogy — the analogy of robots as animals? Yeah.
Starting point is 01:10:59 And it gets trickier with the language stuff, but we'll get into that too. I think that animals are a really great thought experiment when we're thinking about AI and robotics, because, again, comparing them to humans leads us down the wrong path, both because it's not accurate, but also, I think, for the future, we don't want that — we want something that's a supplement. But animals — because we've used them throughout history for so many different things, we domesticated them not because they do what we do, but because what they do is different, and that's useful. And whether we're talking about companionship, whether we're talking about work integration, whether we're talking about responsibility for harm, there are just so many things we can draw on in that history
Starting point is 01:11:45 from these entities that can sense, think, make autonomous decisions, and learn, that are applicable to how we should be thinking about robots and AI. And the point of the book is not that they're the same thing — that animals and robots are the same. Obviously, there are tons of differences there. Like, you can't have a conversation with a squirrel, right? But the point... I do it all the time.
Starting point is 01:12:08 Oh, really? By the way, squirrels are the cutest. I project so much onto squirrels. I wonder what their inner life is. I suspect they're much bigger assholes than we imagine. Really? Like, if it was a giant squirrel, it would fuck you over so fast. If it had the chance, it would take everything you own.
Starting point is 01:12:24 It would eat all your stuff. But it's small, and the furry tail — the furry tail is a weapon against human consciousness and cognition. It wins us over. That's what cats do too. Cats out-compete squirrels. And dogs — like, dogs have love. Cats have no soul. I'm just kidding, people get so angry
Starting point is 01:12:52 when I talk shit about cats. I love cats. Anyway, so yeah, you're describing all the different kinds of animals that get domesticated. And it's a really interesting idea that it's not just pets — there's all kinds of domestication going on, and they all have all kinds of uses. Yes. Like the ox, which you propose might be,
Starting point is 01:13:15 at least historically, one of the most useful domesticated animals. It was a game changer, because it revolutionized what people could do economically, etc. Just like that, robots are going to change things economically; they're going to change landscapes. Cities might even get rebuilt around autonomous vehicles or drones or delivery robots. I think, just the same way animals have really shifted society — and society has adapted, also, to socially accepting animals as pets — we're gonna see very similar things with robots. So I think it's a useful analogy. It's not a perfect one, but I
Starting point is 01:13:53 think it helps us get away from this idea that robots can, should, or will replace people. What are some interesting uses of animals, if you remember? Ferrets, for example. Oh, yeah, the ferrets. They still do this. They use ferrets to go into narrow spaces that people can't go into, like a pipe — or they'll use them to run electrical wire. I think they did that for Princess Di's wedding.
Starting point is 01:14:16 There are so many weird ways that we've used animals, and still use animals, for things that robots can't do. Like the dolphins that they use in the military. I think Russia still has dolphins, and the US still has dolphins, in their navies: mine detection, looking for lost underwater equipment. Some rumors about using them for weaponry — which I think Russia's like, sure, believe that.
Starting point is 01:14:49 And America's like, no, no, we don't do that. Who knows? But they started doing that in, like, the '60s. They started training these dolphins because they were like, oh, dolphins have this amazing echolocation system that we can't replicate with machines, and they're trainable. So we're going to use them for all the stuff that we can't do with machines or by ourselves. And they've tried to phase out the dolphins. I know the US has invested a lot of money in trying to make robots do the mine detection,
Starting point is 01:15:16 but, like you were saying, there are some things that the robots are good at and some things that biological creatures are better at, so they still have the dolphins. There are also pigeons, of course. Oh, yeah, pigeons. Oh my gosh, there are so many examples. The pigeons — I mean, the pigeons were the original hobby photography drone. They also carried mail for thousands of years, letting people communicate with each other in new ways. So the thing that I like about the animal analogy is they have all these physical abilities, but also sensing abilities, that we just don't have. And that's just so useful. And that's robots,
Starting point is 01:15:53 right? Robots have physical abilities. They can help us lift things or do things that we're not physically capable of. They can also sense things. I just still feel like it's a really good analogy. Yeah, it's really strong. And it works because people are familiar with it. What about companionship? When we start to think about cats and dogs — pets that seem to serve no purpose whatsoever except the social connection. Yeah, I mean, that's kind of a newer thing.
Starting point is 01:16:22 At least in the United States, dogs used to have a purpose. They used to be guard dogs, or they had some sort of function. And then, at some point, they became just part of the family. And it's so interesting how there are some animals that we've treated as workers, some that we've treated as objects, some that we eat, and some that are part of our families. And that's different across cultures. And I'm convinced that we're gonna see
Starting point is 01:16:53 the same thing with robots, where people are gonna develop strong emotional connections to certain robots that they relate to, either culturally or personally, emotionally, and then there are gonna be other robots that we don't treat the same way. I wonder, does that have to do more with the culture and the people, or with the robot design? Is there an interplay between the two? Like, why did dogs and cats out-compete oxen and, I don't know, other farm animals to really get inside the home and get inside our hearts?
Starting point is 01:17:28 Yeah, I mean, people point to the fact that dogs are very genetically flexible and can evolve much more quickly than other animals. And so evolutionary biologists think that dogs evolved to be more appealing to us. And then, once we learned how to breed them, we started breeding them to be more appealing to us too — which is not something that we necessarily would have been able to do with cows, although we've bred them to make more milk for us. But part of it is also culture. I mean, there are cultures where people still eat dogs today, and then there are other cultures where we're like, oh, no, that's terrible.
Starting point is 01:18:07 We would never do that. And so I think there are a lot of different elements that play in. I wonder how good we are at understanding dogs. Is it the way they use their eyes? They're able to communicate affection, all those kinds of things. It's really interesting what dogs do. There are whole conferences on dog consciousness and cognition and all that kind of stuff. Now, cats are a mystery to me, because they seem to not give a shit about
Starting point is 01:18:31 the human. But they're warm and fluffy. But they're also passive-aggressive. So, at the same time, they're dismissive of you in some sense. I think some people like that about people. Yeah, they want that push and pull in relationships. They don't want loyalty or unconditional love — that would mean they haven't earned it. Yeah. Yeah.
Starting point is 01:18:58 Yeah. And maybe that says a lot more about the people than it does about the animal. Oh, yeah, we all need therapy. So I'm judging harshly the people that have cats — or the people that have dogs. Maybe the people that have dogs are desperate for attention and unconditional love, and they struggle to earn meaningful connections. I don't know. Maybe people are talking about you and your robot pets in the same way. Yeah, it is kind of sad. There are just robots everywhere. But I mean, I'm joking
Starting point is 01:19:43 about it being sad, because I think it's kind of beautiful. I think robots are beautiful in the same way that pets are, even children, in that they capture some kind of magic. Social robots have the capacity for that same kind of magic of connection. I don't know what that is. When they're brought to life and they move around — the way they make me feel, I'm pretty convinced, is the way they will make billions of people feel. I don't think I'm some weird robotics guy. I'm not. I mean, you are, but not in this way. Why? I mean, I can put on my normal-human hat and just see this: oh, there's a lot of possibility there of something cool, just like with dogs.
Starting point is 01:20:37 Yeah, what is it? Why are we so into dogs, or cats? It's way different from us. It is. It's drooling all over the place with its tongue out. It's a weird creature that used to be a wolf. Why are we into this thing? Well, dogs can either express or mimic a lot of emotions that we recognize, and I think that's a big thing. A lot of the magic of animals and robots is our own self-projection. The easier it is for us to see ourselves in something and project human emotions or qualities or traits onto it, the more we'll relate to it. And then you also have the movement, of course. I think that's also really — that's why I'm so interested in physical
Starting point is 01:21:24 robots, because that's, I think, the visceral magic of them. I mean, there's research showing that we're probably biologically hardwired to respond to autonomous movement in our physical space, because we've had to watch out for predators, or whatever the reason is. And so animals and robots are very appealing to us as these autonomously moving things that we view as agents instead of objects.
Starting point is 01:21:49 I love the moment — which is something I've been particularly working on — when a robot is doing its own thing and then it recognizes you, the way a dog does, and it looks at you like this. That moment of recognition. Like, say you're walking in an airport or on the street, and there are just, you know, hundreds of strangers, but then you see somebody you know, and you wake up with that excitement of seeing somebody you know and saying hello and all that kind of stuff. That's a magical moment. I think, especially with a dog, it makes you feel noticed and heard and loved — that somebody looks at you and recognizes you,
Starting point is 01:22:40 that it matters that you exist. Yeah, you feel seen. Yeah, and that's a cool feeling. And I honestly think robots can give you that feeling. Oh yeah, totally. Currently, with Alexa — I mean, one of the downsides of these systems is they're servants. Like,
Starting point is 01:23:01 part of it, you know, is they're trying to maintain privacy, I suppose, but I don't feel seen with Alexa, right? I think that's going to change. I think you're right. And I think that's the game-changing nature of things like these large language models, and the fact that these companies are investing in embodied versions of Alexa that move around,
Starting point is 01:23:27 like Astro. Can I just say — yeah, Astro — I haven't... is that out? I mean, it's out; you can't just buy one commercially yet, but you can apply for one. Yeah. My gut says that these companies don't have the guts to do the personalization. This goes to the... because it's edgy, it's dangerous. It's going to make a lot of people very angry. Like — you know, just imagine — okay, if you look at the full landscape of human civilization, just visualize the number of people that are breaking up right now.
Starting point is 01:24:09 Just the amount of really passionate... even if you just look at teenagers, the amount of deep heartbreak that's happening. And if you're going to have an Alexa that has more of a personal connection with the human, you're going to have humans that have existential crises. There are a lot of people that suffer from loneliness and depression, and you're now taking on the full responsibility of being a companion to the rollercoaster of the human condition.
Starting point is 01:24:39 As a company — you can imagine the PR and marketing people — they're gonna freak out. They don't have the guts. It's gonna have to come from somebody new — from a new Apple, from those kinds of folks, like a small startup. And it might. Like, they're coming. There are already virtual therapists; there's that Replika app. I haven't tried it, but Replika is like a virtual companion. It's coming, and if big companies don't do it, someone else will.
Starting point is 01:24:56 Yeah, I think the next trillion-dollar company will be that kind of personalization. If you think about all the AI we'll have around us, all the smartphones and so on, there's very minimal personalization. You don't think that's just because they weren't able to do it? Really?
Starting point is 01:25:24 I don't think they have the guts. I mean, it might be true, but I have to wonder. I mean, Google is clearly gonna do something with the... I mean, they don't have the guts. Are you challenging them? Partially, but not really, because I know they're not gonna do it. I mean, they don't have to; it's bad for business in the short term. I'm gonna be honest: maybe it's not such a bad thing if they don't just roll this out quickly, because I do think there are huge issues. Not just issues with the responsibility of unforeseen effects on people. But what's the business model?
Starting point is 01:26:06 And if you're using the business model that you've used in other domains, then you're gonna have to collect data from people — which you will anyway, to personalize the thing — and you're gonna be somehow monetizing the data, or you're gonna be doing some ad model. It just seems like now we're suddenly getting into the realm of severe consumer protection issues, and I'm really worried about that. I see massive potential for this technology to be used in
Starting point is 01:26:36 a way that's not for the public good — that's in an individual user's interest, maybe, but not in society's interest. Yeah, see, I think that kind of personalization should redefine how we treat data. I think you should own all the data your phone knows about you, and be able to delete it with a single click and walk away.
Starting point is 01:27:06 And that data cannot be monetized or used or shared anywhere without your permission. I think that's the only way people will trust you enough to give you that data. But then how are companies going to... I mean, a lot of these applications rely on massive troves of data to train the AI system. Right. So you have to opt in constantly. And opt in not with some legal "I agree" button, but obviously, showing exactly — in the way I opt in to tell you a secret.
Starting point is 01:27:42 Like, we understand that: I have to choose — how well do I know you? And then I say, don't tell this to anyone. And then I have to judge how leaky that is — like, how good you are at keeping secrets. In that same way, it's very transparent which data you're allowed to use for which purposes.
Starting point is 01:28:09 That's what people are saying is the solution. And I think that works to some extent — having transparency, having people consent. I think it breaks down at the point at which — and we've seen this happen on social media too — people are willingly giving up their data because they're getting functionality from it, and then the harm that that causes is maybe to someone else, not to them personally. So I don't think people are giving their data.
Starting point is 01:28:33 They're not being asked. Like, if you were like, tell me a secret about yourself and I'll give you $100 — I'd tell you a secret. No, not for a hundred dollars. First of all, you wouldn't. You wouldn't trust why I gave you a hundred dollars. It's a bad example. But, like, I would ask for your specific fashion interests in order to give you recommendations for shopping, and I'd be very clear about that. And you could disable that, you could delete that. But then you can have a deep,
Starting point is 01:29:13 meaningful, rich connection with the system about what you think you look fat in, what you look great in, the full history of all the things you've worn, whether you regret the Justin Bieber stuff or enjoy the Justin Bieber stuff — you share all of that information that's mostly private even to you, not even your loved ones. A system should have that, because if you trust the system to keep control of that data — data that you own and can walk away with — that system can tell you a damn good thing to wear. It could. And the harm that I'm concerned about is not that the system is going to suggest a dress for me based on my preferences.
Starting point is 01:29:55 So I went to this conference once where I was talking to the people who do the analytics at the big ad companies. And literally a woman there was like, I can ask you three totally unrelated questions and tell you what menstrual product you use. And so what they do is they aggregate the data, and they map out different personalities and different people and demographics.
Starting point is 01:30:18 And then they have a lot of power and control to market to people. So, like, I might not be sharing my data with any of these systems, because I'm like, I'm on Twitter, I know that this is bad — but other people might be sharing data that can be used against me. I think it's way more complex than just: I share a piece of personal information and it gets used against me. It happens at a more systemic level, and then it's always vulnerable populations that are targeted by this —
Starting point is 01:30:52 low-income people being targeted for scammy loans, or, I don't know... like, someone could get targeted — not me, but someone who doesn't have kids yet and is my age could get targeted for freezing their eggs. And there are all these ways that you can manipulate people where it's not really clear that that
Starting point is 01:31:12 came from that person's data. It came from all of us, all of us opting into this. But there are a bunch of sneaky decisions along the way that could be avoided if there's transparency. So one of the ways that goes wrong is if you share that data with too many ad networks. Don't run your own ad network, don't share with anybody — okay, and that's something that you can regulate. The data belongs to just you, and for all the ways you allow the company to use it, the default is nowhere at all; you are consciously, constantly saying exactly how to use it. And it also has to do with the recommender system itself from the company, which is —
Starting point is 01:32:11 freezing your eggs: if that idea doesn't make you happy, then the system shouldn't recommend it, and it should be very good at learning. So not the kinds of things that the category of people it thinks you belong to would want, but what makes you, specifically, happy — what is helping you grow. But you're assuming that people's preferences, and what makes them happy, are static. Whereas we were talking before about how a company like Apple can tell people what they want,
Starting point is 01:32:42 and they will start to want it. That's the thing that I'm more concerned about. Right. Yeah, that is a huge problem. It's not just listening to people, but manipulating them into wanting something. And we have a long history of using technology for that purpose — like the persuasive design in casinos to get people to gamble more. Or, like... it's just,
Starting point is 01:33:10 the other thing that I'm worried about is, as we have more social technology, suddenly you have this on a new level. Like, if you look at the influencer marketing that happens online now. What's influencer marketing? So, on Instagram, there will be some person who has a bunch of followers. Yeah. And then a brand will hire them to promote some product. And it's above board; they disclose, like, this is an ad that I'm promoting.
Starting point is 01:33:34 But they have so many young followers who deeply admire and trust them. I mean, this must work for you too. Don't you have ads on the podcast? People trust you. Magic Spoon cereal. Low carb. Yes. If you say that, I guarantee you some people will buy it just because — even though they know that you're being paid — they trust you. Yeah. It's different with podcasts — well, my particular situation, but it's true for a lot of podcasts.
Starting point is 01:34:12 Sure. And that's why it's fine when it's still human influencers. Right.
Starting point is 01:34:28 Now, if you're a bot, you're not gonna discriminate. You're not gonna be like, oh, well, is this product good for people? You think there will be bots, essentially, with millions of followers? There already are. There are virtual influencers in South Korea who promote products. And that's just the tip of the iceberg, because that's still very primitive.
Starting point is 01:34:52 Now, with the new image generation and the language models... And so we're starting to do some research around kids and young adults, because a lot of the research on what's okay to advertise to kids and what is too manipulative has to do with television ads back in the day, where a kid who's 12 understands: oh, that's an advertisement. I can distinguish that from entertainment. I know it's trying to sell me something. Now it's getting really, really murky with influencers. And then, if you have a bot that a kid has developed a relationship with, is it okay to market products through that or not?
Starting point is 01:35:31 Like, you're getting into all these consumer protection issues, because you're developing a trusted relationship with a social entity — and now it's personalized, it's scalable, it's automated. And some of the research is showing that kids are already very confused about the incentives of the company versus what the robot is doing. Meaning they don't deeply understand the incentives of the system. Well, yeah. So, kids who are old enough to understand "this is a television advertisement, it's trying to advertise to me" —
Starting point is 01:36:14 I might still decide I want the product, but they understand what's going on, so there's some transparency there at that age. So, Daniella DiPaola and Anastasia Ostrowski — I advised on this project — asked kids who had interacted with social robots whether they would like a policy that allows robots to market to people through casual conversation, or whether they would prefer that it has to be transparent that it's an ad coming from a company. And the majority said they preferred the casual conversation. And when asked why, there was a lot of confusion. They were like, well, the robot knows me better than the company does.
Starting point is 01:36:56 So the robot's only going to market things that I like. They're not connecting the fact that the robot is an agent of the company; they're viewing it as something separate. And I think that even happens subconsciously with grown-ups when it comes to robots and artificial agents. And it will. Like this Blake guy at Google — he went on and on, but his main concern was that Google owned the sentient agent and that it was being mistreated. His concern was not that the agent was going to mistreat people. So I think we're going to see a lot of this. Yeah, but shitty companies will do that.
Starting point is 01:37:31 I think, ultimately, that confusion should be alleviated by having the robot actually know you better and not be under any control from the company. But what's the business model for that? If you use the robot to buy... First of all, the robot should probably cost money. Should what? Cost money. Like the Windows operating system does. I see it more like an operating system. This thing is your window — no pun intended — into the world. So it's helping you as, like, a personal assistant, right?
Starting point is 01:38:09 And so that should cost money. You know, whatever it is — 10 bucks, 20 bucks — that's the thing that makes your life significantly better. This idea that everything should be free is wrong. It should actually help educate you; it should talk shit about all the other companies that do stuff for free. But also, yeah, if you purchase stuff based on its recommendation, it gets money.
Starting point is 01:38:34 So it's kind of ad-driven, but it's not ads. It's not controlled — no external entities can control it to try to manipulate you into wanting a thing. That would be amazing. It's actually trying to discover what you want. So it's not allowed to have any influence — no promoted ads, no anything. It's finding, I don't know, the thing that would actually make you happy; that's the only thing it cares about. I think companies like this can win out. Yes, I think eventually, once people understand
Starting point is 01:39:22 the value of the robot... even just — I think robots would be valuable to people even if they're not marketing something or helping with preferences or anything. Just the simple, same thing as a pet — like a dog that has no function other than being a member of your family. I think robots could really be that, and people would pay for that.
Starting point is 01:39:43 I don't think the market realizes that yet. And so my concern is that companies are not going to go in that direction, at least not yet, of making this contained thing that you bought. It seems almost old-fashioned, right? To have a disconnected object that you buy, that you're not paying a subscription for, that's not controlled by one of the big corporations. But that's the old-fashioned thing that people yearn for, because — and I think this is very popular
Starting point is 01:40:15 now — people understand the negative effects of social media, the negative effects of their data being used in all these kinds of ways. I think we're just waking up to the realization: we tried it, but we're like a baby deer finding its legs in this new world of social media, of ad-driven companies, and realizing, okay, this has to be done somehow differently. Like, one of the most popular notions, at least in the United States, is that social media is evil
Starting point is 01:40:44 and is doing bad by us. It's not like it's totally tricked us into believing that it's good for us; I think everybody knows it's bad for us. So there's a hunger for other ideas. All right, it's time for us to start that company. Let's do it.
Starting point is 01:41:00 Let's go. Hopefully no one listens to this and steals the idea. See, that's the other thing: I'm a big person on execution. Execution is what matters. I mean, ideas are kind of cheap. Social robotics is a good example — there have been so many amazing companies that went out of business. I mean, to me it's obvious that there will be a robotics company that puts a social robot in billions of homes. Yeah. And it'll be a companion. Okay, there you go. You can steal that idea. Do it. It's very tough, though.
Starting point is 01:41:48 What about Elon Musk's humanoid? Is he going to execute on that? There might be a lot to say. So, for people who are not aware, there's Optimus, the Tesla robot. I guess the stated reason for that robot is to be a humanoid robot in the factory that's able to automate some of the tasks that humans are currently doing. And the reason you want to do a humanoid robot — it's the second reason you mentioned — is because the factory is built for certain tasks that are designed for humans, so it's hard to automate with any other form factor than a humanoid.
Starting point is 01:42:20 And then the other reason is because so much effort has been put into this giant data-engine machine of perception that's inside Tesla Autopilot. Seemingly, at least the machine, if not the data, is transferable to the factory setting — to any setting. And he said it would do anything that's boring to us. Yeah, yeah. The interesting thing about that is there's no interest and no discussion about the social aspect. Like, I talked to him on and off mic about it quite a bit, and there's no discussion about... To me, it's obvious that if a thing like that works at all — in fact, it has to work really well in a factory; if it works kind of shitty — it's much more useful in the home. That's true. Because I think being shitty at stuff is kind of what
Starting point is 01:43:29 makes relationships great. You want to be flawed and be able to communicate your flaws and be unpredictable in certain ways. Like, if you fell over every once in a while for no reason whatsoever, I think that's essential — it's charming. It's charming, but also concerning — like, are you okay? I mean, whenever somebody you love falls down the stairs, it's both hilarious and concerning. It's some dance between the two. And I think that's essential. You almost want to engineer that in, except you don't have to, because robotics in the physical space is really difficult. So I've learned to not discount the efforts that Elon makes. There are a few things that are really interesting
Starting point is 01:44:26 there. One, he's taking it extremely seriously. What I like is the humanoid form — the cost of building the robot. I talked to Jim Keller offline about this a lot, and currently humanoid robots cost a lot of money. The way they're thinking about it — now, they're not talking about all the social robotics stuff that you and I care about; they're thinking, how can we manufacture this thing cheaply and do it well — the kind of discussions they're having is really great engineering. It's the basic first-principles question of: why does this cost so much? What's the cheap way? Why can't we build it?
Starting point is 01:45:08 And there's not a good answer. Why can't we build this humanoid form for under $1,000? And I've sat and had these conversations. There's no reason. I think the reason they've been so expensive is because they weren't focused on mass manufacture. People were focused on getting a thing that's... I don't know exactly what the reasoning is, but it's the same way Waymo was like,
Starting point is 01:45:38 let's build a million-dollar car in the beginning — like, a multi-million-dollar car — and try to solve that problem. The way Elon, the way Jim Keller, the way some of those folks are thinking is: let's, at the same time, try to actually build a system that's cheap — not crappy, but cheap. Let's ask, from first principles: what is the minimum number of degrees of freedom we need? What are the joints? Where does the control sit? Where are the actuators? What's the way to power this in the lowest-cost way possible, but also in a way that actually works? How do we make the whole thing not out of components, where there's a supply chain and you have all these different parts that have to feed in, but do it all from scratch, and do the learning? I mean, it's
Starting point is 01:46:30 like... certain things become obvious: do the exact same pipeline as you do for autonomous driving. I mean, the infrastructure there is incredible — the computer vision, the manipulation task. The control problem changes, the perception problem changes, but the pipeline doesn't change. Do it. So, obviously, the optimism about how long it's going to take, I don't share. But it's a really interesting problem. And I don't want to say anything, because my first gut reaction is
Starting point is 01:47:05 to say: why the humanoid form? That doesn't make sense. Yeah, that's my second gut, too. But then there are a lot of people that are really excited about the humanoid form, and it's like, I don't want to get in the way. They might solve this thing. It's similar with Boston Dynamics. You can be a hater and go up to Marc Raibert and be like: how are you going to make money with these super expensive legged robots? What's your business plan? This doesn't make any sense. Why are you doing these legged robots? But at the same time, they're pushing forward the science, the art, of robotics in a way
Starting point is 01:47:46 that nobody else does. And with Elon, they're not just going to do that; they're going to drive down the cost to where we can potentially have humanoid bots in the home. So the part I agree with is that a lot of people find it fascinating, and it probably also attracts talent who want to work on humanoid robots. I think it's a fascinating scientific problem and engineering problem, and it can teach us more about the human body and locomotion and all of that. I think there's a lot to learn from it.
Starting point is 01:48:19 Where I get tripped up is why we need them for anything other than art and entertainment in the real world. I get that there are some areas where you can't just rebuild — like a spaceship. They've worked for so many years on these spaceships; you can't just re-engineer it. You have some things that are just built for human bodies — a submarine, a spaceship. But a factory? Maybe I'm naive, but it seems like we've already rebuilt factories to accommodate other types of robots. Why would we want to make a humanoid robot to go in there? I just get really tripped up on... I think that people want humanoids. I think people are fascinated by them. I think it's
Starting point is 01:49:07 a little overhyped. Well, most of our world is still built for humanoids. I know, but it shouldn't be. It should be built so that it's wheelchair accessible. Right. So the question is: do you build a world in the general form of wheelchair accessible — all robot form factors accessible — or do you build humanoid robots? I mean, it doesn't have to be all of it, and it also doesn't have to be either-or. I just feel like we're thinking so little about the system in general, and how to create infrastructure
Starting point is 01:49:43 that works for everyone — all kinds of people, all kinds of robots. I mean, it's more of an investment, but that would pay off way more in the future than just trying to cram expensive, or maybe slightly less expensive, humanoid technology into a human space. Unfortunately, one company can't do that. We have to work together.
Starting point is 01:50:03 It's like autonomous driving: it could be solved much more easily if you do V2I, if you change the infrastructure of the cities and so on — but that requires a lot of people, a lot of them politicians, and a lot of them somewhat, if not very, corrupt, and all those kinds of things. And the talent thing you mentioned is really, really important. I've gotten a chance to meet a lot of folks at SpaceX and Tesla — other companies too, but there specifically, the openness makes it easier to meet everybody. I think a lot of amazing things in this world happen when you get amazing people together. And if you can sell an idea like us becoming a multi-planetary species... you can ask, why the hell are we going to Mars?
Starting point is 01:50:53 Like, why colonize Mars? If you think from basic first principles, it doesn't make any sense. It doesn't make any sense to go to the Moon. The only thing that makes sense to go to space for is satellites. But there is something about the vision of the future, the optimism-laden feeling that permeates this vision of us becoming multi-planetary. It's thinking not just about the next 10 years; it's thinking about human civilization reaching out into the stars. It makes people dream. It's really exciting. And they're going to come up with some cool shit that might not have anything to do with the original plan. Because Elon doesn't seem to care about
Starting point is 01:51:43 social robotics, which is constantly surprising to me. I've talked to him. Humans are the things you avoid and don't hurt, right? Like, the number one job of a robot is to not hurt a human, to avoid them. The collaborative aspect, the human-robot interaction, I think, is not something he thinks about deeply. But my sense is, if somebody like that takes
Starting point is 01:52:12 on the problem of humanoid robotics, we're going to get a social robot out of it. People like — not necessarily Elon, but people like Elon — if they take this on seriously... I can just imagine: with a humanoid robot, you can't help but create a social robot. If you do different form factors, if you do industrial robotics, you're likely to not end up walking head-on into the social robot, human-robot interaction problem.
Starting point is 01:52:50 If you create, for whatever the hell reason you want to, a humanoid robot, you're going to have to — not reinvent, but introduce — a lot of fascinating new ideas into the problem of human-robot interaction, which I'm excited about. So, if I was a business person, I would say: this is way too risky. This doesn't make sense. But when people are really convinced, and there are a lot of amazing people working on it, it's like, all right, let's see what happens here. This is really interesting. Just like with Atlas and Boston Dynamics. I mean, they — and I apologize if I'm ignorant on this, but I think they, really more than anyone else, maybe with Aibo, like, Sony — pushed forward humanoid robotics, like a leap, with the... Oh yeah, with Atlas, absolutely. And, like, without them,
Starting point is 01:53:44 like, why the hell did they do it? Why? Well, I think, for them, it is a research platform. I don't think they ever — this is speculation — I don't think they ever intended Atlas to be a commercially successful robot. I think they were just like, can we do this? Let's try. Yeah. I wonder if maybe the answer they landed on is... because they eventually went to Spot, the earlier versions of Spot. So a quadruped, a four-legged robot. But maybe they reached for... I think they tried, and are still trying, for Atlas to pick up boxes, to move boxes, to be... It makes sense: if they were exactly the same cost, it makes sense to have a humanoid robot in the
Starting point is 01:54:41 warehouse. Currently. Currently, I think it's short-sighted, but yes. Currently, yes, it would sell. It's short-sighted, but it's also not pragmatic to think the other way — to think that you're going to be able to change warehouses. You're going to have to... you're going to...
Starting point is 01:54:58 If you're Amazon, you can totally change your warehouse. Yes. Oh, yes. But even if you're Amazon, it's very costly to change warehouses. It is. It's a big investment.
Starting point is 01:55:14 But shouldn't you make that investment anyway? So here's the thing. If you build a humanoid robot that works in the warehouse — see, I don't know why Tesla is not talking about it this way, as far as I know — that humanoid robot is going to have all kinds of other applications outside that setting. To me, it's obvious. I think it's a really hard problem to solve, but whoever solves the humanoid robot problem is going to have to solve the social robotics problem.
Starting point is 01:55:44 Oh, for sure. I mean, they're already, with Spot, meeting all the social robotics problems. Like, for Spot to be effective at scale... I'm not sure if Spot is currently effective at scale. It's getting better and better. But actually, the thing they did is an interesting decision — perhaps Tesla will end up doing the same thing —
Starting point is 01:56:03 which is: Spot is supposed to be a platform for intelligence. So Spot doesn't have any high-level intelligence, like high-level perception skills. It's supposed to be controlled remotely, and it's a platform that you can attach things to. Attach things to it? Yeah. And somebody else is supposed to do the attaching.
Starting point is 01:56:26 It's a platform that you can take on uneven ground, and it's able to maintain balance, go into dangerous situations. It's a platform. On top of that, you can add a camera that does surveillance, that you can remotely monitor, that you can record from. You can remote control it, but it's not going to do object manipulation.
Starting point is 01:56:46 Well — basic object manipulation, but not autonomous object manipulation. It's remotely controlled. The intelligence on top of it, which is what would be required for automation — somebody else is supposed to provide that. Perhaps Optimus will do the same thing ultimately, but that doesn't quite make sense, because the goal of Optimus is automation. But then, you never know. He's like, why go to Mars? Why?
Starting point is 01:57:18 I mean, that's true. And, reluctantly... I'm very excited about space travel. Why? Can you explain why? Why am I excited about it? I think what got me excited was I saw a panel with some people who study other planets, and it became really clear how little we know
Starting point is 01:57:40 about ourselves and about how nature works, and just how much there is to learn from exploring other parts of the universe. So, on a rational level, that's how I convince myself that that's why I'm excited. In reality, it's just fucking exciting. I mean, just the idea that we can do this difficult thing, and that humans come together to build things that can explore space — there's just something inherently thrilling about that. And I'm reluctant about it because I feel like there are so many other challenges and problems that I think are more important to solve, but I also think we should be doing all of it at once.
Starting point is 01:58:25 And so, to that extent, I'm all for research on humanoid robots, development of humanoid robots. I think that there's a lot to explore and learn, and it doesn't necessarily take away from other areas of science — at least it shouldn't. Unfortunately, I think a lot of the attention goes towards that, and it does take resources and attention away from other areas of robotics that we should be focused on. But I don't think we shouldn't do it. So you think it might be a little bit of a distraction?
Starting point is 01:58:59 Oh, forget the particular Elon application, but if you care about social robotics, is the humanoid form a distraction? It is a distraction, and it's the one that I find particularly boring. It's interesting from a research perspective, but from the perspective of what types of robots we can create to put in our world, why would we just create a humanoid robot? So even just robotic manipulation, arms, is that not useful either? Oh, arms can be useful, but why not have three arms? Why does it have to look like a person?
Starting point is 01:59:35 Well, I actually personally just think that a robot that washes the dishes is harder than a robot that can be a companion. Like, being useful in the home is actually really tough. But does your companion have to have two arms and look like you? No, I'm making the case for zero arms. Oh, okay, zero arms. Yeah. Okay, freaky.
Starting point is 01:59:59 That didn't come out the way I meant it. Because it almost sounds like I don't want the robot to be able to defend itself. Like, that's immediately where the mind goes. No, I just think that the social component doesn't require arms or legs and so on, right? As we've talked about.
Starting point is 02:00:20 And I think that's probably where a lot of the meaningful impact is going to be happening. Yeah, I think we could get so creative with the design. Like, why not have a robot on roller skates? Or whatever, why does it have to look like us? Yeah. Still, it is a compelling and interesting form from a research perspective, like you said. Yeah. You co-authored a paper, you were talking about that,
Starting point is 02:00:46 for We Robot 2022: LuLaRobot, Consumer Protection in the Face of Automated Social Marketing. I think you were talking about some of the ideas in that. Yes. Oh, you got it from Twitter. I was like, that's not published yet. Yeah, this is how I do my research.
Starting point is 02:01:03 You just go through people's Twitter feeds. Yeah, thank you. It's not stalking if it's public. So, you looked at me like, how did you know? No, it's just, I was worried that, like, some early draft, I mean... Yeah, there's a PDF. There is? There is. There's a PDF.
Starting point is 02:01:24 Like, now? Yeah, maybe as of a few days ago. Yeah. Okay. You look violated. Like, how did you get that PDF? It's just a draft.
Starting point is 02:01:36 It's online. Nobody should read it yet, until we've written the final paper. Well, it's really good. I enjoyed it. Oh, thank you. By the time this comes out, I'm sure it'll be out. No, when's We Robot? So basically, We Robot is the workshop where you have an hour where people give you constructive feedback on the
Starting point is 02:01:53 paper, and then you write the good version. Right. I take it back. There's no PDF. I don't know what I imagined. But there is a table in there, in a virtual, imagined PDF, that I liked, that I wanted to mention, which is a breakdown of strategies used across various marketing platforms. It's basically looking at traditional media, person-to-person interaction, targeted ads, influencers, and social robots, which is the kind of idea that you've been speaking to.
Starting point is 02:02:25 And it's just a nice breakdown of that: social robots have personalized recommendations, social persuasion, automated scalable data collection, and embodiment. So person-to-person interaction is really nice, but it doesn't have the automated and data collection aspects, whereas social robots have those two elements. Yeah, we're talking about the potential for social robots to just combine all of these different marketing methods to be this really potent cocktail. And that table, which was Danielle's idea and a really fantastic one, we put in at the last second. So yeah, I really like that. I'm glad you like it. In a PDF that doesn't exist, that nobody can find even if they look.
Starting point is 02:03:07 Yeah. So when we say social robots, what does that mean? Does that include virtual ones or no? I think a lot of this applies to virtual ones. Although the embodiment thing, which I personally find very fascinating, is definitely a factor that research shows can enhance people's engagement with a device.
Starting point is 02:03:24 But can embodiment be a virtual thing also, meaning it has a body in the virtual world? Because what makes a body? A body is a thing that can disappear, that has a permanence. I mean, there are certain characteristics that you kind of associate with a physical object. So I think what I'm referring to, and I think this gets messy because now we have all these new virtual worlds and AR and stuff,
Starting point is 02:03:59 and I think it gets messy, but there's research showing that something on a screen, on a traditional screen, and something that is moving in your physical space have a very different effect on how your brain perceives it, even. So, I mean, I have a sense that we can do that in a virtual world. Probably. Like, when I've used VR, I jump around like an idiot because I think something's going to hit me. And even a video game on a 2D screen, if it's compelling enough, the thing that's immersive about it is that I kind of put myself into that world. The objects you're interacting with, the Call of Duty
Starting point is 02:04:45 things you're shooting, I mean, your imagination fills the gaps and it becomes real. It pulls your mind in when it's well done. So it really depends what's shown on the 2D screen. Yeah, I think there are a ton of different factors, and there are different types of embodiment. You can have embodiment in a virtual world. You can have an agent that's simply text-based, which has no embodiment. So I think there's a whole spectrum of factors that can influence how much you engage with something.
Starting point is 02:05:10 Yeah, I wonder. I've always wondered if you could have an entity living in a computer. Okay, this is going to be dark, and it's going to make it sound like I keep thinking about this kind of stuff. But this is almost like Black Mirror: an entity that's convinced, or is able to convince you, that it's being tortured inside the computer and needs your help to get out.
Starting point is 02:05:39 Something like this. To me, suffering is one of the things that makes you empathize. Like, we're not good at that, as you've discussed with the physical form, holding a robot upside down; you have a really good example discussing that. I think suffering is a really good catalyst for empathy. And I just feel like we can project embodiment onto a virtual thing if it's capable of certain things like suffering. So I always wondered. I think that's true. And I think that's what happened with the LaMDA thing. Not that, I mean, none of the transcript was about suffering, but it was about having the capacity for suffering
Starting point is 02:06:25 and human emotion that convinced the engineer that this thing was sentient. And it's basically the plot of Ex Machina. True. Have you ever made a robot, like, scream in pain? Have I? No. But have you seen that? Someone actually made a Roomba scream whenever it hit a wall.
Starting point is 02:06:41 Yeah, I programmed that myself as well. I was inspired by that. Yeah, it's cool. You still have it? Oh, sorry, hit a wall? I didn't do that. Whenever it bumped into something, it would scream, and then...
Starting point is 02:06:56 Yeah, no, so the way I programmed the Roomba was, it's when I kick it. So contact between me and the robot is what makes it scream. Really? Okay. And you were inspired by that? Yeah, I guess. I remember the video. I saw the video a long, long time ago, and maybe I heard somebody mention it, and it's just
Starting point is 02:07:17 the easiest thing to program. Uh-huh. So I did that. I haven't run those Roombas for over a year now, but my experience with it was that they quickly become, like, you remember them,
Starting point is 02:07:35 you miss them, like they're real living beings. So the capacity for suffering is a really powerful thing. Yeah, and even then, I mean, it was kind of hilarious. It was just a random recording of screaming from the internet. But it's still weird. There's a thing you have to get right based on the interaction, like the latency. There is a realistic aspect of how it should scream relative to when it gets hurt; it should correspond correctly. Like, if you kick it really hard, does it just scream louder?
Starting point is 02:08:15 No, it just screams at the appropriate time, not, I see, not one second later, right? There's an exact timing. Like when you run your foot into the side of a table or something, there's a timing there. There are dynamics you have to get right for the actual screaming, because the Roomba in particular, the sensors don't... it doesn't know about pain. Say what? Sorry to say, the Roomba doesn't understand pain. So you have to correctly map the sensors and the timing to the production of the sound.
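A minimal sketch of the kind of sensor-to-sound mapping described here, assuming a hypothetical bumper and speaker API; everything named below is an illustration, not Lex's actual code (a real Roomba is typically commanded over iRobot's Open Interface serial protocol):

import random
import time

# Random scream recordings from the internet, as described above.
SCREAMS = ["scream1.wav", "scream2.wav", "scream3.wav"]

def run_scream_loop(bumper, speaker, poll_hz=100):
    # Poll the bump sensor fast: the scream has to land at the moment
    # of contact, not a second later, or the illusion of pain breaks.
    was_touching = False
    while True:
        touching = bumper.is_pressed()            # hypothetical sensor API
        if touching and not was_touching:         # fire once per new contact
            speaker.play(random.choice(SCREAMS))  # assumed non-blocking playback
        was_touching = touching
        time.sleep(1.0 / poll_hz)

The edge detection, reacting only to new contact, keeps the robot from screaming continuously while something is pressed against it, and the fast poll rate is what keeps the latency low enough to feel like a reaction.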
Starting point is 02:09:02 But when you get that somewhat right, it's a really weird feeling. And you actually feel like a bad person. Uh-huh. Yeah. But it makes you think, because that, in all the ways that we talked about, could be used to manipulate you. Oh, for sure.
Starting point is 02:09:20 In a good and a bad way. The good way is that you can form a connection with a thing; the bad way is that a connection can be formed with you in order to sell you products that you don't want. Yeah. Or to manipulate you politically, or many other nefarious things. You tweeted, we're about to be living in the movie Her. Obviously, I research your tweets like they're Shakespeare.
Starting point is 02:09:40 We're about to be living in the movie Her, except instead of being about love, it's going to be about the chatbot being subtly racist and the question of whether it's ethical for companies to charge for software upgrades. Yeah. So can we break that down? What do you mean by that? Obviously, some of it is humor.
Starting point is 02:10:17 Yes, well, kind of. I'm like, oh, it's so weird to be in the space where I'm so worried about the technology and also so excited about it at the same time. But really, I had gotten a little bit jaded, and then with GPT-3 and then the LaMDA transcript, I was re-energized. But I've also been thinking a lot about, you know, what are the ethical issues that are going to come up? And I think one of the things that companies are really going to have to figure out is, obviously, algorithmic bias, which is a huge and known problem at this point.
Starting point is 02:10:59 Like, even the new image generation tools, like DALL-E, where they've clearly put in a lot of effort to make sure that if you search for people, it gives you a diverse set of people, etc. Even with that one, people have already found numerous ways that it just kind of regurgitates biases of things that it finds on the internet, like how if you search for success, it gives you a bunch of images of men, and if you search for sadness, it gives you a bunch of images of women. So I think that this is the really tricky one with these voice agents that companies are going to have to figure out. And that's why I said subtly racist and not overtly, because I think they're going to be able to solve the overt thing. And then with the subtle stuff, it's going to be really difficult.
Starting point is 02:11:36 And then I think the other thing is going to be, yeah, people are going to become so emotionally attached to artificial agents with this complexity of language, with a potential embodiment factor. I mean, there's already a paper at We Robot this year, written by roboticists, about how to deal with the fact that robots die, looking at it as an ethical issue because it impacts people. And I think there are going to be way more issues than just that. Like, I think the tweet was about software upgrades, right? How much is it okay to charge for something like that if someone is deeply emotionally invested in this relationship?
Starting point is 02:12:27 Oh, the ethics of that is interesting, but there are also the practical funding mechanisms that you mentioned with Aibo, the dog; in theory, there's a subscription. Yeah, the new Aibo. So the old Aibo from the 90s, people got really attached to them, and in Japan they're still having funerals in Buddhist temples for the Aibos that can't be repaired, because people really viewed them as part of their families. So we're talking about robot dogs. Robot dogs, the Aibo, yeah, the original famous robot dog that Sony made. It came out in the 90s, got discontinued, and people are still having funerals for them in Japan. Now they have a new one. The new one is great. I have one at home. How much is it? I think it's 3,000 bucks.
Starting point is 02:13:06 And then after a few years, you have to start paying, I think it's like 300 a year, for a subscription service, for cloud services. And the cloud services, I mean, the dog is more complex than the original, and it has a lot of cool features, and it can remember stuff and experiences, and it can learn, and a lot of that is outsourced to the cloud, so you have to pay to keep that running, which makes sense.
Starting point is 02:13:32 People should pay, and people who aren't using it shouldn't have to pay. But it does raise the interesting question: could you set that price to reflect a consumer's willingness to pay for the emotional connection? So if you know that people are really, really attached to these things, just like they would be to a real dog, could you just start charging more because there's more demand? Yeah, I mean, but that's true for anything that people love, right?
Starting point is 02:14:09 It is, and it's also true for real dogs. There are all these new medical services nowadays where people will shell out thousands and thousands of dollars to keep their pets alive. And is that taking advantage of people, or is that just giving them what they want? That's the question. Well, back to marriage: what about all the money that it costs to get married, and then all the money that it costs to get a divorce? That feels like a scam.
Starting point is 02:14:40 I think society is full of scams. Oh, it's such a scam. The whole wedding industrial complex has created all these quote-unquote traditions that people buy into that aren't even traditions. They're just fabricated by marketing. It's awful. Let me ask you about racist robots. Is it up to the company that creates them? So we talk about removing bias and so on, and a lot of people agree that it's an important field. But the question is, for social robotics, is it up to the company to remove the bias of society?
Starting point is 02:15:17 Well, who else can? Oh, to remove the bias of society? I guess, because there are a lot of people that are subtly racist in modern society, why shouldn't our robots also be subtly racist? I mean, why do we put so much responsibility on the robots? Because I'm imagining, like, a Hitler Roomba. I mean, that would be funny, but I guess I'm asking a serious question. You're right. You're right.
Starting point is 02:15:54 Yes, exactly. I'm allowed to make that joke. And I've been nonstop reading about World War II and Hitler. I'm glad we exist in a world where we can just make those jokes. That helps deal with it. Anyway, it is a serious question, in that it's such a difficult problem to solve. Now, of course, with bias and so on, there's low-hanging fruit, which I think is what a lot of people are focused on. But then it becomes subtle stuff over time. It's very difficult to know. Now, you could also completely remove the personality, completely remove the personalization.
Starting point is 02:16:36 You could remove the language aspect, which is what I had been arguing for, because I was like, the language is a disappointing aspect of social robots anyway. But now we're reintroducing that, because it's no longer disappointing. So I do think, well, let's just start with the premise, which I think is very true, which is that racism is not a neutral thing, and it is a thing that we don't want in our society. It does not conform to my values. So if we agree that racism is bad, I do think that it has to be the company, because, I mean, it might not be possible, and companies might have to put out products where they're taking risks.
Starting point is 02:17:20 And they might get slammed by consumers and have to adjust. I don't know how this is going to work in the market. I have opinions about how it should work, but it is on the company. And the danger with robots is that they can entrench this stuff. It's not like your racist uncle, who you can have a conversation with and... And put things into context, maybe. Yeah, or who might change over time with more experience. A robot really just regurgitates things,
Starting point is 02:17:56 entrenches them, and could influence other people. And I mean, I think that's terrible, like I said. Well, I think there's a difficult challenge here, because even the premise you started with, essentially that racism is bad: I think we live in a society today where the definition of racism is different between different people.
Starting point is 02:18:19 Some people say that it's not enough not to be racist. Some people say you have to be anti-racist. So you'd have to have a robot that constantly calls you out on your implicit racism. I would love that. I would love that robot. But maybe, well, I don't know if you'd love it, because maybe it'll see racism in things that aren't racist, and then you're arguing with the robot, calling your robot racist. I'm not exactly sure about that.
Starting point is 02:18:52 I mean, it's a tricky thing. I guess I'm saying that the line is not obvious, especially in this heated discussion where we have a lot of identity politics about what is harmful to different groups and so on. Yeah. It feels like the broader question here is: should a social robotics company be solving, or be part of solving, the issues of society? Well, okay. I think it's the same question as: should I, as an individual, be responsible
Starting point is 02:19:26 for knowing everything in advance and saying all the right things? And the answer to that is, yes, I am responsible, but I'm not gonna get it perfect. And then the question is, how do we deal with that? And so as a person, how I aspire to deal with that is when I do inevitably make a mistake because I have blind spots and people get angry, I don't take that personally
Starting point is 02:19:57 and I listen to what's behind the anger. And it can even happen that maybe I'll tweet something that's well-intentioned, and one group of people starts yelling at me, and then I change it the way that they said, and then another group of people starts yelling at me. Which just happened to me, actually. In my talks, I talk about robots that are used in autism therapy. And whether to say "a child with autism" or "an autistic child" is super controversial. A lot of autistic people prefer to be referred to as
Starting point is 02:20:32 autistic people, and a lot of parents of autistic children prefer "child with autism." And they disagree. So I've gotten yelled at from both sides. And I think I'm still responsible. I'm responsible even if I can't get it right. I don't know if that makes sense. It's a responsibility thing. I can be as well-intentioned as I want, and I'm still going to make mistakes, and that is part of the existing power structures. And that's something that I accept.
Starting point is 02:21:01 And you accept being attacked from both sides, and grow from it and learn from it. Yeah. But the danger is that after being attacked, assuming you don't get canceled, aka completely removed from your ability to tweet, you might become jaded and not want to talk about autism anymore. I don't, and I didn't. I mean, it happened to me. But what I did was I listened to those sides and I tried to get information.
Starting point is 02:21:31 And then I decided that I was going to say "autistic children," and now I'm moving forward with that. Like, I don't know. For now, right? For now, yeah, until I get updated information. I'm never going to get anything perfect, but I'm making choices and I'm moving forward, rather than being a coward and just retreating from it. I think. But here's the problem. You're a very smart person, an individual researcher, thinker, and
Starting point is 02:22:00 intellectual. So that's the right thing for you to do. The hard thing is, as a company, imagine you had a PR team. I mean, if you hired PR people, obviously they would see that, and they'd be like, well, maybe don't bring up autism. Maybe don't bring up these topics. You getting attacked is bad for your brand. They'll use the brand word. They'll say, you know, if we look at the different demographics that are inspired by your work, I think it's insensitive to them.
Starting point is 02:22:35 Let's not mention this anymore. There's a kind of pressure where all of a sudden you make suboptimal decisions, you take a kind of poll. Again, it's looking at the past versus the future, all those kinds of things. It becomes difficult, in the same way that it's difficult for social media companies to figure out who to censor, who to recommend. I think this is ultimately a question about leadership, honestly, the way that I see leadership. Because right now, the thing that bothers me about institutions, and a lot of people who run current institutions, is that their main focus is protecting the institution
Starting point is 02:23:21 or protecting themselves personally. That is bad leadership, because it means you cannot have integrity. You cannot lead with integrity. And it makes sense, because obviously if you're the type of leader who immediately blows up the institution you're leading, then it doesn't exist anymore. And maybe that's why we don't have any good leaders anymore: the ones who had integrity didn't put, you know, the survival of the institution first.
Starting point is 02:23:43 But I feel like, just to be a good leader, you have to be responsible and understand that with great power comes great responsibility. You have to be humble, and you have to listen, and you have to learn. You can't get defensive, and you cannot put your own protection before other things. Yeah, take risks where you might lose your job, where you might lose your well-being, in the process of standing up for the principles, for the things you think are the right things to do. Yeah, based on
Starting point is 02:24:28 listening to people and learning from what they feel. And the same goes for the institution, yeah. Yeah, but ultimately I actually believe that the kinds of companies and countries that succeed are the ones that have leaders like that. You should run for president. No, thank you.
Starting point is 02:24:48 Yeah. That's maybe the problem: the people who have good ideas about leadership are like, yeah, no. This is why I'm not running a company. It's been, I think, three years since the Jeffrey Epstein controversy at MIT, at the MIT Media Lab. Joi Ito, the head of the Media Lab, resigned. And I think at that time, you wrote an opinion article about it. So just looking back, a few years have passed; what have you learned about human nature
Starting point is 02:25:16 from the fact that somebody like Jeffrey Epstein found his way inside MIT? That's a really good question. What have I learned about human nature? Well, there's how this problem came about, and then there's what the reaction was to this problem and to it becoming public. And in the reaction, the thing I learned about human nature was that sometimes cowards are worse than assholes. Wow. I think that's a really powerful statement. Because with the assholes, at least you know what you're dealing with. They have integrity in a way; they're just living out their asshole values. The cowards are the ones that you have to watch out for. And this comes
Starting point is 02:26:29 back to people protecting themselves over doing the right thing. They'll throw others under the bus. Is there some sense that not enough people took responsibility? For sure. And I mean, I don't want to sugarcoat at all what Joi Ito did. I think it's gross that he took money from Jeffrey Epstein. I believe him that he didn't know about the bad stuff, but I've been in those circles with those public intellectual dudes that he was hanging out with, and any woman in those circles saw ten zillion red flags. The whole environment was so misogynist. And so, personally, because Joi was a great boss
Starting point is 02:27:18 and a great friend, I was really disappointed that he ignored that in favor of raising money. And I think that it was right for him to resign in the face of that. But one of the things that he did that many others didn't do was he came forward about it and he took responsibility. And all of the people who didn't, I think, it's just interesting. The other thing I learned about human nature, okay, I'm going to go on a tangent, but I'll come back, I promise.
Starting point is 02:28:00 So I once saw this tweet, or it was a Twitter thread, from someone who worked at a homeless shelter. And he said that when he started working there, he noticed that people would often come in and use the bathroom and they would just trash the entire bathroom: rip things out of the walls, toilet paper on the ground. And he asked someone who had been there longer, why do they do this? Why do the homeless people come in and trash the bathroom? And he was told, it's because it's the only thing in their lives that they have control over. And I feel like sometimes, when it comes to the response,
Starting point is 02:28:38 just the mobbing response that happens in the wake of some harm that was caused: if you can't target the person who actually caused the harm, who was Epstein, you will go as many circles out as you can until you find the person that you have power over and control over, and then you will trash that. And it makes sense that people do this. Again, it's a human nature thing. Of course you're going to focus all your energy there, because you feel helpless and enraged, and it's unfair, and you have no other power.
Starting point is 02:29:18 You're going to focus all of your energy on someone who's so far removed from the problem that it's not even an efficient solution. And the problem is, often the first person you find is the one that has sufficient integrity to take responsibility. Yeah. It's why my husband always says, he's a liberal, but he always says, when liberals form a firing squad, they stand in a circle. Because you know that your friends are going to listen to you, so you criticize them. You're not going to be able to convince someone across the aisle. But seeing that situation, what I had hoped is that in that situation,
Starting point is 02:29:59 or any situation of the sort, the people that are farther out in the circles stand up. Yeah. And take some responsibility for the broader picture of human nature versus the specific situation, but also defend the people involved, as flawed. Not in a way that pretends nothing happened; people fucked up. Like you said, there were a lot of red flags that people just ignored for the sake of money in this particular case.
Starting point is 02:30:37 But also be transparent and public about it, and spread the responsibility across a large number of people, such that you learn a lesson from it institutionally. Yeah, it was a systems problem. It wasn't one individual's problem. And I feel like currently, because Joi resigned because of it, or was essentially fired, pressured out, because of it, MIT can pretend, oh, we didn't know anything, we weren't a part of it. Yeah. That's bad leadership again, because when you are at the top of an institution with that much power, and you were complicit in what happened, which they were...
Starting point is 02:31:39 Like, come on, there's no way that they didn't know that this was happening. So to not stand up and take responsibility, I think, is bad leadership. Do you understand why Epstein was able, at MIT, to make a lot of friends with a lot of powerful people? Does it make sense to you? Why was he able to get in these rooms and befriend these people? Befriend people that I don't know personally, but that I think a lot of, indirectly; I know them as being good people, smart people. Why would they let Jeffrey Epstein into their office to have a discussion with them? What do you understand about human nature from that?
Starting point is 02:31:39 Do you understand why Epstein was able to, um, about my tea, he was able to make a lot of friends with a lot of powerful people, because it makes sense to you. Why was he able to get in these rooms and befriend these people? Befriend people that I don't know personally, but I think a lot of them indirectly, I know as being good people, smart people, why would they let Jeffy up into their office, have a discussion with them? Would you understand about human nature from that? Well, so I never met Epstein or, I mean, I've met some of the people who interacted with him but I
Starting point is 02:32:26 was never, like, I never saw him in action. I don't know how charismatic he was or what that was. But I do think that sometimes the simple answer is the more likely one, and from my understanding, what he would do is, he was kind of a grifter, a social grifter. Like, you know those people who, you must get this because you're famous, you must get people coming to you and being like, oh, I know your friend so-and-so, in order to get cred with you? I think he just convinced some people who were trusted in a network that he was a great
Starting point is 02:33:09 guy and that, you know, whatever. I think at that point, he had, what, a prior conviction, but it was a one-off thing. It wasn't clear that there was this other thing that was... And most people probably don't check. Yeah, and most people don't check. Like, you're at an event, you meet this guy. I don't know, maybe people do check when they're that powerful and wealthy,
Starting point is 02:33:30 or maybe they don't. I have no idea. No, they just don't. I mean, they're not, like... All right, well, does anyone check anything about me? Because I've walked in to meet some of the richest, most powerful people in the world, and nobody asked questions like, who the fuck
Starting point is 02:33:49 is this guy? Yeah, nobody asked those questions. It's interesting. I would think there would be more security or something. There really isn't. I think a lot of it, my hope is, in my case, has to do with people being able to sense that this is a good person. But if that's the case, then surely human beings can use charisma to infiltrate,
Starting point is 02:34:14 just saying the right things, and once you have people vouching for you within that type of network... Yeah, once you have someone powerful vouching for you, whom someone else trusts, then, you know, you're in. So how do you avoid something like that, if you're MIT, if you're Harvard, if you're these institutions? Well, first of all, you have to do your homework before you take money from someone. I think that's required. But, you know, I think Joi did do his homework. I think he did.
Starting point is 02:34:52 And I think at the time that he took money, there was the one conviction, and not the later thing. And I think that the story at that time was that he didn't know she was underage, and whatever, it was a mistake. And Joi always believed in redemption for people, and that people can change, and that they can genuinely regret and learn and move on; he was a big believer in that. So I could totally see him being like, well, I'm not going to exclude him because of this one thing, and because other people are vouching for him. And that, just to be clear, we're now talking about the set of people who, I think,
Starting point is 02:35:30 Joi belonged to, who did not go to the island and have sex with underage girls. Because that's a whole other set of people, who were powerful and were part of that network and who knew and participated. And so I distinguish between people who got taken in, who didn't know that that was happening, and people who knew. I wonder what the different circles look like. So, people that went to the island and didn't do anything, didn't see anything, didn't know about anything, versus the people that did something.
Starting point is 02:36:03 And then there are people who heard rumors, maybe. And what do you do with rumors? Weren't there people that heard rumors about Bill Cosby for the longest time? When that all came out, all these people came out of the woodwork; everybody kind of knew. I mean, it's like, all right,
Starting point is 02:36:26 so what are you supposed to do with rumors? I think the other way to put it is red flags, as you were saying. Yeah, and I can tell you that in those circles, there were red flags without me even hearing any rumors about anything, ever. I was already like, hmm, there are not a lot of women here, which is a bad sign.
Starting point is 02:36:47 Aren't there a lot of places where there aren't a lot of women, and that doesn't necessarily mean it's a bad sign? There are, if it's a pipeline problem, where it's, I don't know, a technology law clinic that only gets male lawyers because there aren't a lot of women applicants in the pool. But there was some aspect of this situation where there should have been more women there. Oh, yeah. Yeah.
Starting point is 02:37:13 Actually, I'd love to ask you about this, because you have strong opinions about Richard Stallman. Do you still have those strong opinions? Look, all I need to say is that he met my friend, who's a law professor. Yeah. She shook his hand, and he licked her arm from wrist to elbow, and it certainly wasn't appropriate at that time. What about if you're an incredibly weird person? Okay, that's a good question, because obviously there's a lot of neurodivergence at MIT and everywhere. And obviously we need to accept that people are different, that people don't understand social conventions the same way.
Starting point is 02:38:03 But one of the things that I've learned about neurodivergence is that women are often expected or taught to mask their neurodivergence and kind of fit in. And men are accommodated and excused. And I don't think that being neurodivergent gives you a license to be an asshole. Like you can be a weird person and you can still learn that it's not okay to lick someone's arm.
Starting point is 02:38:34 Yeah, it's a balance. Like, women should be allowed to be a little weirder and men should be less weird. Because, I think you're one of the people who tweeted that at me, because I wanted to talk to Richard Stallman on the podcast, and I didn't have the context. I wanted to talk to him because of, you know, free software. He's very weird in interesting, good ways in the world of computer science. He's also weird in that, you know, when he gives a talk, he would be picking at his feet
Starting point is 02:39:10 and eating the skin off his feet, right? He's known for these extreme, how else do you put it? I don't know how to put it. But then there was something that happened with him, in conversations on a thread related to Epstein, which I was torn about, because I felt it was similar to Joi. I felt he was maligned, like people were looking for somebody to get angry at. So he was inappropriate, but
Starting point is 02:39:47 what I didn't like more was the cowardice. Like, I'd set aside his situation, and we could discuss it, but the cowardice on MIT's part, and this is me saying it, about the way they treated that whole situation. Well, they're always cowards about how they treat anything. They just try to make the problem go away.
Starting point is 02:40:04 Yeah, so it was about a lack of commitment to the conversation. I think he should have left the mailing list. He shouldn't have been part of the mailing list. Well, that's probably true also. But what always bothers me in these mailing list situations, or Twitter situations: if you say something that's hurtful to people or makes people angry, and then people start yelling at you, maybe they shouldn't be yelling, maybe they're yelling because, again, you're the only point of power they have, maybe it's okay that they're yelling. Whatever it is, it's your response to that that matters.
Starting point is 02:40:55 And I just have a lot of respect for people who can say: oh, people are angry. There's a reason they're angry. Let me find out what that reason is and learn more about it. It doesn't mean that I'm wrong. It doesn't mean that I am bad. It doesn't mean that I am ill-intentioned. But why are they angry? I want to understand. And then once you understand, you can respond again with integrity and say, actually, I stand by what I said, and here's why. Or you can say,
Starting point is 02:41:26 actually, I listened, and here are some things I learned. That's the kind of response I want to see from people. And people like Stallman do not respond that way. They just go into battle. Right, or it's obvious you didn't listen. Yeah, he just didn't listen. Honestly, that's to me as bad as the people who just apologize because they're trying to make the problem go away. Of course.
Starting point is 02:41:47 Right. So both of those are bad. A good apology has to include understanding what you did wrong, and, in part, standing up for the things you think you did right, if there are those things. Yeah. But you have to
Starting point is 02:42:05 acknowledge, you have to give that hard hit to the ego that says, I did something wrong. Yeah, and Stallman definitely did not seem like somebody who was capable of that kind of thing, or hasn't given evidence of that kind of thing. But also, even just from your tweet, I had to do a lot of thinking. Different people from different walks of life see red flags in different things. And so things that I, as a man, find nonthreatening and hilarious, that doesn't mean they aren't deeply hurtful to others. And I don't mean that in a social justice warrior way, but in a real way: people really have different experiences. So I have to really put things into context.
Starting point is 02:43:03 I have to kind of listen to what people are saying, put aside the emotion with which they're saying it, and try to keep the facts of their experience, and learn from it. Because it's not just about the individual experience either. It's not like, oh, you know, my friend didn't have a sense of humor about being licked.
Starting point is 02:43:24 It's that she's been metaphorically licked, you know, 57 times that week, because she's an attractive law professor and she doesn't get taken seriously. And so men walk through the world, and it's impossible for them to even understand what it's like to have a different experience of the world. And that's why it's so important to listen to people and believe people, and believe that they're angry for a reason. Maybe you don't like their tone. Maybe you don't like that they're angry at you. Maybe you get defensive about that. Maybe you think that they should, you know, explain it to you.
Starting point is 02:43:58 But believe that they're angry for a reason, and try to understand it. Yeah, there's a deep truth there, and opportunities for you to become a better person. Let me ask a quick question. Haven't you been doing that for two hours? Three hours now. Okay. Let me ask you about Ghislaine Maxwell.
Starting point is 02:44:21 She's been saying that she's an innocent victim. Is she an innocent victim, or is she evil and equally responsible, like Jeffrey Epstein? Now I'm asking far away from any MIT things, more just your sense of the whole situation. I haven't been following it, so I don't know the facts of the situation, and what is now known to be her role in that. If I were her, clearly I'm not, but if I were her, I wouldn't be going around saying I'm an innocent victim. I would say, maybe, I don't know what she's saying, again, I don't know. She was controlled by Jeffrey. Is she saying this as part of a legal case, or is she saying this as, like, a PR thing?
Starting point is 02:45:23 Well, PR, but it's not just her; her whole family believes this. There's a whole effort that says, how should I put it? I believe they believe it. So in that sense, it's not PR.
Starting point is 02:45:49 I don't know. I believe the family. Basically, the family is saying that she's a really good human being. Well, I think everyone is a good human being. I know it's a controversial opinion, but I think everyone is a good human being. There are no evil people. There are people who do bad things
Starting point is 02:46:11 and who behave in ways that harm others, and I think we should always hold people accountable for that. But holding someone accountable doesn't mean saying that they're evil. Yeah, and actually those people usually think they're doing good. Yeah, I mean, aside from, I don't know, maybe sociopaths who are specifically trying to harm people,
Starting point is 02:46:37 but I think most people are trying to do their best. And if they're not doing their best, it's because there's some impediment or something in their past. So I genuinely don't believe in good and evil people, but I do believe in harmful and not harmful actions. And so, I don't know, I don't care, yeah, maybe she's a good person. But if she contributed to harm, then she needs to be held accountable for that. That's my position. I don't know what the facts of the matter are.
Starting point is 02:47:16 It seems like she was pretty close to the situation, so it doesn't seem very believable that she was a victim, but I don't know. I wish I'd met Epstein, because something tells me he would seem like just a regular person, a charismatic person, like anybody else. And that's a very dark reality, that we don't know what each of us is hiding in the closet. That's a really tough thing to deal with, because you can put your trust in some people, and they can completely betray that trust and, in the process, destroy you.
Starting point is 02:47:48 Yeah. And there are a lot of people that interacted with Epstein that now, I mean, if they're not destroyed by it, the ground on which they stand ethically has crumbled, at least in part. And I'm sure you and I are interacting with people, without knowing it, who are bad people. Or, as I was telling my four-year-old, people who have done bad things.
Starting point is 02:48:11 People who have done bad things. He's always talking about bad guys, and I'm trying to move him towards: they're just people who make bad choices. Yeah, that's really powerful, actually. That's really important to remember, because that means you have compassion towards all human beings. Do you have hope for the future of MIT, the future of the Media Lab, in this context? So Dava Newman is now at the helm.
Starting point is 02:48:40 I'm going to talk to her. I talked to her previously; I'll talk to her again. She's great. Love her. Yeah, she's great. I don't know if she knew the whole situation when she started, because the situation went beyond just the Epstein scandal. A bunch of other stuff happened at the same time. Some of it's not public, like what I was personally going through at that time. So the
Starting point is 02:49:20 Epstein thing happened, I think, was it August or September 2019? It was somewhere around late summer. So, I'm a research scientist at MIT; you are too, right? And I've always had various supervisors over the years, and they've just basically let me do what I want, which has been great. But I had a supervisor at the time, and he called me into his office for a regular check-in. In June of 2019, I reported to MIT that my supervisor had grabbed me, pulled me into a hug,
Starting point is 02:49:56 wrapped his arms around my waist, and started massaging my hip and trying to kiss me, kiss my face, kiss me near the mouth. And he said, literally, the words: don't worry, I'll take care of your career. And that experience was really interesting, because I was just very indignant. I was like, he can't do that to me. Doesn't he know who I am? This is the MeToo era. And I naively thought that when I reported it, it would get taken care of.
Starting point is 02:50:31 And then I had to go through the whole reporting process at MIT. And I learned a lot about how institutions really handle those things internally, particularly situations where I couldn't provide evidence that it happened. I had no reason to lie about it, but I had no evidence. And so I was going through that, and that was another experience for me where there are so many people in the institution who really believe in protecting the institution at all costs, and there are only a few people who care about doing the right thing.
Starting point is 02:51:09 And one of them resigned. Now there are even fewer of them left. So what did you learn from that? I mean, what's the source, if you have hope, for this institution that I think you love? At least in part. I love the idea of MIT. I love the research body. I love a lot of the faculty. I love the students. I love the energy. I love it all. I think the administration suffers from the same problems as any leadership of an institution that is large, which is
Starting point is 02:51:43 that they've become risk-averse, like you mentioned. They care about PR. The only ways to get their attention, or change their minds about anything, are to threaten the reputation of the institute or to have a lot of money. That's the only way to have power at the institute. Yeah. I don't think they have a lot of integrity or believe in ideas, or even have a lot of connection to the research body. Because it's so weird:
Starting point is 02:52:03 you have this amazing research body of people pushing the boundaries of things, who aren't afraid, there's the hacker culture, and then you have the administration, and they're really like, protect the institution at all costs. Yeah, there's a disconnect, right? Complete.
Starting point is 02:52:28 I wonder if it was always there, or if it just kind of slowly grows over time, a disconnect between the administration and the faculty. I think it grew over time, is what I've heard. I mean, I've been there for 11 years now. I don't know if it's gotten worse during my time, but I've heard from people who have been there longer that it didn't used to be like this. MIT didn't used to have a general counsel's office.
Starting point is 02:52:52 They didn't used to have all this corporate stuff. And then they had to create it as they got bigger, in the era where such things are, I guess, deemed necessary. So, yeah. I believe in the power of individuals to overthrow the thing. So just a really good president of MIT, or certain people in the administration, can reform the whole thing, because the culture is still
Starting point is 02:53:48 there. I think everybody remembers that MIT is about the students and the faculty. You would hope so, because, I don't know, I've had a lot of conversations that have been shocking with senior administration. They think the students are children. They call them kids. It's like, these are the smartest people. They're way smarter than you. Yeah. And you're so dismissive. But those individuals aside, I'm saying the aura of the place still values the students and the faculty. I'm being awfully poetic about it, but what I mean is, the administration is the froth at the top of the waves, the surface. They can be removed, and new life can be brought in that would keep to the spirit
Starting point is 02:54:47 of the place. Who decides on who to bring in? Who hires? It's bottom-up. Oh, I see. But I do think ultimately, especially in the era of social media and so on, faculty and students have more and more power. More and more of a voice, I suppose. I hope so. I really do. I don't see MIT going away anytime soon. And I also don't think it's a terrible place at all. Yes, it's an amazing place. But there are different trajectories it can take. Yeah. And that has to do with a lot of things, including, even if we just talk about robotics, it could be the capital of the world in robotics. But currently, if you want to be doing the best AI work in the world, you're going to go to Google or Facebook or Tesla or Apple and so on. You're not
Starting point is 02:55:41 going to be at MIT. I think that basically has to do with not allowing the brilliance of the researchers to flourish. Yeah, people say it's about money, but I don't think it's about that at all. Sometimes you have more freedom and can work on more interesting things in companies. That's really where they lose people. Yeah, and the freedom in all ways. Which is why it's heartbreaking. People like Richard Stallman, that's such an interesting line, because Richard Stallman was a gigantic weirdo who crossed lines you shouldn't cross, right? But you don't want to draw too many lines. This is the tricky thing. There are different types of lines, in my opinion.
Starting point is 02:56:19 But yes, in your opinion. You have strong lines you hold to. But then if the administration listens to every line, there's also power in drawing a line, and it becomes like a little drug. You have to find the right balance. Licking somebody's arm is never appropriate. I think the biggest aspect there is not owning it, not learning and growing from it, on the part of Stallman or people like that, back when it happened. Understanding, being empathetic, seeing the fact that this was
Starting point is 02:57:07 totally inappropriate, not just that particular act, but everything that led up to it too. No, I think there are different kinds of lines. Stallman crossed lines that essentially excluded a bunch of people and created an environment where there are brilliant minds that we never got the benefit of, because he made things feel gross or even unsafe for people. There are lines that you can cross where you're challenging an institution. Like, I don't think he was intentionally trying to cross a line, or maybe he didn't care. There are lines that you can cross intentionally to
Starting point is 02:57:35 move something forward or to do the right thing. Like when MIT was like, you can't put an all-gender restroom in the Media Lab, because of some permit thing or whatever, and Joi did it anyway. That's a line you can cross to make things actually better for people, and the line you're crossing is some arbitrary, stupid rule made by people who don't want to take the risk. Yeah, for sure. You know what I mean? No, ultimately, I think the thing you said is right:
Starting point is 02:58:15 cross lines in a way that doesn't alienate others. So, for example, for a while I started wearing a suit often at MIT, which sounds counterintuitive, but people always looked at me weird for that. MIT created this culture, specifically among the people I was working with, where nobody wore a suit. Yeah, we don't trust the suit. Don't trust the suit. So I was like, fuck you, I'm wearing a suit. But that's not really hurting anybody, right? Exactly. It's challenging people's perceptions.
Starting point is 02:59:08 It's doing something that you want to do. Yeah. But it's not hurting people. And that particular thing, yeah, it was hurting people. That's a good line: hurting, ultimately, the people that you want to flourish. Yeah. You tweeted a picture of pumpkin spice Greek yogurt and asked: grounds for divorce, yes or no? So let me ask you, what's the key to a successful relationship? Oh my god, a good couples therapist. What went wrong with the pumpkin spice Greek yogurt? What exactly is wrong? Is it the pumpkin? Is it the Greek? I don't understand. I stared at that tweet for a while. I grew up in Europe, so I don't understand the pumpkin-spice-in-everything craze that they do every autumn here. Like, I understand that it might be good in some foods,
Starting point is 02:59:36 but they just put it in everything, and it doesn't belong in Greek yogurt. I mean, I was just being humorous. I ate one of the yogurts, and it actually tasted pretty good. Yeah, absolutely. So I think part of the success of a good marriage is giving each other a hard time humorously for things like that. Is there a broader lesson? Because you guys seem to have a really great marriage, from the outside.
Starting point is 03:00:17 I mean, every marriage looks good from the outside. I think, yeah, that's not true. But relationships are hard. Relationships with anyone are hard, especially because people evolve and change, and you have to make sure there's space for both people to evolve and change together. And one of the things that I really liked about our marriage vows was, I remember before we got married, Greg at some point got kind of nervous, and he was like, it's such a big commitment to commit to something for life.
Starting point is 03:00:17 And I was like, we're not committing to this for life. And he was like, we're not? And I'm like, no, we're committing to being part of a team and doing what's best for the team. If what's best for the team is to break up, we'll break up. I don't believe in this idea that we have to do this for our whole lives. And that really resonated with him too. So yeah, that was basically our vows: that we're just going to be a team. You're a team, and you do what's right for the team?
Starting point is 03:00:48 Yeah. Yeah. That's very Michael Jordan of you. Did you guys get married in the desert, November Rain style, with Slash playing? Or, you don't have to answer that. I'm not good at these questions. Okay. You've brought up marriage like eight times.
Starting point is 03:01:08 So you're trying to hint at something on the podcast? Yeah, I have an announcement to make. No, I don't know. It just felt like a good metaphor in a bunch of cases, for the
Starting point is 03:01:27 marriage industrial complex, I remember that, and people complaining. Marriage is one of the things that always surprises me, because I want to get married. You do? Yeah, I do. And then I listen to friends of mine that complain. Not all of them; I really like guys that don't complain about their marriage. Complaining is such a cheap release valve. That's true of bitching about anything, honestly; it's just too easy. But bitch about the sports team or the weather if you want; about somebody that you're dedicating your life to? If you bitch about them, you're going to see them as a lesser being too.
Starting point is 03:02:13 You don't think so, but you're going to decrease the value they have. I personally believe that over time you're not going to appreciate the magic of that person. Anyway, I just notice this a lot: people are married and they'll whine about, you know, the wife, whatever, because it's part of the culture to comment in that way. And they'll do the same thing about the husband: he never does this, or he's a goof, he's incompetent at this or that. There's that kind of...
Starting point is 03:02:50 Yeah, there are those tropes, like, oh, you know, husbands never do X. And why? I think those do a disservice to everyone. It's just disrespectful to everyone involved. Yeah, but it happens. So yeah, I brought that up as an example of something that people actually love but complain about, because for some reason it's more fun to complain about stuff. Yeah. And so that's how it would be with Clippy or whatever: you complain about it, but you actually love it.
Starting point is 03:03:18 There you go. It's just a good metaphor. You know, what was I gonna ask you? Oh, you, uh, your hamster died. When I was like eight. You miss her? Beige. What's the closest relationship you've had with a pet?
Starting point is 03:03:41 Not that one? We've had a lot of pets. What pet have you loved the most in your life? I think my first pet was a goldfish named Bob, and he died immediately, and that was really sad. I think I was really attached to Bob and Nancy, my goldfish. We got new Bobs, and then Bob kept dying, and we got new Bobs. Nancy just kept living. So he was very replaceable. Yeah, I was young.
Starting point is 03:04:17 It was easy to replace him. Do you think there will be a time when a robot, like in the movie Her, could be something we fall in love with romantically? Oh, yeah. Oh, for sure. Yeah. At scale?
Starting point is 03:04:31 Like, with a lot of people? Romantically, I don't know if it's going to happen at scale. I think we talked about this a little bit last time on the podcast, too: we're just capable of so many different kinds of relationships. And actually, part of why I think marriage is so tough as a relationship is because we put so many expectations on it: your partner has to be your best friend, and you have to be sexually attracted to them, and they have to be a good co-parent and a good roommate. It's all the relationships at once that have to work.
Starting point is 03:05:06 But normally with other people, we have one type of relationship. We even have a different relationship to our dog than we do to our neighbor, than we do to, you know, a coworker. I think that some people are going to find romantic relationships with robots interesting. It might even be a widespread thing, but I don't think it's gonna replace human romantic relationships. I think it's just gonna be a separate type of thing. It's gonna be more narrow. More narrow, or even just something new that we haven't really experienced before. Maybe
Starting point is 03:05:44 having a crush on an artificial agent is a different type of fascination. I don't know. Would people see that as cheating? I think people would. Well, I mean, the things that people feel threatened by in relationships are manifold. Yeah. That's just an interesting one. Because maybe a little jealousy would be good for the relationship.
Starting point is 03:06:11 Maybe that's part of the couples therapy kind of thing, or whatever. I don't think jealousy... I mean, I think it's hard to avoid jealousy, but I think the objective is probably to avoid it. I mean, some people don't even get jealous when their partner sleeps with someone else; like, there's polyamory. Yes. Yeah, I think there's just such a diversity of different ways that we can structure relationships
Starting point is 03:06:35 or view them, and this is just going to be another one that we add. You dedicated your book to your dad. What did you learn about life from your dad? Oh man, my dad is... he's a great listener, and he is the best person I know at the type of cognitive empathy that's like perspective-taking. So not emotional, crying empathy, but trying to see someone else's point of view and trying to put yourself in their shoes.
Starting point is 03:07:14 And he really instilled that in me from an early age. And then he made me read a ton of science fiction, which probably led me down this path. He taught me how to be curious about the world and how to be open-minded. Yeah. Last question: what role does love play in the human condition?
Starting point is 03:07:33 This whole time we've been talking about love and robots, and you're fascinated by social robotics. It feels like all of that operates in a landscape of something that we can call love. Love? Yeah, I think there are a lot of different kinds of love. I feel like, you know how the Eskimos have all these different words for snow?
Starting point is 03:08:00 We need more words to describe the different types and kinds of love that we experience. But I think love is so important, and I also think it's not zero-sum. That's the really interesting thing about love. I had one kid, and I loved my first kid more than anything else in the world. And I was like, how can I have a second kid and then love that kid also? I'm never going to love it as much as the first. But I love them both equally. It's just like my heart expanded.
Starting point is 03:08:29 And so I think that people who are threatened by love towards artificial agents don't need to be threatened for that reason. Artificial agents, if done right, will just expand your capacity for love. I agree. Beautifully put. This is awesome.
Starting point is 03:08:52 I still didn't talk about half the things I wanted to talk about, but we're already way over three hours. So thank you so much. I really appreciate you talking today. You're awesome. You're an amazing human being, a great robot, a great writer now. It's an honor that you would talk with me. Thanks for doing it. Right back at you. Thank you.
Starting point is 03:09:10 Thanks for listening to this conversation with Kate Darling. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Maya Angelou: Courage is the most important of all the virtues, because without courage, you can't practice any other virtue consistently. Thank you for listening, and hope to see you next time.