Hidden Brain - Radio Replay: I, Robot

Episode Date: January 13, 2018

Do you ever catch yourself yelling at your Alexa? Or typing questions into Google that you wouldn't dare ask aloud? On this episode, our changing relationship with technology and what big data knows about our deepest, darkest secrets.

Transcript
Starting point is 00:00:00 Quick note before we get started, this episode includes a racial epithet and discussions about pornography. This is Hidden Brain, I'm Shankar Vedantam. Maps are good representations, not just of the world we live in, but of how we think about the world we live in. Over the centuries, our maps have emphasized the places we find important. They show the limits of our knowledge and the scope of our ambitions. 700 years ago, Europeans were completely unaware of the existence of North America. Fast forward to the 1970s. Scientists can tell you in detail what the surface of the moon is like.
Starting point is 00:00:47 Today we're charting out maps of a different sort. Maps of our minds. There isn't one cartographer designing these modern maps, we all are, and the maps are constantly changing. We start today's show with a personal question. Have you ever Googled something that you would never dream of saying out loud to another human being? When we have a question about something embarrassing or deeply personal, many of us today don't turn to a parent or
Starting point is 00:01:24 to a friend, but to our computers. Because there's just some things you just can't ask a real person in real life and you need to ask Google. Because it's completely anonymous and there are no judgments attached. Google knows everything. I agree to it.
Starting point is 00:01:41 Every time we type into a search box, we reveal something about ourselves. As millions of us look for answers to questions or things to buy or places to meet friends, our searches produce a map of our collective hopes, fears and desires. My guest today is Seth Stephens-Davidowitz. He used to be a data scientist at Google and he's the author of the book Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are. Seth, welcome to Hidden Brain. Oh, thanks so much for having me, Shankar. So Seth, we all know that Google handles billions of searches every day,
Starting point is 00:02:18 but one of the insights you've had is that the reason Google knows a lot about us is not just because of the volume of search terms, but because people turn to Google as they might turn to a friend or a confidant. That's exactly right. I think there's something very comforting about that little white box. People feel very comfortable telling it things that they may not tell anybody else, about their sexual interests, their health problems, their insecurities, and using this anonymous aggregate data,
Starting point is 00:02:47 we can learn a lot more about people than we've really ever known. And one of the ways we can learn a lot more about people is through these very strange correlations. You find, for example, that there's a relationship between the unemployment rate and the kinds of searches people make online. Yeah, I was looking at what searches correlate most
Starting point is 00:03:05 with the unemployment rate. And I was expecting something like new jobs or unemployment benefits. But during the time period I looked at, the single search that was most highly correlated with the unemployment rate was a pornography site. And you can imagine that if a lot of people are out of work, they have nothing to do during the day.
Starting point is 00:03:22 They may be more likely to look at porn sites. Another search that was high on the list was Solitaire. So again, when people are out of work, they're bored, they do leisure activities, and potentially this measure of how much leisure there is on the internet may help us know how many people are out of work on a given day. And of course, this sort of helps us reconsider what we think of as data. So when we think about the unemployment rate, as you say, our normal approach is to say, how many people are seeking jobs? Let's track down all the jobs. This is coming at the question entirely differently.
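As a rough sketch of the kind of correlation being described, here is what lining up a leisure-related search-volume series against the unemployment rate might look like. All of the numbers below are invented for illustration; a real analysis would pull monthly series from Google Trends and official unemployment figures.

```python
# Hypothetical monthly series: the unemployment rate (percent) and a
# search-volume index for a leisure-related query such as Solitaire.
import numpy as np

unemployment_rate = np.array([4.5, 5.0, 6.1, 7.8, 9.0, 9.6, 9.3, 8.5])
solitaire_searches = np.array([42, 45, 55, 68, 80, 84, 79, 73])

# Pearson correlation between the two series: a value near 1.0 means the
# search term rises and falls closely with joblessness.
corr = np.corrcoef(unemployment_rate, solitaire_searches)[0, 1]
print(f"correlation with unemployment: {corr:.2f}")
```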
Starting point is 00:03:53 Yeah, I think the traditional way to collect data was to send a survey out to people and have them answer questions, check boxes. There are lots of problems with this approach. Many people don't answer surveys, and many people lie to surveys. So the new era of data is kind of looking through all the clues that we leave. Many of them not as part of questions or as part of surveys, but just clues we leave as we go through our lives. One of the important differences between mining this kind of data and
Starting point is 00:04:24 the responses we get on surveys has to do with how people report their sexual orientation. I understand that the kind of queries that you see on Google might reveal something quite different than if you ask people if they're gay. That's right. If you ask people in surveys today in the United States about two and a half or three percent of men say that they're primarily attracted to men. And this number is far higher in certain states where tolerance to homosexuality is greater. So there are a lot more gay men according to surveys in California than in Mississippi. But if you look at search data for gay male pornography, it's a tiny bit higher in California, but not that much higher,
Starting point is 00:05:08 and overall about 5% of male pornography searches are for gay porn, so almost twice as high as the numbers you get in surveys. Your research has important implications for a topic that we've looked at a lot on Hidden Brain, the topic of implicit bias. People aren't always aware of the biases they hold, and so scientists have had to find clever ways to unearth these biases. You think that Google searches can reveal some forms of implicit bias? That's right. So one I look at is the questions that parents have about their children. If you ask many parents today, they would say that they treat their sons and daughters equally. That they're equally excited about their intellectual potential,
Starting point is 00:05:50 equally concerned about maybe their weight problems. But if you aggregate everybody's Google searches, you see large gender differences: when parents in the United States ask questions starting, is my son, they're much more likely to use words such as gifted or genius than they would in a search starting, is my daughter. When parents in the United States search, is my daughter, they're much more likely to complete it with, is my daughter overweight, or, is my daughter ugly. So parents are much more excited about the intellectual potential of their sons and much more concerned about the physical appearance of their daughters. Before I get to the next question, Seth, I just want to give a warning to our listeners. This next section is going to involve a discussion regarding the N-word.
Starting point is 00:06:37 Seth, you report that in some states after Barack Obama was first elected president, there were more Google searches for a certain racist term than searches for first black president. I think there is a disturbing element to some of this search data where in the United States today, many people, and maybe this is a good thing, don't feel comfortable sharing that they have racist thoughts or racist feelings, but on Google, they do make these searches at strikingly high frequency. I have to use careful language to describe this. The measure is the percent of Google searches that include the word, and these searches are predominantly
Starting point is 00:07:11 searches looking for jokes mocking African Americans. I should clarify, this is not searches for rap lyrics, which tend to use the version of the word ending in A. But if you look at the racist search volumes, I think if you had asked me, based on everything I had read about racism in the United States, I would have thought that racism in the United States was predominantly concentrated in the South, that really the big divide in the United States when it comes to racism is South versus North. But the Google data reveals that's not really the case, that racism is actually very, very high in many places in the North,
Starting point is 00:07:46 places like Western Pennsylvania or Eastern Ohio or industrial Michigan or rural Illinois or upstate New York. The real divide these days when it comes to racism is not North versus South, it's East versus West. There's much higher racism, East of the Mississippi than West of the Mississippi. So besides just saying, you know, we know that there are these patterns of racist searches in different parts of the country, you're actually saying you can do more than that. You can actually predict how different parts of the country might vote in a presidential
Starting point is 00:08:18 election based on the kind of Google searches you see in different parts of the country. Yeah, well, the first thing I found is that there was a large correlation between racist search volume and parts of the country where Obama did worse than other Democratic candidates had done. So, Barack Obama was the first major party general election nominee who was African-American, and you see a clear relationship that Obama lost large numbers of votes in parts of the country where there are high racist search volumes. And other researchers, such as Nate Silver at FiveThirtyEight and Nate Cohn at The New York Times, have found that there was
Starting point is 00:08:57 a large correlation between racist search volumes and support for Donald Trump and the Republican Party, that parts of the country that made racist searches in high numbers were much more likely to support Donald Trump. And this relationship was much stronger than really any other variable that they tested. I'm wondering how you try and understand that kind of information. It's hard not to listen to what you're saying and draw sort of what seems to be a superficial conclusion, which is that racist people vote for Donald Trump. I'm not sure, is that what you're saying? That's one of those things where it sounds so offensive to say it that I think everyone
Starting point is 00:09:33 tiptoes around the line. I will say that the data does show a strong correlation between racist searches and support for Donald Trump that is hard to explain with any other explanation. You know, it's, yeah, I mean, yeah, that kind of is what I'm saying. I'm not saying that everybody who supported Donald Trump is racist by any stretch of the imagination. There are plenty of people who support Donald Trump without this racist tendency, but a significant fraction of his supporters, I think, were motivated by racial animus. Seth Stephens-Davidowitz is a former Google data scientist
Starting point is 00:10:05 and the author of Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are. You spend a lot of time in the book talking about sex. It turns out to be an area where marketers and companies know that what we say about ourselves is nowhere close to the truth. Most people report not being interested in pornography, but the website PornHub reports that in 2015 alone,
Starting point is 00:10:30 viewers watched two and a half billion hours of porn, which is apparently longer than the entire amount of time that humans have been on Earth. What does this say about us, the fact that we either have very little insight into ourselves or we're actually lying through our teeth?
Starting point is 00:10:49 Yeah, I'd say we're probably lying through our teeth. I do talk a lot about sex in this book. One thing I like to say is that big data is so powerful it turned me into a sex expert because it wasn't a natural area of expertise for me. But I do talk a lot about sexuality and I think you do learn a lot about people that's very, very different from what they say, and kind of the weirdness at the heart of the human psyche that doesn't really reveal itself in everyday life or at lunch tables, but does reveal itself at 2am on PornHub. Pornography sites aren't the only ones gathering information about our sexual and romantic
Starting point is 00:11:26 preferences. We now have apps like Tinder and sites like OkCupid that gather tons of data about us. As a result, these apps and sites know a lot about our romantic preferences. But for a long time, we've had a human version of big data for romance. Grandma. Seth has some personal experience with this big data source. A couple of years ago, he was having Thanksgiving dinner with his family.
Starting point is 00:11:50 He was 33, didn't have a date with him, and his family was trying to figure out the qualities Seth needed in a romantic partner. My family was going back and forth. My sister was saying that I need a crazy girl, because I'm crazy. My brother was saying that my sister was crazy, that I need a normal girl to balance me out and my mom was screaming at my brother and sister that I'm not crazy and my dad was then screaming at my mom that of course
Starting point is 00:12:14 Seth is crazy. So it's kind of a classic Stephens-Davidowitz family Thanksgiving where everyone's just yelling at each other for being crazy and we're not really getting any progress in learning about what I need in my love life. And then my soft-spoken 88-year-old grandma started to speak and everyone went quiet and she explained to me that I need a nice girl, not too pretty, very smart, good with people, social so you will do things, sense of humor because you have a good sense of humor. And I describe why her advice was so much better than everybody else's. I think one of the reasons is that she's big data, right? So, grandmas and grandpas throughout history
Starting point is 00:12:52 have had access to more data points than anybody else and they've been able to correlate larger patterns than anybody else has because they've been around longer. And that's why they've been such an important source of wisdom historically. The problem, of course, as you also point out, is that it's very hard to disentangle your personal experiences from what actually happens in the world. And in your grandmother's case, she actually had a very specific piece of relationship advice about the kind of person you should want. And some of that might not actually be backed up by the empirical
Starting point is 00:13:25 evidence. Yeah, well, my grandma has told me on multiple occasions that it's important to have a common set of friends with a partner. So she lived in a small apartment in Queens, New York with my grandfather and every evening they'd go outside and gossip with their neighbors. And she thought that was a big part of why their relationship worked. But actually recently, computer scientists have analyzed data from Facebook, and they can actually look at when people are in relationships and when they're out of relationships, and
Starting point is 00:13:53 try to predict what factors of a relationship make it more likely to last. One of the things they tested was having a common group of friends. Some partners on Facebook share pretty much the same friend group and some people have totally isolated friend groups. And they found, contrary to my grandmother's advice, that having a separate social circle is actually a positive predictor of a relationship lasting. And so of course, the risk of trusting the individual
Starting point is 00:14:18 is that the individual's intuition about what worked for his or her life might not work for everyone else. That's right. I think we tend to get biased by our own situation. Data scientists have a phrase, weighting data. Some data points get extra weight in our models, and our intuition gives too much weight to our own experience, and we tend to assume that what worked for us
Starting point is 00:14:40 will work for others as well, and that's frequently not the case. Many companies know that we don't really understand ourselves. When we come back, we look at how companies are using big data to predict what we're going to do before we know it ourselves. We'll also ask: if sites like Google can use data to forecast whether you're going to get a serious illness, should they give you that information? Stay with us. This is Hidden Brain, I'm Shankar Vedantam. We're speaking today with former Google data scientist Seth Stephens-Davidowitz about the research in his book, Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are. Netflix used to ask users what kind of movies
Starting point is 00:15:32 they wanted to watch. Seth says eventually the company realized that asking this kind of question was a complete waste of time. Yeah, initially Netflix would ask people what they want to view in the future so they could queue up the movies that they said. And if you ask people what are you going to want to watch tomorrow or this weekend, people are very aspirational. They want to watch documentaries about World War II or avant-garde French films. But then when Saturday or Sunday comes around, they want to watch the same low-brow comedies that they've always watched.
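A toy sketch of the gap being described: a viewer's stated queue says one thing, their viewing history says another, and a recommender that keys on history rather than stated intent picks accordingly. The titles, genres, and the history itself are invented for illustration; this is not Netflix's actual algorithm.

```python
# Hypothetical stated queue vs. actual viewing history for one subscriber.
from collections import Counter

stated_queue = ["WWII documentary", "avant-garde French film"]
watch_history = ["lowbrow comedy 1", "lowbrow comedy 2",
                 "lowbrow comedy 3", "WWII documentary"]

genre_of = {
    "WWII documentary": "documentary",
    "avant-garde French film": "art house",
    "lowbrow comedy 1": "comedy",
    "lowbrow comedy 2": "comedy",
    "lowbrow comedy 3": "comedy",
}

# What the viewer says they want vs. what their behavior says they want.
stated_genres = Counter(genre_of[title] for title in stated_queue)
watched_genres = Counter(genre_of[title] for title in watch_history)

print("stated preference:", stated_genres.most_common(1)[0][0])
print("behavioral preference:", watched_genres.most_common(1)[0][0])  # comedy
```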
Starting point is 00:16:08 So as Netflix realized, they had to just ignore what people told them and use their algorithms to figure out what they actually want to watch. So one of the things that's intriguing about what you just said is, I don't think it's actually the case that people were lying to Netflix when they said they wanted to watch the avant-garde film. They actually genuinely probably aspired to do that. It might actually be that big data understands people better than they understand themselves. Yeah, probably even more common than lying to other people is lying to ourselves, particularly when we're trying to predict what we're going to do in two or three days.
Starting point is 00:16:41 We tend to assume that we're going to go to the gym more than we go to the gym, or eat better than we actually will eat, or watch more intellectual stuff than we actually will watch, so the algorithms can correct for this over-optimism that we all tend to share. When you look at a company like Facebook, which has access to these huge amounts of data about us and what we like and whom we like and our relationships, you have to wonder how the company is using this data in all kinds of different ways.
Starting point is 00:17:11 I remember Facebook got into some hot water a couple of years ago because they ran an experiment that seemed to be manipulating how people feel. Of course, there was a huge outcry about the experiment at the time. Since then, there hasn't been very much reported about what Facebook is doing, but I suspect that it might just be because Facebook is no longer telling us what it's doing, but it's still doing it anyway. Every major tech company now runs lots and lots of what are called A/B tests, which are little experiments where you put people into two different groups,
Starting point is 00:17:43 a treatment and a control group, and you show one group one version of your site, and the other group another version of the site, and you see which version gets the most clicks or the most views. This has really exploded in the tech industry.
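The A/B setup just described can be sketched in a few lines: users are bucketed into two versions of a page, and the click-through rates are compared, here with a simple two-proportion z-test to check whether the gap is bigger than chance. The bucketing rule and the traffic numbers are made up for illustration.

```python
from math import sqrt

def assign_version(user_id: int) -> str:
    """Bucket users deterministically into version A (control) or B (treatment)."""
    return "A" if user_id % 2 == 0 else "B"

print([assign_version(uid) for uid in range(6)])  # ['A', 'B', 'A', 'B', 'A', 'B']

# Hypothetical results after a week of traffic.
views = {"A": 10_000, "B": 10_000}
clicks = {"A": 410, "B": 470}

rate_a = clicks["A"] / views["A"]
rate_b = clicks["B"] / views["B"]

# Two-proportion z-test: is B's lift larger than random noise would explain?
pooled = (clicks["A"] + clicks["B"]) / (views["A"] + views["B"])
se = sqrt(pooled * (1 - pooled) * (1 / views["A"] + 1 / views["B"]))
z = (rate_b - rate_a) / se

print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  z = {z:.2f}")
```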
Starting point is 00:18:26 There are many, many instances where companies are now using big data against us. Banks and other financial institutions are using clues from big data to decide who shouldn't get a loan. I think it's an area of big concern. I talk about a study in the book where they looked at a peer-to-peer lending site and analyzed the text people used in their requests for loans, and you can figure out, just from what people say in their loan requests, how likely they are to pay back. And there are some strange correlations. For example, if you mention the word God, you're 2.2 times less likely to pay back, 2.2 times more likely to default. And this does get eerie. Are you really supposed to be penalized if you mention God in a loan application? That would seem to be really wrong, even evil, to penalize somebody for a religious preference. Basically, everything's correlated with everything, right? So just about anything anybody does is going to have some predictive power for other things they do. And the legal system is really not set up for a world in which companies potentially can
Starting point is 00:19:06 mine correlations over just about everything anybody does in their life. I was thinking about an ethical issue. I'm not sure if this is necessarily a legal issue, but you mentioned in the book that, you know, if someone is googling, I've been diagnosed with pancreatic cancer, what should I do? It's reasonable to assume that this person has been diagnosed with pancreatic cancer. But if you collect all the people who are googling what to do about that diagnosis with pancreatic cancer
Starting point is 00:19:31 and then work backwards to see what they've been searching for in the weeks and months prior to that diagnosis, you can discover some pretty amazing things. Yeah, this is a study where researchers used Microsoft Bing data. They looked at people who searched for diagnoses of pancreatic cancer and then similar
Starting point is 00:19:50 people who never made such a search. Then they looked at all the health symptom searches they had made in the lead-up to either a diagnosis or no diagnosis. They found that there were very, very clear patterns of symptoms that were far more likely to suggest a future diagnosis of pancreatic cancer. For example, they found that searching for indigestion and then abdominal pain was evidence of pancreatic cancer, while searching for just indigestion without abdominal pain meant a person was much less likely to have pancreatic cancer.
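A minimal sketch of the kind of sequence rule described here: an indigestion search followed later by an abdominal-pain search is flagged, while indigestion alone is not. The search logs, dates, and the one-rule classifier are simplified, hypothetical stand-ins for the real Bing analysis.

```python
from datetime import date

def flags_pancreatic_risk(history):
    """history: list of (date, query) pairs; returns True if an 'indigestion'
    search is followed on a later date by an 'abdominal pain' search."""
    indigestion_dates = [d for d, q in history if "indigestion" in q]
    pain_dates = [d for d, q in history if "abdominal pain" in q]
    return any(p > i for i in indigestion_dates for p in pain_dates)

user_a = [(date(2017, 3, 1), "indigestion remedies"),
          (date(2017, 4, 20), "abdominal pain that won't go away")]
user_b = [(date(2017, 3, 1), "indigestion remedies")]

print(flags_pancreatic_risk(user_a))  # True: indigestion followed later by abdominal pain
print(flags_pancreatic_risk(user_b))  # False: indigestion alone
```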
Starting point is 00:20:19 That's a really, really subtle pattern in symptoms, right? Like a time series of one symptom followed by another symptom is evidence of a potential disease. It really shows, I think, the power of this data, where you can really tease out very subtle patterns in symptoms and figure out which ones are potentially threatening and which ones are benign. So here's the ethical question. Once you establish that there is this correlation, that you sort of say, I have a universe of people
Starting point is 00:20:53 who clearly have pancreatic cancer, and I work backwards through their search history and I detect these patterns that no one had thought to look at before that say these particular kinds of search terms seem to be correlated with people who go on to have the diagnosis versus these search terms that do not go on to predict a diagnosis. So does a company like Microsoft now have an obligation to tell people who are googling for these combinations of search terms?
Starting point is 00:21:19 Look, you might actually need to get checked out. You might actually need to go see a doctor, because of course, if you can be diagnosed with pancreatic cancer four weeks earlier, you have a much better chance of survival than if you have to wait for a month. I lean in the direction of yes; some people would not lean that direction. It could be a little creepy if Google, right below the I'm Feeling Lucky button, said, you may have pancreatic cancer.
Starting point is 00:21:42 It's not exactly the most friendly thing to see on a website. But personally, if I had some sort of symptom pattern that suggested I may have a disease and there was a chance of curing it if I was told, I'd want to know that. It's just another example that really the ethical and legal framework that we've set up is not necessarily prepared for big data. Seth Stephens-Davidowitz is a former data scientist at Google and the author of the book Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are. Seth, thank you for joining me today on Hidden Brain.
Starting point is 00:22:21 Thanks so much for having me, Shankar. Have you ever talked to your computer, cursed it for making a mistake? PC Load Letter? What does that mean? Have you ever argued with the traffic directions you get from Google Maps or Waze? Starting route to Grover's Mill Road. Have you ever looked at a Roomba cleaning the floor on the other side of the room and told it, please come over to this side?
Starting point is 00:22:53 Turn left. Left! It just ran itself right off of the edge. Robots and artificial intelligence are playing an ever-larger role in all of our lives. Of course, this is not the role that science fiction once imagined. It doesn't feel pity or remorse or fear. Robots bent on our destruction remain the stuff of movies like Terminator, and robot sentience is still an idea that's far off in the future.
Starting point is 00:23:25 But there's a lot we're learning about smart machines, and there's a lot that smart machines are teaching us about how we connect with the world around us and with each other. My guest today has spent a lot of time thinking about how we interact with smart machines and how those interactions might change the way we relate to one another. Kate Darling is a research specialist at the MIT Media Lab. She joined us recently in front of a live audience at the Hotel Jerome in Aspen, Colorado, as part of the Aspen Ideas Festival. Also on stage was a robot.
Starting point is 00:24:02 A green robot dinosaur about the size of a small dog, known as a Pleo. It's going to be part of this conversation, but before we get to that, here's Kate. Kate, welcome to Hidden Brain. Thank you for having me. You found that there is an interesting point in the relationship between humans and machines, and that point comes when we give a machine a name. I understand that you have three of these Pleo dinosaurs at your home.
Starting point is 00:24:30 Can you tell me some of the names that you have given to your robots? Yes. So the very first one I bought, I named Yochai, after Yochai Benkler, who's a Harvard professor who's done some work in intellectual property and other areas that I've always admired. And the second one I adopted after I filmed a Canadian documentary where the show host had to name the robot
Starting point is 00:24:53 and he gave the robot the same name he had, which was Peter. So the second one has a boring name. And then the third one is named Mr. Spaghetti. I don't know if people outside of Boston are familiar with this, but the Boston Public Transportation System, they wanted to crowdsource a name for their mascot dog. And the internet decided that the dog should be named Mr. Spaghetti. And of course, they refused to do that and named the dog Hunter.
Starting point is 00:25:22 So Mr. Spaghetti became a big thing in Boston for a while. People were very outraged about this. And so I named my Pleo, my third one, Mr. Spaghetti. I understand that companies actually have found that if you sell a robot with the name of the robot on the box, it changes the way people will interact with that robot, compared to if you just said, this is a dinosaur. So, this is, yeah, I don't have any data on this, but yes, I have talked to companies who feel that it helps with adoption and trust of the technology. Even very, very simple robots, like boxes on wheels that deliver medicine in hospitals.
Starting point is 00:25:59 If you give them a little nameplate that says Betsy, their understanding is that people are a little bit more forgiving of the robot. So instead of this stupid machine doesn't work, they'll say, oh, Betsy made a mistake. And I'm wondering if you've spent time thinking about why this happens. At some level, if I came up to you at home
Starting point is 00:26:18 and I said, Kate, is Mr. Spaghetti alive? You would almost certainly tell me, no, Mr. Spaghetti is not alive. I assume you don't think Mr. Spaghetti is alive, right? No. Okay. So given that you know that Mr. Spaghetti is not alive, why do you think giving him a name changes your relationship to him? With robots in particular it's combined with just our general tendency to anthropomorphize these things, and we're also primed by science fiction and pop culture to give robots names and view them as entities
Starting point is 00:26:52 with personalities. And it's more than just the name, right? I mean, robots move around in a way that seems autonomous to us. We respond to that type of physical movement. Our brains will project intent onto it. So I think robots are the perfect mixture of something that we will very willingly treat with human qualities
Starting point is 00:27:14 or lifelike qualities. All right, so we have this wonderful little prop in front of us. It's a Pleo dinosaur. I want you to tell me a little bit about the Pleo dinosaur, how it works, and how you came to own three of them, Kate. What is the dinosaur?
Starting point is 00:27:28 What does it do? It's basically an expensive toy. I bought the first one, I think, in 2007. There we go, it's awake. They have a lot of motors and touch sensors, and they have an infrared camera and microphones. So they're pretty cool pieces of technology for a toy, and that's initially why I bought one because I was fascinated by everything that it can do. Like, if it starts walking around, it can walk to the edge of the table.
Starting point is 00:27:57 It can look down, measure the distance to the floor. It knows that there's a drop, and it'll get scared and walk backwards. And then they go through different life phases adolescent and fully grown and it'll have moods. So I think what we should do, we bought the robot at Hidden Brain a couple of weeks ago. We haven't had the chance to give it a name yet. And I thought we should actually reserve the honors for this evening, where we're talking to Kate,
Starting point is 00:28:28 and see if Kate wants to try and name this dinosaur. Oh, you know, since she cares about dinosaurs so much. I was looking up Kate's Twitter feed this morning. I understand that you're going to have a baby soon. Congratulations. Yes, I don't have a name for that either. OK. Just FYI, she sometimes refers to the baby as baby bot, so just for whatever that's worth.
Starting point is 00:28:49 And one retweet that you have on your Twitter feed cracked me up, it said, you don't really know how many people you don't like until you start trying to pick baby names. Yeah, that's a vote for my husband. So tell me, you apparently haven't yet picked your baby's name. So do you have any top choices? Is there a name, a spare name, that you might care to give the dinosaur? Well, the problem is we've had a girl's name picked out for years and now we're having a boy and we just can't, we don't even have any contenders.
Starting point is 00:29:21 No contenders. What would have been your favorite girl's name if you had had a girl? Well, so when I first started dating my now husband, he at some point said, if I ever had a daughter, I already know what I would name her. And I was like, oh, really? We're going to fight about this one. And he said, yeah, I would name her Samantha and Sam
Starting point is 00:29:43 for short, because Sam is kind of gender neutral. And I was like, oh, I really love that. So that one was picked out very easily. All right, since you're not having a girl, you're going to have a boy, would you mind if you considered naming the dinosaur Samantha? How would you feel about that? Oh, that would be awesome. We should name the dinosaur Samantha. All right, so henceforth, this dinosaur will be called Samantha, or Sam for short. Yay! Now, some time ago, Kate conducted a very interesting experiment with the Pleo dinosaurs, and to sort of show how this works, I have a second prop here, which is under the table. Uh-oh.
Starting point is 00:30:24 It's a hammer. A large hammer, which we borrowed from the hotel. Now, as you all know, the dinosaur is obviously not alive. It's just cloth and plastic and a battery and wires. It has a name, of course, Samantha. But it isn't alive in any sense of the term. And so Kate, I'm going to actually give you the hammer.
Starting point is 00:30:48 Oh, no. And I think we might have a little board underneath the table here. We're going to place the dinosaur on a board. Kate, would you consider destroying Samantha? No. It's just a machine. I only make other people do that. I don't do it myself. You wouldn't even consider harming the dinosaur?
Starting point is 00:31:12 Well, so my problem is that I already know the results of our research and that would say something about me as a person, so I'm going to say no, I'm not willing to do it. Kate Darling is a research specialist at the MIT Media Lab. When we come back, I'll ask her about the research she references, in which she asked volunteers to smash a robot dinosaur. Welcome back to Hidden Brain, I'm Shankar Vedantam. We're discussing our relationships with technology, specifically robots, with Kate Darling, a researcher from MIT.
Starting point is 00:31:57 She joined us before a live audience at the Aspen Ideas Festival. A couple of years ago, Kate conducted an experiment that says a lot about how humans tend to respond to certain kinds of robots. Tell me about the experiment. So you had volunteers come up and you basically introduced them to these lovable dinosaurs, and then you gave them a hammer like this and you told them to do what. Well, so, okay, so this was the workshop part that we used the dinosaurs for. They're a little too expensive to do an experiment with 100 participants.
Starting point is 00:32:27 So the workshop that we did in a non-scientific setting, we had five of these robot dinosaurs. We gave them to groups of people and had them name them, interact with them, play with them. We had them personify them a little bit by doing a little fashion show with a fashion contest. And then after about an hour, we asked them to torture and kill them. And we had a variety of instruments. We had a hammer, a hatchet, and I forget what else. But even though we tried to make it dramatic, it turned out to be a little bit more dramatic
Starting point is 00:33:02 than we expected it to be, and they really refused to even hit the things. And so we had to kind of start playing mind games with them, and we said, okay, you can save your group's dinosaur if you hit another group's dinosaur with a hammer. And they tried, and they couldn't do that either. This one woman was standing over the thing trying, and she just couldn't, she ended up petting it instead. And then finally we said okay well we're gonna destroy all of the robots unless someone takes a hatch it to one of them. And finally someone did. Wait so you said unless one of you kills one of them we are gonna kill all of them? Yeah I think this might have
Starting point is 00:33:40 been my partner's idea. So I did this with a friend named Hannes Gassert. We did this at a conference called Lift in Geneva. And we had to improvise because people really didn't want to do it. So we threatened them. And I think she clearly doesn't want you to harm her. Yeah, clearly, clearly. So what do you think is going on?
Starting point is 00:34:00 I mean, at a rational level, the dinosaur obviously is not alive. Why do you think we have such reluctance to harming the dinosaur? In fact, I might have the battery removed so the dinosaur stops making noise. OK. Well, I mean, it behaves in a really life-like way.
Starting point is 00:34:18 I mean, we have over a century of animation expertise in creating compelling characters that are very lifelike, that people will automatically project life onto. I mean, look at Pixar movies, for example. It's incredible. And I know that a lot of social roboticists actually work with animators to create these compelling characters. And so, you know, it's very hard to not see this as some sort of living entity, even though,
Starting point is 00:34:47 you know, perfectly well, that it's just a machine because it's moving in this way that we automatically subconsciously associate with states of mind. And so I just think it's really uncomfortable to people, particularly for robots like this that can display, you know, a simulation of pain or discomfort to have to watch that. I mean, it's just not comfortable. What did you find in terms of who was willing to do it and who wasn't? I mean, when you looked at the people who are willing
Starting point is 00:35:15 to destroy a dinosaur, a dinosaur like the Pleo, you found that there were certain characteristics that were attached to people who were more or less likely to do the deed. So the follow-up study that we did, not with the dinosaurs, we did with hex bugs, which are a very simple toy that moves around like an insect. And there we were looking at people's hesitation to hit the hex bug and whether they would hesitate more if we gave it a name, and whether they would hesitate more if they had natural tendencies for empathy, for empathic concern.
Starting point is 00:35:49 And we found that people with low empathic concern for other people, they didn't much care about the hex bug and would hit it much more quickly. And people with high empathic concern would hesitate more, and some even refused to hit the hex bugs. So in many ways what you're saying is that potentially the way we relate to these inanimate objects might actually say something about us at a deeper level than just our relationship to the machine.
Starting point is 00:36:17 Yes, possibly. I mean, we know now or we have some indication that we can measure people's empathy using robots, which is pretty interesting. You know, my colleagues and I were discussing ahead of this interview, whether you would actually destroy the dinosaur. And we were torn because we said on the one hand, you of all people should know that these are just machines, and that it's an irrational belief to project life-like values on them. But on the other hand, I said, you know, it's really unlikely she's going to do it because she's going to look like a really bad person if she smashes the dinosaur in front of 200 people.
Starting point is 00:36:53 I mean, I don't know if you've been watching Westworld at all, but the people who don't hesitate to shoot the robots, they seem pretty callous to us. It's, and I think maybe there is something to it. Of course, we can rationalize it. Of course, if I had to, I could take the hammer and smash the robot, and I wouldn't have nightmares about it. But I think that perhaps turning off that basic instinct to hesitate to do that, you know, I think overriding it might be more harmful than just going with it.
Starting point is 00:37:32 I want to talk about the most important line we draw between machines and humans, and it's not intelligence but consciousness. I want to play you a little clip from Star Trek. Now tell me, Commander, what is Data? I don't understand. What is he? A machine. Is he? Are you sure? Yes.
Starting point is 00:37:49 You see, he's met two of your three criteria for sentience, so what if he meets the third? Consciousness in even the smallest degree. What is he then? I don't know. Do you? Do you? So this has been a perennial concern in science fiction,
Starting point is 00:38:02 which is the idea that at some point, machines will become conscious and sentient. And very often, it's in the context of, the machines will rise up and harm the humans and destroy us. But as I read your research, I actually found myself thinking, is our desire to believe that the machines can become conscious, actually just an extension of what we've been talking about the last 20 minutes, which is we project sentience onto machines all the time.
Starting point is 00:38:26 how do robots fit into our lives when we perceive them as conscious, because I think that's when it starts to get morally messy, and not when they actually inherently have some sort of consciousness. If humans have a tendency to anthropomorphize machines, to see them as human, it isn't surprising that
Starting point is 00:39:05 we're also willing to bring all the biases we have toward our fellow human beings into the machine world. Many of the intelligent assistants being built by major companies, Siri or Alexa, are being given women's names. Many of the genius machines are often given men's names, Hal or Watson. Now, you can say, Siri and Alexa aren't people. Why should we care? Why should we care if people sexually harass their virtual assistants, as has been shown to sometimes happen?
Starting point is 00:39:33 MIT's Kate Darling says, we should care, because the way we treat robots may have implications for the way we treat other human beings. It might. We don't know, but it might. And one example with the virtual assistance you just mentioned is children. So parents have started observing, and this is anecdotal,
Starting point is 00:39:56 but they've started observing that their kids adopt behavioral patterns based on how they're interacting with these devices and how they're conversing with them. And there are some cool stories. Like there was a story in the New York Times a few years ago where a mother was talking about how her autistic son had developed a relationship with Siri, the voice assistant. And she said, this was awesome because Siri is very patient.
Starting point is 00:40:24 She will answer questions repeatedly and consistently. And apparently this is really important for autistic kids. But also because her voice recognition is so bad, he learned to articulate his words really clearly and it improved his communication with others. Now, that's great, but these things aren't designed with autistic kids in mind, right? That's kind of more of a coincidence than anything.
Starting point is 00:40:45 And so there are also perhaps some unintended effects that are more negative. And so one guy wrote a blog post a while back where he said, Amazon's Echo is magical, but it's turning my child into an echo, because Alexa doesn't require please or thank you or any of the standard politeness that you want your kids to learn when they're conversing and when they're demanding things of you.
Starting point is 00:41:11 So it starts there, but I think that as this technology improves and gets better at mimicking real conversations or life-like behavior, you have to wonder to what extent that gets muddled in our subconscious, and not just in children's subconscious, but maybe even in our own. Do you think it's a coincidence that most of the virtual assistants are given female names and female identities?
Starting point is 00:41:34 I think it's a combination of whatever market research, but also just people not thinking. I mean, I visited IBM Watson in Austin, and there's a room that you can go into and you can talk to Watson and he has this deep booming male voice and you can ask questions. And at the time I went there, there was a second AI in the room that turned on the lights and greeted the visitors and that one had a female voice. And I pointed that out, and it seemed like they hadn't really considered that. So it's a mixture of people thinking, oh, this is going to
Starting point is 00:42:11 sell better, and people just not thinking at all, because the teams that are building this technology are predominantly young, white, and male. And they have these blind spots where they don't even consider what biases they might perpetuate through the design of these systems. So you're sometimes called a robot ethicist, and you've sometimes said we might need to establish
Starting point is 00:42:31 a limited legal status for robots. What do you mean by that? So yeah, it's a little bit of a provocation, but my sense is that if we have evidence that behaving violently towards very life-like objects not only tells us something about you as a person but can also change people and desensitize them to that behavior in other contexts, so if you're used to kicking a robot dog, are you more likely to kick a real dog, then that might actually be an argument if that's
Starting point is 00:43:04 the case to give robots certain legal protections the same way that we give animals protections, but for a different reason. We like to tell ourselves that we give animals protection from abuse because they actually experience pain and suffering. I actually don't think that's the only reason we do it, but for robots, the idea would be not that they experience anything, but rather that it's desensitizing to us. And it has a negative effect on our behavior
Starting point is 00:43:32 to be abusive towards the robots. So here's a thing that's worth pondering for a moment. If you hear, for example, that someone owns a bunch of chickens on their farm, right? So it's their farm, their chickens, they own the chickens. And they're really mistreating the chickens, torturing them, harming them.
Starting point is 00:43:50 You could sort of make a property rights argument and say they can do whatever they want with their property. But I think many of us would say, even though the chicken belongs to you, there are certain things you can and cannot do with the chicken. And I'm not sure it's just about our concern that if you mistreat the chicken, that means you will turn into the kind of person who might mistreat other people,
Starting point is 00:44:11 there is sort of a level, there's a certain moral level at which I think the idea of abusing animals is offensive to us. And I'm wondering whether the same thing is true with machines as well, which is it's not just the case that it might be that people who harm machines are also willing to harm humans, but just the act of harming things that look and feel and sound sentient is morally offensive in some way.
Starting point is 00:44:33 Yeah, so I think that's absolutely how we've approached most animal protections, because it's also... it's very clear that we care more about certain animals than others and not based on any biological criteria. So I think that we just find it morally offensive, for example, to torture cats or, you know, in the United States, we don't like the idea of eating horses, but in Europe, they're like, what's the difference between a horse and a cow? They're both delicious.
Starting point is 00:45:00 So that's definitely how we tend to operate and how we tend to pass these laws. And I don't see why that couldn't also apply to machines once they get to a more advanced level where we really do perceive them as lifelike and it is really offensive to us to see them be abused. The devil's advocate side of that argument, of course, is that would people then say pressing a switch and turning off a machine, that's unethical because you're essentially killing the robot?
Starting point is 00:45:30 But we don't protect animals from being killed. We just protect them from being treated unnecessarily cruelly. So I actually think animal abuse laws are a pretty good parallel here. You mentioned Westworld some moments ago, and I want to play you a clip from Westworld. For those of you who haven't seen Westworld, humans interact with robots that are extremely lifelike, so lifelike that it's sometimes difficult to tell whether you're talking to a robot or you're talking to a human. In the scene that I'm about to play, a man named William interacts with a woman who may or may not be a robot. You want to ask, so ask.
Starting point is 00:46:11 Are you real? Well, if you can't tell, does it matter? So as I watched the scene and as I read your work, I actually had a thought and I want to sort of run this thought experiment by you, which is that, you know, on one end of the spectrum, we have these machines that are increasingly becoming lifelike, human-like, you know, they respond in very intelligent ways, they seem as if they're alive. And on the other hand, we're learning all kinds of things about human beings that show us that even the most complex aspects of our minds are governed by a set of rules and laws,
Starting point is 00:46:44 and in some ways, our minds function governed by a set of rules and laws, and in some ways our minds function a little bit like machines. And I'm wondering, is there really a huge distinction? Is it possible, is the real question not so much can machines become more human-like, but is it actually possible that humans are actually just highly evolved machines? I have no doubt that we are highly evolved machines. I don't think we understand how we work yet, and I don't think we're going to get to that understanding any time soon, but yeah, I think I do
Starting point is 00:47:11 think that we follow a set of rules and that we're essentially programmed. I don't distinguish between entities with souls and entities without souls. And so it's much easier for me to say, yeah, it's probably all the same. But I can see that other people would find that distinction difficult.
Starting point is 00:47:36 Do you ever talk about this? Do you ever run this by other people? Say, do you tell your husband, for example, I like you very much, but I think you're a really intelligent machine that I love? I haven't explicitly said that to him, but... When you go home from this trip? Yeah, we'll see how that goes.
Starting point is 00:48:02 Kate Darling is a research specialist at the MIT Media Lab. Our conversation today was taped before a live audience at the Hotel Jerome in Aspen. Thank you so much. This week's show was produced by Rhaina Cohen, Tara Boyle, Renee Klahr, and Parth Shah. Our team includes Jenny Schmidt and Maggie Penman. NPR's Vice President for Programming is Anya Grundmann.
Starting point is 00:48:35 You can find photos and a video of Samantha, our Pleo dinosaur, on our Instagram page. We're also on Facebook and Twitter. If you enjoyed this week's show, please share the episode with friends on social media. I'm Shankar Vedantam, see you next week.
